Ayman H. Dorrah (ORCID: 0000-0001-8793-3994) & George V. Eleftheriades
Nature Communications, volume 12, Article number: 6109 (2021)
Emerging technologies such as 5G communication systems, autonomous vehicles and satellite Internet have led to a renewed interest in 2D antennas that are capable of generating fixed/scannable pencil beams. Although traditional active phased arrays are technologically suitable for these applications, there are cases where other alternatives are more attractive, especially if they are simpler and less costly to design and fabricate. Recently, the concept of the Peripherally-Excited (PEX) antenna array has been proposed, promising a sizable reduction in the active-element count compared with traditional phased arrays, albeit at the price of some constraints on the possible beam-pointing directions. Here, we demonstrate the first practical implementation of the PEX antenna concept. The proposed design is capable of generating single or multiple independently scannable pencil beams at broadside and tilted radiation directions, from a shared radiating aperture. The proposed structure is also easily scalable to higher millimeter-wave frequencies, and can be particularly useful in MIMO and duplex antenna applications, commonly encountered in automotive radars, among others.
The proliferation of innovations such as 5G communication systems, autonomous vehicles and satellite Internet has renewed the interest in highly directive 2D antennas for long-distance point-to-point communication and/or object detection. However, these 2D antennas are typically operated at millimeter-wave frequencies, which increases the cost of designing, fabricating, and deploying such antenna systems. For applications that require fixed-beam generation, parabolic reflector antennas are regarded as prime candidate solutions, owing to their directive beams and relatively wide operational bandwidth. Nevertheless, their inherently large size and heavy weight hinder some applications. Accordingly, it becomes more suitable to deploy lightweight printed-circuit-board (PCB) and waveguide antennas as proposed in refs. 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24. Notably, the majority of real-life applications, mandating portable and mobile operation, crucially require antennas capable of generating pencil beams that are not only highly directive but also scannable. From a technical standpoint, traditional active phased arrays can achieve such scannable beams, but at the expense of a high deployment cost.
The tradeoff between scannability and cost can be mitigated by deploying alternative 2D antennas that are simpler to design and less costly to fabricate than traditional phased arrays. For example, previous research efforts led to the introduction of concepts such as thinned and sparse arrays25,26,27,28,29,30,31. These special phased arrays reduce the number of active antenna elements required by leaving some antenna elements unexcited, or removing them completely from the grid, and in some extreme cases, breaking the periodicity of the 2D grid entirely. To design these lower-element-count phased arrays, various optimization algorithms are often employed. An alternative technique is to stack or interleave sub-antenna arrays as proposed in refs. 32,33,34,35,36,37. Overall, these approaches and others fail to achieve a sufficient reduction in the active-element count. Furthermore, a significant reduction of active elements using these techniques usually comes at a cost, such as a lower directivity, higher sidelobe levels, or a limited beam scanning range from broadside. In addition, some of these overlapped phased array techniques require complex feeding networks, which can be cumbersome35,36,37.
An interesting alternative antenna concept to phased arrays is the "Continuous Transverse Stubs (CTS) Array"38. This concept does not require phase shifters; instead, it typically relies on some relative mechanical rotation between its constituent components39. The CTS array usually comprises a parallel-plate waveguide perforated at the top with a 1D array of long slots, which are excited with a plane wave orthogonal to the long side of the slots. The slots radiate a pencil beam in free space through a traveling leaky-wave antenna (LWA) operation (series-fed operation). The generated pencil beam can be scanned in both elevation and azimuth along predefined contours, by mechanically changing the relative orientation between the incident plane wave and the 1D slots. This CTS array concept has been realized using a fully-metallic structure with three separate metal layers in ref. 39. These metal layers are implemented using contact-less gap waveguides, allowing the mechanical rotation of the top radiating layer relative to the bottom feeding layers, thus changing the relative orientation between the excited plane wave and the long slots. This leads to a continuous scanning of the generated pencil beam in both azimuth and elevation along a single predefined contour. Note that the long-slot radiating unit cell cannot achieve successful radiation when excited with a plane wave along its long edge, which limits the possible scanned contours from the structure. To achieve full-space scanning, it is required to mechanically rotate the three metal layers together, in addition to an independent relative rotation between them. It is worth highlighting that, in addition to the traveling-wave (series-fed) operation in ref. 39, it is possible to operate the CTS array in a parallel-fed fashion, which offers excellent H-plane scanning and very large operational bandwidths40. This parallel CTS antenna solution is used extensively in SatComm-on-the-move commercial systems, even though mechanical rotation is still needed to achieve full-space scanning.
On the other hand, numerous switched-beam antenna designs have been proposed41,42,43,44, which are operated in a similar manner to the CTS array, and are also able to combat the aforementioned tradeoff between scannability and cost. These antenna designs also eliminate the need for phase shifters, and are able to change the relative orientation between the excited plane wave and the radiating structure by using single-layer41,42 or multi-layer43,44 switched multi-port beamforming networks (lenses). These networks exhibit multiple switched-input ports situated along the focal plane/region of a lens such as a parabolic or Luneburg lens, and they leverage the Fourier-transform properties of the lens to excite a LWA structure with switched plane waves, by activating the different input ports. Thus, this LWA structure is operated using a switched-beam operation, and each input port corresponds to a discrete pencil beam in space situated along predefined contours. One difficulty with these switched-beam antennas is that they cannot scan the generated pencil beams continuously, which is the price of the switched-beam operation, and usually the 3-dB beamwidth of the generated beams needs to be carefully optimized for an overlapping spatial coverage. This becomes more challenging when very narrow beams are required. Another potential drawback is that some lens designs such as parabolic reflectors are inherently constrained to reduced spatial scanning (reduced angular coverage), due to eventual aberrations and spillover loss from feed elements displaced farther from the focal point. Also, these switched-beam antennas are typically not designed for broadside operation, which is inconvenient in some applications. More importantly, the LWA unit cells in refs. 41,42,43,44 are only excited along a single side of the LWA, and are not excited from the orthogonal side (although it is possible in principle), which limits the number of possible scan contours from these switched-beam antennas. Thus, a LWA unit cell design that is optimized for broadside operation, while allowing excitation from the two orthogonal sides of the LWA, is highly desirable, and will increase the number of possible scan contours from the LWA, effectively constituting multiple antennas in one shared radiating aperture.
In a nutshell, the aforementioned CTS and switched-beam antennas either require mechanical rotation or input-port switching to change the relative orientation between the excited plane wave and the slot array. Recently, a remarkable phased-array alternative concept—the Peripherally-Excited (PEX) Antenna Array45,46,47,48—has been proposed, which scans the excited plane wave inside the parallel-plate waveguide in a completely different manner. The PEX antenna concept stems from the Huygens' Box structure49,50,51,52,53, which enables the excitation of plane waves inside a closed cavity along arbitrary directions. In the PEX antenna concept, the Huygens' box is made leaky by appropriately perforating its top plate, and exciting it with an underlying electronically-steered plane wave. These plane waves are generated by peripheral Huygens' sources situated along the edges (sides) of the radiating aperture, according to the Huygens' equivalence principle. The Huygens' box concept has been experimentally validated previously49,50,51,52, where it has been demonstrated that plane waves can be excited along arbitrary directions inside a fully-closed metallic cavity, using peripherally-placed Huygens' sources. This is quite a remarkable feat, as metallic cavities inherently support only standing waves; the PEX array concept leverages these traveling plane waves and uses them to excite a radiating leaky-wave structure, generating electronically-steerable pencil beams.
The PEX array achieves a sizable reduction in the active-element count compared to traditional phased arrays, especially for larger aperture sizes45,46, as it requires active antenna elements solely along the edges (sides) of the PEX cavity. The radiating aperture itself is merely filled by a periodic arrangement of passive antenna elements, and the structure is operated as a LWA. This significantly reduces the active-element count compared to conventional phased arrays, since the active-element count now scales with the perimeter of the radiating aperture, instead of its area. This reduction in the active-element count becomes more pronounced as the size of the radiating aperture gets larger, i.e. when high directivity pencil beams are required. In addition, the proposed PEX unit cell has been designed and optimized with 2D plane-wave excitation in mind, hence, it can be operated from all its sides, leading to more scanned planes. The proposed design can also be leveraged to generate multiple pencil beams from the PEX array simultaneously, thus mimicking the role of multiple independent antennas sharing the same radiating aperture, which saves valuable real estate. The implications of this antenna are far-reaching, as it can potentially be deployed in multiple-beam (MIMO) and duplex applications with simultaneous transmit and receive operation from the same antenna (not necessarily from the same direction).
This paper introduces the first experimental demonstration of the PEX antenna concept, while highlighting several desirable features and capabilities of the proposed antenna. This paper tackles many of the implementation challenges for realizing the PEX array, namely, implementing the PEX array using a low-cost and low-profile manufacturing process such as printed-circuit boards (PCBs), designing appropriate active (Huygens') sources to be used as peripheral sources inside these PCBs, introducing suitable electronic phase-shifting PCB sub-systems, suppressing the mutual coupling between the closely-spaced peripheral sources, and designing a radiating unit cell that is capable of radiating at broadside and tilted angles successfully without exhibiting any open bandgaps, among other challenges. Our proposed antenna can be easily scaled to higher millimeter-wave frequencies, as discussed later. In the proposed implementation, a Huygens' source is constructed from a coaxial feed backed by an array of metallic vias, which is entirely compatible with standard PCB fabrication, and can be easily embedded inside commercial two-layer dielectric substrates. Furthermore, the proposed PEX unit cell is capable of generating electronically-steered pencil beams at broadside and tilted radiation directions, by adopting techniques for closing the bandgap of broadside leaky-wave antennas54,55. In the following "Results" section, we describe the PEX antenna array concept and its relation to the Huygens' box. Additionally, we propose a practical peripheral Huygens' source implementation. Then, we present a specially-engineered radiating unit cell design that is capable of achieving successful radiation at broadside and tilted angles. After that, we show a practical implementation of the PEX antenna design using that unit cell, and the resulting structure is easily scalable to higher millimeter-wave frequencies. We fabricate and experimentally test a prototype of the antenna design and demonstrate its versatility and practicality. In the "Discussion" section, we conclude the paper with some observations and remarks, including how to extend the scan range of the PEX array, beyond its predefined scanned planes, by mechanical rotation or by employing tunable substrates. In the "Methods" section, we describe the main methods and procedures of designing, simulating, fabricating and testing the proposed PEX antenna array. Furthermore, we provide additional information pertaining to the phase-shifting feeding network and the frequency response of the PEX array in the Supplementary Notes.
Schelkunoff's equivalence principle
The peripherally-excited (PEX) antenna array stems from the Huygens/Schelkunoff equivalence principle. In simple terms, the equivalence principle states that the electromagnetic wave in a given region (volume) is unique and is solely determined by the tangential electric and magnetic fields, and the electric and magnetic surface currents along the boundary surface enclosing that region56,57. A particular case of interest here is the two-dimensional (2D) variation of Schelkunoff's equivalence principle (see Fig. 1a), where electromagnetic waves \(\overline{\mathbf{E}}\)-\(\overline{\mathbf{H}}\) are defined inside a surface Si that is outlined by a closed contour C. External to this contour is a surface So, which is free from any electromagnetic waves, and is conveniently filled with a perfect-electric conductor (PEC). To realize the discontinuous fields across the contour boundary, only a magnetic surface current \(\overline{\mathbf{M}}_{\mathrm{s}}\) needs to be impressed along the contour C. The PEC region negates the need for any electric surface currents \(\overline{\mathbf{J}}_{\mathrm{s}}\) along the contour C, as \(\overline{\mathbf{J}}_{\mathrm{s}}\) is shorted by the PEC. The contour C can exhibit any outline, shape and size, and it may be polygonal or curvilinear. For a vertically polarized electromagnetic wave, this setup can be realized in practice using two closely-spaced PEC planes (a parallel-plate waveguide with subwavelength thickness). Then, the contour C defines where the PEC side walls are placed, along which a magnetic surface current \(\overline{\mathbf{M}}_{\mathrm{s}}\) is impressed.
Fig. 1: PEX array concept.
a A general depiction of Schelkunoff's equivalence principle with magnetic surface currents lining the periphery of a region surrounded by a perfect-electric conductor (PEC). The electromagnetic fields inside the region are solely determined from the tangential magnetic surface currents on the Contour C. b A general illustration of the Huygens' Box, which is a practical implementation of Schelkunoff's equivalence principle using a metallic cavity. c A general schematic depiction of the PEX antenna array, which is essentially a Huygens' Box with additional passive antenna elements (slots) on the top side, allowing the plane waves underneath to radiate. d Side view of the proposed peripheral PEC-backed effective magnetic current sources which inject a circulating electric current into the side walls of the metallic cavity. They are created using a coaxial feed, and are used to realize the peripheral Huygens' sources required for the PEX cavity. (Dimensions: p = 7.15 mm, d = 14.3 mm, ϵr = 2.2, dp = 5.5 mm, tp = 1.4 mm, and h = 1.575 mm on a Rogers RT/duroid 5880 1oz substrate).
Huygens' Box
The resulting Huygens' Box structure is a thin flat metallic cavity with PEC side walls that are lined with effective magnetic surface currents (effective Huygens' sources) (see Fig. 1b)45,46,47,48,49,50,51,52,53. The magnetic surface currents can be effectively realized using an array of electric current sources that are separated by a distance p, placed along the edges (sides) of the metallic cavity. The separation distance p is ideally chosen to be smaller than or equal to λ/2, to satisfy sampling theorem requirements45,46,47,48,49,50,51,52,53, where λ is the wavelength inside the metallic cavity. This allows the Huygens' box to excite any electromagnetic wave distribution within the cavity, as long as the electromagnetic wave exhibits a vertically polarized electric field and is a solution to Maxwell's equations, such as a single or multiple propagating plane waves45,46,47,48,49,50,51,52,53, even though plane waves do not belong to the inherent modal set of typical metallic cavities.
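To put the λ/2 sampling condition in numbers, a minimal sketch (an illustration only, not from the original work, using the 13.1 GHz operating frequency adopted later in the paper and the ϵr = 2.2 substrate quoted in Fig. 1d) computes the maximum allowed peripheral-source spacing:

```python
import math

def max_source_spacing(freq_hz, eps_r):
    """Upper bound on the peripheral-source spacing p: half the wavelength inside the cavity."""
    c = 299_792_458.0                                # speed of light in vacuum (m/s)
    lam_cavity = c / (freq_hz * math.sqrt(eps_r))    # wavelength in the dielectric-filled cavity
    return lam_cavity / 2.0

p_max = max_source_spacing(13.1e9, 2.2)
print(f"p must satisfy p <= {p_max * 1e3:.2f} mm")   # ~7.7 mm; the quoted p = 7.15 mm complies
```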
PEX antenna array
The PEX antenna array is a Huygens' box with perforations or slots (passive antenna elements) on the top side of the cavity (see Fig. 1c). The perforations allow the excited plane wave(s) underneath to radiate one or more pencil beams to free space, similar to traveling-wave (leaky-wave) antennas. The PEX array can be realized using a double-sided printed-circuit board (PCB) that forms the top and bottom metallic plates, and an array of metallic vias that realizes the metallic side walls. In turn, the magnetic surface currents can be implemented using coaxial ports as proposed in Fig. 1d. Each coaxial feed injects a circulating electric current Ie along the metallic side walls, which realizes an effective magnetic surface current \(\overline{\mathbf{M}}_{\mathrm{s}}\) as required. The close placement of these coaxial feeds to the metallic side walls constitutes a major challenge, as PECs tend to cancel any tangential electric currents placed substantially close to them. As a result, the feed can undergo a significant impedance mismatch which compromises its ability to inject power into the PEX cavity. Hence, the distance between the coaxial feeds and the metallic side walls dp is carefully chosen, and is set to more than a quarter of the wavelength (λ/4), making the current loop formed slightly smaller than the wavelength (λ). This allows the "scattering" from the electric currents on the metallic side walls to be in phase with that of the coaxial feed, preventing any impedance mismatch problems, which facilitates the injection of power into the PEX cavity. More details on the reflection, coupling and transmission response between the various sides of the PEX array are discussed in Supplementary Note 2.
Accidental degeneracy
The PEX antenna array requires periodic perforations (passive antenna elements) on the top side of the PEX cavity to allow the plane wave(s) underneath to radiate pencil beam(s) to free space. The perforations should ideally enable radiation along broadside and other tilted directions in a traveling-wave (leaky-wave) manner. However, for such periodic structures, an open bandgap commonly emerges between the relevant 3–4 eigenmodes in the dispersion relation23,54,55,58, and results in a frequency range in the dispersion relation that is not covered by any eigenmode solution. Whenever a wave is excited in the frequency range of the open bandgap, it is not allowed to propagate inside the structure, and is entirely reflected back towards the source of excitation. Such an open bandgap typically exists at the Γ-point in the Brillouin diagram, which corresponds to the broadside radiation direction. Hence, successful radiation at and around broadside is quite a challenging endeavor. To overcome this challenge, the bandgap can be closed by achieving accidental degeneracy between the eigenmodes at the Γ-point in the dispersion relation23,54,55,58. This is achieved by manipulating the eigenmodes of the periodic structure, and forcing them to coexist at the Γ-point at exactly the same resonance frequency. For a traveling-wave antenna, the eigenmodes are also required to have a balanced radiation strength (similar Q-factors) to guarantee the accidental degeneracy of the eigenmodes, and the complete closure of the bandgap at the Γ-point23,54,55. Note that the Q-factor in this case is directly related to the leakage constant (α) of the unit cell, where a lower Q-factor corresponds to a higher leakage constant, and more radiation from the individual unit cells.
Radiating unit cell with closed bandgap
The radiation from the proposed unit cell is achieved by cross-shaped slots (see Fig. 2a), and the dimensions of these slots control the leakage constant and the Q-factor. These cross slots have been chosen for their immunity against the typical scan blindness of other slot shapes, where scan blindness in this context refers to the inability of some slots to achieve significant radiation when the traveling wave is oriented in a specific way relative to the slot. As a result, these slots achieve successful radiation at broadside and tilted directions while maintaining a high level of polarization purity for the generated pencil beam(s) (i.e., low cross-polarization levels). More importantly, four semi-square-shaped slots are included to realize accidental degeneracy of the eigenmodes at the Γ-point, and close the bandgap in the dispersion relation. This is confirmed by the 2D dispersion relation depicted in Fig. 2b, and the 2D dispersion contours depicted in Fig. 2c, which both show that all the eigenmodes are coexistent at the Γ-point. This clearly demonstrates the successful closure of the bandgap at 13.1 GHz, and suggests that broadside radiation is possible using this specially engineered unit cell. The frequency of operation (13.1 GHz) has been chosen for this proof-of-concept demonstration as a good compromise: it is high enough for the resulting 2D array design shown later to be physically small and manageable, while still being low enough to allow the use of inexpensive commercial connectors, cables, and terminations. The corresponding electric field distributions of the individual eigenmodes at the Γ-point (13.1 GHz) are shown in Fig. 2d–g. Noticeably, the electric field distributions around the cross-shaped slots are antisymmetric, especially for the 3rd and 4th eigenmodes, making the unit cell capable of achieving successful radiation (leakage) from the underlying electric field. More details about the process of designing and optimizing the unit cell, and closing the bandgap, are discussed in the "Methods" section.
Fig. 2: Unit cell simulation.
The specially engineered PEX array unit cell with a closed bandgap in the dispersion relation: a Top view of the unit cell showing the cross-shaped radiating slots, and the semi-square slots that are added to achieve accidental degeneracy between the eigenmodes, and close the bandgap at the Γ-point. b, c The corresponding full-wave simulated 2D dispersion relation and 2D dispersion contours showing a closed bandgap at the Γ-point. d–g The corresponding full-wave simulated electric field distribution (V/m) of the 1st, 2nd, 3rd, and 4th eigenmodes at the Γ-point, respectively. (Dimensions: w = 1 mm, l = 7 mm, S1 = 3.3 mm, S2 = 2.7 mm and unit cell size = 14.3 mm × 14.3 mm × 1.575 mm).
Analytical beam-pointing directions
To construct the PEX antenna array, the unit cells are placed in a 2D periodic square lattice surrounded by the active peripheral sources (see Fig. 1c). The passive unit cells are individually excited by arbitrary propagating plane waves generated from the peripheral sources and having the general wave vector (kxd, kyd), which provides the individual unit cells with appropriate excitations (with the correct weight and/or phase). The expected radiation direction from the unit cells can be simply calculated by phase matching the plane wave propagating underneath the radiating slots (kxd, kyd) with the radiated free-space wave, leading to the following expressions (see refs. 45,59 for the full derivation):
$$\sqrt{\epsilon_{\mathrm{e}}}\,\cos(\psi) = \sin(\theta)\cos(\phi) - \frac{\lambda_{\mathrm{o}}}{d}\,n, \qquad (1\mathrm{a})$$
$$\sqrt{\epsilon_{\mathrm{e}}}\,\sin(\psi) = \sin(\theta)\sin(\phi) - \frac{\lambda_{\mathrm{o}}}{d}\,m, \qquad (1\mathrm{b})$$
where λo is the free-space wavelength, ϵe is the effective relative permittivity of the unit cell, \(\psi = \tan^{-1}(k_{\mathrm{yd}}/k_{\mathrm{xd}})\) is the azimuthal orientation of the plane wave underneath the radiating slots measured from the x-axis, (n, m) are integer constants accounting for the possible Floquet modes that can be radiated from the structure, ϕ is the generated pencil beam azimuthal orientation measured with respect to the x-axis, and θ is the generated pencil beam tilted direction measured with respect to the z-axis (more details on the Theta-Phi (θ-ϕ) spherical coordinate system used are provided in Supplementary Note 4). These phase matching equations assume an infinitely periodic 2D array of unit cells that can support the excitation of Floquet modes; however, the equations remain accurate for relatively large radiating apertures. On the other hand, it is also possible to express the inverse equations that calculate the pencil-beam direction (ϕ, θ) as a function of the direction of the plane wave (ψ) inside the PEX cavity45,59:
$$\sin^{2}(\theta) = \epsilon_{\mathrm{e}} + \frac{\lambda_{\mathrm{o}}^{2}}{d^{2}}\left(n^{2} + m^{2}\right) + 2\sqrt{\epsilon_{\mathrm{e}}}\,\frac{\lambda_{\mathrm{o}}}{d}\left[n\cos(\psi) + m\sin(\psi)\right], \qquad (2\mathrm{a})$$
$$\tan(\phi) = \frac{\sqrt{\epsilon_{\mathrm{e}}}\,\sin(\psi) + \frac{\lambda_{\mathrm{o}}}{d}\,m}{\sqrt{\epsilon_{\mathrm{e}}}\,\cos(\psi) + \frac{\lambda_{\mathrm{o}}}{d}\,n}. \qquad (2\mathrm{b})$$
Note that for the preliminary structure shown in ref. 45, the unit cells comprise subwavelength square-shaped slots, which do not disturb the dispersion relation of the unit cells compared to a parallel-plate waveguide. Hence, the effective relative permittivity of the unit cells therein is merely that of the underlying dielectric medium (ϵr). However, for the proposed PEX unit cell in Fig. 2a, the perturbations to the unit cells are significant, and the dispersion relation is quite different from that of an unperturbed parallel-plate waveguide. Thus, an effective relative permittivity (ϵe) needs to be defined, and it can be approximated from \(\epsilon_{\mathrm{e}} = \lambda_{\mathrm{o}}^{2}/\lambda_{\mathrm{PEX}}^{2}\), where λPEX is the wavelength within the PEX cavity. This means that the effective relative permittivity (ϵe) depends on the relative permittivity of the dielectric filling the PEX cavity (ϵr), the frequency of operation, and the size and shape of the slot arrangement used (see Fig. 2a). If the unit cell is assumed to be operated around the Γ-point of the dispersion relation (13.1 GHz), λPEX is approximately equal to the periodicity of the unit cells (the size of the unit cells d), and the effective relative permittivity (ϵe) can be estimated from \(\epsilon_{\mathrm{e}} = \lambda_{\mathrm{o}}^{2}/d^{2}\). Using this value for ϵe, the previous expressions simplify to:
$$\sin^{2}(\theta) = \frac{\lambda_{\mathrm{o}}^{2}}{d^{2}}\left[1 + n^{2} + m^{2} + 2n\cos(\psi) + 2m\sin(\psi)\right], \qquad (3\mathrm{a})$$
$$\tan(\phi) = \frac{\sin(\psi) + m}{\cos(\psi) + n}. \qquad (3\mathrm{b})$$
The estimated pencil-beam directions (ϕ, θ) that are possible for the unit cell depicted in Fig. 2a are plotted later in Fig. 3b, considering the different Floquet modes (0, ±1), (±1, 0) and (±1, ±1) that are within the visible radiation region. To obtain these beam-pointing angles in Fig. 3b, all the possible plane wave directions (ψ) have been considered \((\psi \in [0, 2\pi])\). The corresponding grating-lobe limit for the proposed PEX unit cell is also plotted in Fig. 3b using a black dotted circle. Inside this circle, all the shown beam-pointing directions are realizable without the generation of any grating lobes, as they satisfy the unimodal condition. Note that, because of the symmetric nature of the proposed unit cell, the generated pencil beams can scan along multiple orthogonal predefined contours, as demonstrated by the lines inside the grating-lobe limit circle. On the other hand, the empty portions of the visible region in Fig. 3b correspond to directions where the generated beam cannot be pointed (at the Γ-point frequency). Thus, for extremely large antennas with very narrow beams, the portion of the visible region that can be practically covered by the generated beams will be reduced, because of the narrower beamwidth. Notably, the possible radiation beam-pointing directions (ϕ, θ) are constrained, as the generated pencil beams are scanned only along predefined contours; nevertheless, the scanning range is considered acceptable, especially given the achieved range of tilted angles.
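As an illustration of Eq. (3), the following minimal Python sketch (not from the original work) evaluates the beam-pointing direction for a given in-cavity plane-wave angle ψ and Floquet mode (n, m), assuming the Γ-point approximation ϵe ≈ λo²/d² and the unit-cell period d = 14.3 mm quoted in Fig. 2:

```python
import math

def beam_direction(psi_deg, n, m, freq_hz=13.1e9, d=14.3e-3):
    """Eq. (3): beam direction (theta, phi) in degrees for an in-cavity plane wave at
    angle psi and Floquet mode (n, m). Returns None when the mode is not radiated
    (sin^2(theta) falls outside [0, 1])."""
    lam_o = 299_792_458.0 / freq_hz          # free-space wavelength
    r = lam_o / d                            # lambda_o / d (~1.6 at 13.1 GHz)
    psi = math.radians(psi_deg)
    sin2_theta = r**2 * (1 + n**2 + m**2 + 2*n*math.cos(psi) + 2*m*math.sin(psi))
    if not 0.0 <= sin2_theta <= 1.0:
        return None                          # evanescent Floquet mode: no radiated beam
    theta = math.degrees(math.asin(math.sqrt(sin2_theta)))
    phi = math.degrees(math.atan2(math.sin(psi) + m, math.cos(psi) + n))
    return theta, phi

print(beam_direction(0.0, -1, 0))    # psi = 0 (side A in phase) -> theta = 0, i.e. broadside
print(beam_direction(20.0, -1, 0))   # psi = 20 deg -> theta ~ 34 deg tilt
```

Sweeping ψ over [0, 2π] for the Floquet modes listed above reproduces the predefined scan contours of Fig. 3b.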
Fig. 3: PEX array simulation.
Full-wave simulation of the 15 × 15 PEX antenna array: a Top view of the PEX array which is etched on a Rogers RT/duroid 5880 1oz substrate, showing the different sides of the array A, B, C and D, and the coordinate system used. (Dimensions = 245.8 mm × 245.8 mm × 1.575 mm). b Comparison between the analytical (solid) and the full-wave simulated (red dots) pencil-beam directions (ϕ, θ) at 13.1 GHz for multiple plane wave directions (ψ) exciting the PEX array, both showing a similar scanning behavior along predefined contours with a versatile range of tilted angles. c, d The full-wave simulated realized gain (dB) patterns plotted along the E-plane and H-plane, respectively, at 13.1 GHz, for various values of (ψ), when only side A is excited. Successful electronic scanning of the generated pencil beams is observed.
PEX array simulation
Full-wave simulations of a 15 × 15 PEX array are performed using the structure shown in Fig. 3a, demonstrating the capabilities of the proposed PEX antenna (details on the full-wave simulation setup are discussed in the "Methods" section). If the ports along a single side of the PEX array are all excited with an equal amplitude and a progressive phase shift, a single linearly polarized pencil beam is generated, with polarization along the propagation direction of the excited plane wave underneath the radiating slots. The generated pencil beam can be electronically scanned towards broadside and other tilted angles by simply controlling the amount of the applied progressive phase shift, i.e., the plane wave propagation direction (ψ). Based on this concept, the full-wave simulated beam-pointing directions achieved by the PEX array at the center frequency of 13.1 GHz are shown in Fig. 3b, showing excellent agreement with the analytical expressions in Eq. (3). The center of this plot corresponds to broadside radiation, which is achieved when the ports along one side (such as side A) are excited in phase, and the remaining sides B, C and D are left unexcited. This excites a plane wave along the x-axis direction (ψ = 0), generating a broadside pencil beam as depicted in Fig. 3c, d. In these plots, the E-plane contains the direction of the peak of the beam and the electric field, whereas the H-plane contains the direction of the peak of the beam and the magnetic field (more details in Supplementary Note 4). For the radiation patterns plotted along the E-plane in Fig. 3c, an angle (γ) is defined as shown in Supplementary Note 4, with zero along the direction of the peak of the pencil beam. The generated broadside pencil beam achieves a realized gain of 25.1 dB, with an aperture efficiency of 84.6%, which is quite high for a traveling-wave antenna. Notably, the aperture efficiency compares the directivity of the beam to the maximum directivity achieved by a uniformly radiating aperture with the same physical size56; thus, antennas that achieve such high aperture efficiencies guarantee the generation of narrow beams from a compact antenna that does not require much real estate. Such antenna designs are highly desirable in various applications such as satellite communications, point-to-point communications, and autonomous vehicles. On the other hand, the radiation efficiency of the design is 77.2%, where the remaining power is lost to heat due to the metallic and dielectric losses. Overall, the simulation results confirm that the proposed peripheral excitations are well matched to the PEX cavity, and do not suffer from any current cancellation or mutual coupling problems.
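To make the progressive-phase excitation concrete, the sketch below (an illustration only, not the authors' feeding-network design) computes the per-port phase profile along side A that launches an in-cavity plane wave at angle ψ, assuming uniform amplitudes, the port spacing p = 7.15 mm from Fig. 1d, and the Γ-point effective permittivity ϵe ≈ 2.56:

```python
import math

def side_a_port_phases(psi_deg, num_ports=31, p=7.15e-3, freq_hz=13.1e9, eps_e=2.56):
    """Progressive phase profile (degrees, modulo 360) for the ports along one side,
    launching an in-cavity plane wave at angle psi from the x-axis.
    Sketch only: equal amplitudes and a simple k*p*sin(psi) phase step are assumed."""
    lam_o = 299_792_458.0 / freq_hz
    k = 2 * math.pi * math.sqrt(eps_e) / lam_o        # wavenumber inside the PEX cavity
    dphi = -k * p * math.sin(math.radians(psi_deg))   # phase step between adjacent ports (rad)
    return [math.degrees(n * dphi) % 360 for n in range(num_ports)]

print(side_a_port_phases(0.0)[:4])    # broadside: all ports in phase -> [0.0, 0.0, 0.0, 0.0]
print(side_a_port_phases(20.0)[:4])   # tilted beam: ~[0, 298, 237, 175] (progressive shift)
```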
From the total power excited into side A, 8.0% is reflected back to side A, 1.3% is coupled to side B, 1.3% is coupled to side D, and 44.4% is transmitted across the PEX array to side C. Needless to say, a bigger PEX array will exhibit less power lost to the resistive terminations along side C, albeit at the price of an accompanying reduction in the achieved aperture efficiency, as the PEX antenna is constructed from a uniform leaky-wave antenna. As a result, the proposed PEX antenna array experiences an inevitable tradeoff between aperture efficiency, radiation efficiency, and termination efficiency. The latter refers to any remaining unradiated power that is lost to the resistive terminations. To mitigate this tradeoff, it is possible to design a PEX antenna array with tapered radiation from the unit cells, which is a standard practice for leaky-wave antennas that aim at achieving high aperture and radiation efficiencies simultaneously.
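For orientation, a short illustrative bookkeeping of the quoted power breakdown (not part of the original analysis) shows how much of the input power actually remains in the radiating aperture:

```python
# Illustrative bookkeeping of the quoted side-A power budget (percent of input power).
reflected, coupled_b, coupled_d, transmitted_c = 8.0, 1.3, 1.3, 44.4
remaining = 100.0 - reflected - coupled_b - coupled_d - transmitted_c
print(f"power left in the aperture (radiated + ohmic/dielectric loss): {remaining:.1f}%")  # ~45%
# The 44.4% + 1.3% + 1.3% reaching sides B, C and D is absorbed by the resistive
# terminations; this is the portion that the termination efficiency accounts for.
```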
Additionally, tilted pencil beams are generated by exciting one side of the PEX array (such as side A) with a progressive phase shift, which excites a plane wave at a tilted angle with respect to the x-axis (ψ ≠ 0). Sample tilted pencil beams are generated as shown in Fig. 3c, d, from plane waves excited along (ψ = ±5°, ±10°, ±15°, and ±20°), respectively. Note that although the PEX array is designed for continuous beam steering by tuning the peripheral feeds, the performance is characterized for only a discrete set of descriptive cases for simplicity. The electric-field polarization of the generated beams is again along the propagation direction of the excited plane wave, i.e., approximately along the x-axis when only side A is excited, and the corresponding orientations of the E-plane and H-plane are as described in Supplementary Note 4. Notably, it is sufficient to solely excite side A for these directions of plane wave propagation (ψ), as the corresponding plane waves form a relatively small angle with the x-axis, and there is no need to invoke any excitations from sides B, C, or D. The antenna parameters of all these pencil beams are summarized in Table 1. It is observed that the generated pencil beams achieve acceptable levels of realized gain, X-pol, SLL, radiation and aperture efficiencies. Also, a breakdown of the different portions of power that reach the four sides of the PEX array when side A is excited is included in Table 1. This power breakdown does not include the effects of the feeding network that will later be used to excite the PEX array in the experiment. For the different cases, it is observed that a large portion of the incident power at side A reaches the opposite side C. Again, a larger PEX array will exhibit more radiation from the leaky wave, and less power will reach side C as a result; however, the aperture efficiency will inevitably drop as well (if the radiation is not tapered), as described earlier.
Table 1 Antenna parameters simulation. The full-wave simulated antenna parameters for a plane wave excited at various directions (ψ) inside the PEX array.
Notably, the proposed PEX antenna array achieves radiation by exciting a traveling leaky-wave mode. Hence, the generated pencil beams undergo a slight spatial scanning as the frequency of operation is changed, as demonstrated in more detail in Supplementary Note 2. The 1 and 3 dB bandwidths achieved by the generated pencil beams at the center frequency (13.1 GHz) for different plane-wave directions (ψ) are also shown in Table 1. For instance, at broadside, the generated pencil beam achieves a 2.5% 1 dB-bandwidth and a 4.5% 3 dB-bandwidth. This instantaneous bandwidth constitutes the actual useful bandwidth that can be used for communications, as it represents the frequency bandwidth where the beam scanning with frequency leads to a 1 or 3 dB attenuation. Note that this instantaneous bandwidth is limited by the spatial scanning of the generated pencil beams, and not beam degradation. The proposed PEX array can in principle be operated over a wider bandwidth while maintaining a directive beam, without beam degradation (as shown in Supplementary Note 2).
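Expressed in absolute terms (a simple back-of-the-envelope conversion, for illustration only), the broadside fractional bandwidths quoted above correspond to:

```python
# Absolute instantaneous bandwidths implied by the broadside fractional values at 13.1 GHz.
f0 = 13.1e9
print(f"1-dB bandwidth: {0.025 * f0 / 1e6:.0f} MHz")   # ~328 MHz
print(f"3-dB bandwidth: {0.045 * f0 / 1e6:.0f} MHz")   # ~590 MHz
```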
Notably, from the symmetry of the proposed PEX array, the additional possible beam-pointing directions, realized by exciting the other sides B, C, and D, can be inferred by analyzing the representative examples in Table 1, while cross-referencing with Eq. (3) and Fig. 3b.
Multiple-beam simulation
The principle of superposition can be leveraged by the PEX antenna array, where more than one plane wave is generated simultaneously inside the PEX cavity. This generates multiple pencil beams at broadside and/or tilted angles. For example, side A is excited by a plane wave at (ψ = 0°), and side B is excited by a plane wave at (ψ = 285°), which radiates two pencil beams as depicted in Fig. 4a, d, where one beam is pointed towards broadside, and the other beam towards a tilted angle. It is also possible to generate two beams that both point at tilted angles simultaneously. For instance, side A generates a plane wave at (ψ = − 10°), and side B generates a plane wave at (ψ = 280°), which radiate two tilted pencil beams as depicted in Fig. 4b, e. Superposition can be extended even further, where three or more pencil beams are generated. As a demonstration, side A generates a plane wave at (ψ = 0°), side B generates a plane wave at (ψ = 285°), and side C generates a plane wave at (ψ = 105°), which radiate three pencil beams as depicted in Fig. 4c, f. All of the multiple beams presented in Fig. 4 exhibit similar antenna parameters to those shown in Table 1.
Fig. 4: Multiple-beam simulation.
The normalized full-wave simulated radiation patterns for various values of (ψ), when both sides A and B (and C) of the 15 × 15 PEX array are excited simultaneously. a–c The full-wave simulated far-field 3D gain patterns for the cases: a ψ = 0° and 285°, b ψ = −10° and 280°, and c ψ = 0°, 285°, and 105°, plotted in linear scale (V/m) and normalized to an arbitrary value of 100. d–f The full-wave simulated far-field U-V gain patterns for the cases: d ψ = 0° and 285°, e ψ = −10° and 280°, and f ψ = 0°, 285°, and 105°, plotted in dB scale. All the beams can be independently scanned, and a multitude of radiation directions are achieved.
Note that one of the main challenges in achieving multiple-beam or duplex operation from any antenna is the mutual coupling that may occur between the different beams, which limits the radiation efficiency of the antenna and prevents successful duplex operation60. To minimize this unwanted mutual coupling, it is typically required that the transmitted and received beams exhibit a low beam-coupling factor, i.e., the beams are required to be orthogonal60. The proposed PEX antenna achieves this by exhibiting orthogonal polarizations for the beams generated from sides A and B, as the electric-field polarization of the radiated beams is along the propagation direction of the excited plane wave underneath the radiating slots (which is orthogonal for sides A and B). This results in a remarkably low mutual coupling between the orthogonal sides of the PEX array, and there are no coupling or degradation effects between the beams generated by sides A and B (see Table 1 and Supplementary Note 2), which enables the successful multiple-beam or duplex operation of the proposed antenna array. Thus, the proposed PEX array is capable of generating single and/or multiple pencil beams that are independently scanned along predefined contours with a versatile range of tilted angles.
More importantly, the results presented here correspond to a 15 × 15 PEX array, with 31 ports along each of the four sides of the array. For single-beam generation, only a single side needs to be excited for most of the radiation directions, which sufficiently excites the 225 radiating unit cells of the PEX array. Hence, the proposed PEX array achieves a sizable reduction in the active-element count, albeit at the cost of limited beam-pointing directions. The reduction in active-element count is more remarkable for larger PEX array sizes45. On the other hand, it is worth highlighting that the proposed PEX antenna array is a traveling-wave antenna; thus, when the frequency of operation is changed from the center frequency (13.1 GHz), the generated pencil beam(s) may slightly scan with frequency (according to the direction of the plane wave underneath the radiating slots and the 2D dispersion relation of the unit cell). The frequency response of the proposed PEX array is described in more detail in Supplementary Note 2.
PEX array experiment
A 7 × 7 PEX array prototype is fabricated using standard printed-circuit-board fabrication techniques. Notably, the size of the fabricated PEX array is 1/4 that of the full-wave simulated design. As a result, the measured gain values are expected to be at least 6 dB lower than the simulation results previously shown in Fig. 3. This smaller size is chosen to reduce the complexity and fabrication costs of both the PEX array and the required feeding network, for the proof-of-concept experimental demonstration presented here. A photo of the fabricated prototype is shown in Fig. 5a. The PEX array is experimentally characterized for both single and multiple beam generation modes, using a near-field scanning antenna characterization system as shown in Fig. 5b (details about the fabricated PEX antenna, the feeding network and the experimental procedure implemented are discussed in the "Methods" section). For single-beam generation, a single 1 × 16 feeding network is used to excite side A of the PEX array with a plane wave oriented along different angles (ψ) with respect to the x-axis. This generates pencil beams with different beam-pointing directions as illustrated in Fig. 6a, b, comparing the measured radiation patterns to those generated by a corresponding full-wave simulated 7 × 7 PEX array. Notably, the near-field scanning system uses an Az-over-El (Az/El) coordinate system, which is different from the Theta-Phi (θ-ϕ) system used in the full-wave simulation results61. It is possible to apply a series of coordinate system transformations and rotations to the measured results, and extract the E-plane and H-plane radiation patterns from them; however, this would involve many mathematical calculations, inevitably adding numerical errors to the measured results. Thus, it was preferred to plot the measured results directly using their native (Az/El) coordinate system, given that both the (θ-ϕ) and (Az/El) spherical coordinate systems are identical along the cardinal planes (x-z and y-z planes). In particular, for the x-z plane (ϕ = 0°) the elevation angle (θ) is identical to Az, whereas for the y-z plane (ϕ = 90°) the elevation angle (θ) is identical to El. Thus, both coordinate systems can be used interchangeably along or around the cardinal planes. Since the proposed PEX antenna mostly scans the generated pencil beams close to these cardinal planes, the measured and simulated results can be compared directly even though they are technically plotted using different coordinate systems.
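The cardinal-plane equivalence can be checked with a short sketch. The Az/El convention below is an assumption chosen to match the description in the text (Az measured in planes parallel to the x-z plane, El in planes through the y-axis, both zero towards the z-axis); the exact convention of the near-field system is described in Supplementary Note 4:

```python
import math

def theta_phi_to_az_el(theta_deg, phi_deg):
    """Convert a (theta, phi) direction to (Az, El) under the assumed
    azimuth-over-elevation convention described above (illustration only)."""
    th, ph = math.radians(theta_deg), math.radians(phi_deg)
    x = math.sin(th) * math.cos(ph)
    y = math.sin(th) * math.sin(ph)
    z = math.cos(th)
    az = math.degrees(math.atan2(x, z))   # angle in planes parallel to the x-z plane
    el = math.degrees(math.asin(y))       # angle in planes through the y-axis
    return az, el

print(theta_phi_to_az_el(30.0, 0.0))    # x-z plane: (30.0, 0.0) -> Az equals theta
print(theta_phi_to_az_el(30.0, 90.0))   # y-z plane: (0.0, 30.0) -> El equals theta
```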
Fig. 5: PEX array measurement.
a Top view of the fabricated 7 × 7 PEX antenna array prototype which is etched on a Rogers RT/duroid 5880 1oz substrate, showing the different sides of the array A, B, C and D, and the coordinate system used. (Dimensions = 131.4 mm × 131.4 mm × 1.575 mm). b A photo of the near-field scanning experimental setup used, showing the PEX prototype and the waveguide probe used to measure the near-field radiation from the antenna.
Fig. 6: Single-beam measurement.
a, b Comparison between the measured (solid) and full-wave simulated (dotted) realized gain patterns (dB) at 13.1 GHz, along the a Az and b El directions for the measurement, and along the a E-plane and b H-plane for the simulation, for various values of (ψ), when only side A is excited. Successful electronic scanning is observed for the generated pencil beams in the El plane. c, d Comparison between the measured (solid) and full-wave simulated (dotted) realized gain patterns (dB) at 13.1 GHz, along the c Az and d El directions for the measurement, and along the c H-plane and d E-plane for the simulation, for various values of (ψ), when only side B is excited. Successful electronic scanning is observed for the generated pencil beams in the Az plane.
The measured far-field radiation patterns, corresponding to the case when side A is excited, are presented in Fig. 6a, b. Note that the Az angles, in this case, are measured along planes parallel to the x-z plane with zero towards the z-axis, whereas the El angles are measured along the orthogonal planes that go through the y-axis with zero towards the z-axis (see Supplementary Note 4 for more details). Again, the electric-field polarization of the radiated beams is along the propagation direction of the excited plane wave, i.e., approximately along the x-axis. In any case, the PEX array is clearly able to generate independently-scannable pencil beams at broadside and other tilted angles along the El plane (y-z plane), when side A is excited. The corresponding maximum deviation in the measured peak gain of the scanned pencil beams is around 1.28 dB. On the other hand, the maximum tilted-angle scan range achieved in Fig. 6 is for the case ψ = ±20° and is around 33.5° away from broadside. The generated pencil beam was not scanned beyond that to avoid the generation of grating lobes, which often emerge for pencil beams generated at tilted angles that are bigger than a critical tilted angle (θc). For the PEX unit cell shown in Fig. 2a, the critical tilted angle can be calculated at 13.1 GHz from \(\theta_{\mathrm{c}} = \sin^{-1}\left(\lambda_{\mathrm{o}}/d - 1\right) = 37^{\circ}\)56. The dotted black circle in Fig. 3b corresponds to the calculated critical angle for the PEX unit cell, i.e., the grating-lobe limit. All the beam-pointing directions residing inside this circle can be achieved without the generation of grating lobes, as they satisfy the unimodal condition. Thus, this critical angle constitutes a practical limit on the maximum scan range possible from the proposed PEX antenna. Needless to say, these potential grating lobes can be entirely prevented by designing a PEX unit cell with a smaller physical size d, which in turn leads to a larger value for the critical tilted angle (θc), and a larger dotted circle in Fig. 3b. This would allow more beam-pointing directions to satisfy the unimodal condition, and the generation of pencil beams along more of the possible beam-pointing directions predicted in Fig. 3b, without the generation of any grating lobes whatsoever.
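The quoted critical angle follows directly from the unit-cell period; a minimal check (illustrative only):

```python
import math

# Grating-lobe (unimodal) limit for the unit cell of Fig. 2a at 13.1 GHz:
# theta_c = asin(lambda_o / d - 1)
lam_o = 299_792_458.0 / 13.1e9        # free-space wavelength, ~22.9 mm
d = 14.3e-3                           # unit-cell period
theta_c = math.degrees(math.asin(lam_o / d - 1.0))
print(f"critical tilt angle: {theta_c:.1f} deg")   # ~36.9 deg, i.e. the ~37 deg quoted above
```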
On the other hand, side B of the PEX array is excited with a similar 1 × 16 feeding network, which excites plane waves at various orientation angles with respect to the y-axis, and the polarization of the generated pencil beams is mostly orthogonal to the previous case of only exciting side A, i.e., almost along the y-axis. Again, the PEX array is capable of generating a pencil beam along broadside and other tilted angles, this time along the Az plane (x-z plane), as shown in Fig. 6c, d. The corresponding maximum deviation in the measured peak gain for the scanned pencil beams is around 2.84 dB. Although the proposed PEX antenna is symmetric, the beam scanning loss when side B is excited is significantly larger than that of side A. This discrepancy can be attributed to imperfections in the measurement setup, such as small unwanted bounces and reflections from the surfaces of the walls and tables surrounding the antenna, which are more severe farther away from broadside and may slightly differ for sides A and B.
Overall, it is observed that the measured beams are in good agreement with the full-wave simulation results, and they exhibit similar pencil-beam radiation angles to those calculated from the analytical expressions and predicted by the full-wave simulations. Nevertheless, there is a discrepancy between the measured and simulated peak gain values in Fig. 6, even though the results correspond to a PEX antenna with the same physical size. This discrepancy can be explained by observing the phase shifters' transmission response in Supplementary Note 1, where it is clear that the transmission amplitude changes as a function of the achieved phase shift. Hence, unlike the full-wave simulations, the fabricated PEX antenna is excited with progressively phase-shifted excitations with varying amplitudes. This causes a discrepancy between the measured and simulated peak gain values in Fig. 6. The maximum discrepancy between the measured and simulated peak gain when side A is excited is around 2.22 dB, and when side B is excited is around 3.13 dB. Needless to say, a better-optimized phase shifter design, with a more constant transmission amplitude for different phase shifts, would allow a closer agreement between the measured and simulated peak gain values. In any case, the measured peak directivity achieved by the PEX antenna prototype is between 19.8–22.1 dB for the different measured cases, which is in good agreement with the full-wave simulation results, and further validates the proposed PEX design (see Supplementary Note 2 for more details).
Furthermore, the measured pencil beams exhibit an average of 2.1% 1 dB-bandwidth and 7.8% 3 dB-bandwidth when side A is excited, and an average of 3.3% 1 dB-bandwidth and 8.2% 3 dB-bandwidth when side B is excited. It is observed that the measured bandwidths are slightly wider than the simulated values reported earlier in Table 1, even though the phase shifters used in the experiment have a frequency-dependent phase shift (see Supplementary Note 1), whereas the full-wave simulated PEX array is excited with a frequency-invariant progressive phase shift. This can be partially justified by the fact that the measured PEX array is 1/4 of the size of the full-wave simulated version; hence, the measured pencil beams exhibit a much wider beamwidth, which can lead to slightly wider directivity bandwidths. On the other hand, although the fabricated PEX antenna is symmetric, there is a very small discrepancy between the measured directivity bandwidths for the pencil beams generated from sides A and B. This is caused by the different dispersion characteristics of the phase shifters used to excite sides A and B, as well as small unwanted bounces and reflections around the measurement setup which may differ for the two sides.
From this discussion, it is observed that the proposed PEX array is clearly capable of scanning the generated pencil beams along both the azimuth and elevation planes, independently. These measured patterns demonstrate the versatility and flexibility of the proposed design, which can be considered as multiple antennas sharing the same radiating aperture, saving valuable real estate.
Multiple-beam experiment
It is also possible to generate multiple pencil beams by applying superposition to the PEX array, and exciting more than one plane wave simultaneously. For example, side A is excited with a plane wave that generates a broadside pencil beam, whereas side B is excited with a plane wave that generates a tilted pencil beam. Two 1 × 16 feeding networks and a single commercial 1 × 2 power splitter are used in this case. The measured radiation patterns for a descriptive case are depicted in Fig. 7a, e, where two pencil beams are successfully generated with the same beam-pointing directions as predicted earlier. It is also possible to generate two tilted pencil beams simultaneously, as illustrated for the descriptive cases depicted in Fig. 7b–d, f–h. More measured cases of generating two pencil beams are shown in Supplementary Note 3. It is important to emphasize that there are no coupling or degradation effects between the pencil beams generated by sides A and B, as the individual beams exhibit orthogonal polarizations as discussed earlier, and the overall mutual coupling between the two orthogonal sides is significantly low for all the generated beams.
Fig. 7: Multiple-beam measurement.
The normalized measured radiation patterns for various values of (ψ), when both sides A and B of the 7 × 7 PEX array are excited simultaneously: a–d The measured far-field 3D gain patterns for the cases: a ψ = 0° and 285°, b ψ = −5° and 275°, c ψ = −10° and 280°, and d ψ = −15° and 285°, plotted in linear scale (V/m) and normalized to an arbitrary value of 100. e–h The measured far-field U-V gain patterns for the cases: e ψ = 0° and 285°, f ψ = −5° and 275°, g ψ = −10° and 280°, and h ψ = −15° and 285°, plotted in dB scale. All the beams can be independently scanned, and a multitude of radiation directions are achieved. More measured results are presented in Supplementary Note 3.
Obviously, superposition of even more plane waves is possible, and three or more pencil beams can be generated in principle. However, if it is required to generate multiple plane waves from the same side, the feeding network will be required to modulate the amplitudes of the individual port excitations, and not solely the phase, unlike the examples presented in this paper. All the results presented in the main body of the paper pertain to the center frequency of operation (13.1 GHz). When the frequency of operation is changed, the generated pencil beam(s) are expected to slightly scan as a result of the dispersion of the phase-shifters and the unit cell. This is discussed in more detail in Supplementary Note 2. The supplementary information also includes more details about the feeding networks in Supplementary Note 1.
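As a rough illustration of why same-side multi-beam operation also needs amplitude control, the sketch below (an assumption-laden illustration reusing the simple k·p·sin(ψ) phase progression from the earlier sketch, not the authors' feeding-network design) superposes the port weights of two plane waves launched from the same side; the resulting amplitudes clearly vary from port to port:

```python
import cmath, math

def superposed_weights(psi_list_deg, num_ports=31, p=7.15e-3, freq_hz=13.1e9, eps_e=2.56):
    """Complex port weights along one side for a superposition of in-cavity plane waves
    at the angles in psi_list_deg (sketch only; unit amplitude per wave assumed)."""
    lam_o = 299_792_458.0 / freq_hz
    k = 2 * math.pi * math.sqrt(eps_e) / lam_o
    return [sum(cmath.exp(-1j * k * math.sin(math.radians(psi)) * n * p)
                for psi in psi_list_deg)
            for n in range(num_ports)]

w = superposed_weights([0.0, 20.0])        # two plane waves launched from the same side
print([round(abs(v), 2) for v in w[:4]])   # amplitudes vary: ~[2.0, 1.72, 0.95, 0.08]
```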
Notably, the proposed antenna does not suffer from any current cancellation or mutual coupling problems from the closely-spaced peripheral sources. It is also clear that the proposed PEX antenna array exhibits numerous advantages, such as the effective integration of multiple antennas into a single shared radiating aperture, which is able to generate single or multiple pencil beams at broadside and tilted radiation directions. For the multi-beam case, it is possible to operate all the individual beams generated from the different sides of the PEX array under transmit or receive mode, or even under simultaneous transmit and receive, which leverages the low mutual coupling between the different sides of the PEX array as shown in Supplementary Note 2. This has far-reaching potential applications, as the proposed design can be deployed in multiple-beam (MIMO) and duplex applications with simultaneous transmit and receive operation from the same antenna (not necessarily from the same direction). This ultimately comes at the price of a limited scan range, as the generated pencil beams can only be scanned along predefined contours.
The proposed PEX array can continuously steer the generated pencil beam along a multitude of azimuthal and elevation directions. The PEX antenna is excited by plane waves that are synthesized directly by individual Huygens' sources with an electronically-controlled progressive phase shift that can be continuously tuned; the sources are situated along the periphery of the radiating aperture according to the Huygens' equivalence principle. One of the advantages of the proposed PEX array is that it achieves a sizable reduction in the active-element count relative to traditional phased arrays, particularly for larger PEX arrays45,46, as the active-element count now scales with the perimeter of the PEX array radiating aperture instead of its area (see the sketch below). There are numerous additional advantages of the proposed design, namely, the PEX array is capable of: (a) steering the plane waves electronically without requiring any relative mechanical rotation between its plates or lenses; (b) continuous spatial scanning as opposed to switched beams; (c) exciting plane waves from all its sides, leading to more scanned planes (when compared with alternative related antennas41,43); and (d) generating a single or multiple independently-steered pencil beams. It is also possible to leverage the peripheral sources, by specifying suitable weights, to electronically tune the side-lobe levels of the generated pencil beams and achieve some form of beamforming. This possibility will be investigated in the future. The proposed design can also be leveraged to generate multiple pencil beams from the PEX array simultaneously, thus mimicking the role of multiple independent antennas sharing the same radiating aperture, which saves valuable real estate.
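To make the perimeter-versus-area scaling concrete, the short sketch below compares the active-element count of a conventional array that uses one element per unit cell with that of a PEX array using 2N + 1 ports per side, a generalization consistent with the 15 ports per side of the 7 × 7 prototype and the 31 ports per side of the simulated 15 × 15 array. The numbers are indicative only; the exact port density of a given design is a separate engineering choice.

```python
# Indicative active-element count: a conventional array with one element per
# unit cell (N^2, scales with the aperture area) versus a PEX array with
# 2N + 1 ports per side (4(2N + 1) sources, scales with the perimeter).
for n in (7, 15, 32, 64):
    area_count = n ** 2
    pex_count = 4 * (2 * n + 1)
    print(f"N = {n:>2}: conventional {area_count:5d}  vs  PEX {pex_count:4d}  "
          f"({area_count / pex_count:4.1f}x fewer active elements)")
```

The relative saving grows roughly linearly with the aperture size, which is why the reduction is most pronounced for larger PEX arrays.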
It is possible to extend the scan range further in the future by incorporating additional scanning techniques, such as placing the entire PEX array on a simple rotating pedestal that can be mechanically rotated. This would enable full coverage of the upper hemisphere, where the elevation of the pencil beam is controlled by the PEX antenna, and the corresponding azimuth is set by the PEX antenna and fine-tuned by the mechanical motor. Additionally, alternative techniques can be envisioned to extend the scan range, such as leveraging technologies that change the effective permittivity of the medium filling the PEX cavity. Notably, the permittivity of the medium filling the PEX cavity only needs to be changed in unison, i.e., the individual unit cells do not need to be individually tuned. One possibility is to replace the dielectric substrate with voltage-controlled materials such as ferroelectric materials62, and to control the permittivity of the material by DC biasing the ground and top plates of the PEX cavity. Another possibility is to fill the PEX cavity with an artificial dielectric such as a bed of nails going through holes in the ground plane, and to mechanically control the effective permittivity by changing the height of the nails going through the holes. These techniques and others can be implemented to extend the scan range of the PEX antenna and achieve full 2D spatial coverage.
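As a rough illustration of why tuning the cavity permittivity shifts the beam, one can use the standard first-order scanning relation for a periodic aperture fed by a slow wave, in which the n = −1 space harmonic radiates near sin θ ≈ β/k0 − λ0/d, with β = k0√εr the in-cavity phase constant and d the unit-cell period. The snippet below is only a back-of-the-envelope sketch under these assumptions (taking d = 14.3 mm from the stated design parameters); it does not reproduce the full 2D dispersion of the actual perforated cavity.

```python
import numpy as np

c0, f0, d = 3e8, 13.1e9, 14.3e-3      # speed of light, frequency, assumed period (m)
lam0 = c0 / f0
k0 = 2 * np.pi / lam0

# First-order n = -1 space-harmonic estimate of the beam elevation versus the
# effective cavity permittivity (an approximation; the real structure follows
# the simulated 2D dispersion of the closed-bandgap unit cell).
for eps_r in (1.8, 2.0, 2.2, 2.4, 2.6):
    beta = k0 * np.sqrt(eps_r)
    s = beta / k0 - lam0 / d          # approximate sin(theta) of the n = -1 harmonic
    if abs(s) <= 1:
        print(f"eps_r = {eps_r:3.1f}  ->  theta approx {np.degrees(np.arcsin(s)):6.1f} deg")
```

Even this crude estimate shows that a modest change in the effective permittivity sweeps the beam by several degrees around broadside, which is the mechanism behind the scan-range-extension ideas above.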
For some applications, it is desirable to scale the proposed PEX antenna to higher millimeter-wave frequencies. The main challenges in doing so are the additional material losses and the realization of higher-frequency peripheral feeds. The proposed PEX array can be considered a substrate-integrated parallel-plate waveguide; thus, it is expected to exhibit material losses similar to those of typical substrate-integrated waveguide antennas at millimeter-wave frequencies. In addition, the losses at millimeter-wave frequencies are expected to be comparable to, or even less than, those of fully-fledged phased arrays, which inevitably require more complicated feeding networks. It is also possible to envision fully-metallic versions of the proposed PEX array using an artificial dielectric such as a bed of nails to fill the PEX cavity44, which can reduce the material losses significantly. On the other hand, the proposed PEX antenna, in its current form, uses coaxial connectors to directly feed the radiating aperture at its periphery. At shorter wavelengths, these coaxial connectors might be too large to be placed closely together for proper PEX operation; nevertheless, this is not a fundamental limitation. Many substitutes for directly feeding the PEX array can be envisioned, such as: (a) engineering transitions from spaced-out coaxial connectors to narrowly-spaced microstrip/coplanar lines that feed the PEX array; or (b) integrating sources (transceivers) in IC form and placing them directly at the locations of the peripheral feed points. Thus, from this discussion, it is clear that the proposed PEX antenna design can potentially be scaled to higher millimeter-wave frequencies with relative ease. On the other hand, expanding the frequency of operation to the sub-THz range (100 GHz and beyond) could present challenges when using the current PEX array implementation with phase shifters. Thus, unless the phase shifters are replaced with suitable higher-frequency alternatives, successful PEX array operation can only be envisioned up to the millimeter-wave regime.
To conclude, in this paper we propose a practical realization of the Peripherally-Excited (PEX) antenna concept. To realize the design, we specially engineer a radiating unit cell with a closed bandgap in the dispersion relation, and use it as the top perforation of the PEX cavity. This enables the successful generation of pencil beams at broadside and tilted radiation directions from the PEX cavity. We show that the proposed structure is able to generate highly directive scannable pencil beams with high aperture efficiency. The proposed structure can also be operated in multiple modes, where single and/or multiple independent pencil beams are generated by simply controlling the excitation of the peripheral sources. Additionally, we show that the PEX antenna array is able to scan the generated beams along predefined contours with a versatile range of tilted angles. We experimentally validated the proposed structure using near-field antenna measurement techniques, and the experimental results are in good agreement with the full-wave simulations. This demonstrates the versatility of the proposed PEX antenna array and confirms that the proposed design is a practical implementation of the PEX antenna concept. The proposed design is quite simple to fabricate using standard printed-circuit-board fabrication techniques, and can potentially be scaled to higher millimeter-wave frequencies. This opens up numerous potential applications, and the design could be deployed in multiple-beam (MIMO) and duplex applications with simultaneous transmit and receive operation from the same antenna (not necessarily from the same direction). The proposed PEX array implementation is just the beginning of an alternative phased-array technology with many future possibilities. It has many advantages, such as the ability to generate single and/or multiple independently scanned pencil beams using only peripheral sources. Importantly, it achieves a sizable reduction in the active-element count, especially for larger PEX arrays, compared with traditional phased arrays. Possible future implementations can potentially be realized with an extended scan range and operated at higher millimeter-wave frequencies, which is highly desirable. Hence, the PEX array can potentially be deployed in some of the emerging systems that require the generation of scannable pencil beams, such as automotive radars, among others.
The proposed unit cell is designed on a commercial Rogers RT/duroid 5880 1 oz substrate with a thickness of 1.575 mm. The unit cell is full-wave simulated at frequencies around the center frequency (13.1 GHz) using the "eigenmode" solver in Ansys HFSS63. The goal of the full-wave simulation is to optimize the dimensions of the slots in order to close the bandgap at the Γ-point and enable successful radiation at and around broadside. To emulate an infinitely periodic array, master-slave periodic boundary conditions are applied along the sides of the unit cell, and a perfectly matched layer (PML) is placed at the top to absorb the generated radiation. In the design process, the dimensions of the cross-shaped slots are first determined to achieve strong radiation from the corresponding leaky-wave mode, by realizing a large leakage constant and a low Q-factor. After that, the dimensions of the four square-shaped slots S1 and S2 are optimized to achieve accidental degeneracy of the four eigenmodes at the Γ-point (see Fig. 2a). In particular, the dimensions of the notches placed in the corners of the square-shaped slots are tuned, as they provide an additional degree of freedom and allow independent control of the individual eigenmodes. After performing a parametric sweep analysis, the optimum slot dimensions are determined, and the optimized square-shaped slots achieve the aforementioned accidental degeneracy of the eigenmodes and successfully close the bandgap at the Γ-point, as shown in the 2D dispersion relation (see Fig. 2b, c).
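The slot-dimension optimization described above amounts to a parametric sweep that drives the four Γ-point eigenfrequencies together. The sketch below outlines such a loop; the function solve_gamma_point_eigenmodes is a hypothetical placeholder standing in for an external full-wave eigenmode solve (performed with Ansys HFSS in the paper), not an actual solver API, and the candidate notch dimensions are illustrative values only.

```python
import itertools

def solve_gamma_point_eigenmodes(notch_s1_mm, notch_s2_mm):
    """Hypothetical placeholder for an external eigenmode solve of the unit
    cell at the Gamma point. It should return the four relevant
    eigenfrequencies in Hz for the given corner-notch dimensions."""
    raise NotImplementedError("connect this to the eigenmode solver")

def gamma_point_spread(freqs):
    """Figure of merit: spread of the Gamma-point eigenfrequencies. A spread
    of zero means accidental degeneracy, i.e. the bandgap is closed."""
    return max(freqs) - min(freqs)

# Coarse sweep over the corner-notch dimensions of slots S1 and S2
# (the candidate values in mm are illustrative, not the paper's final ones).
best = None
for s1, s2 in itertools.product((0.2, 0.3, 0.4, 0.5), repeat=2):
    try:
        spread = gamma_point_spread(solve_gamma_point_eigenmodes(s1, s2))
    except NotImplementedError:
        break                      # placeholder not wired to a solver here
    if best is None or spread < best[0]:
        best = (spread, s1, s2)

if best is not None:
    print(f"smallest spread {best[0] / 1e6:.1f} MHz at notches {best[1:]} mm")
```

In practice the sweep would be refined around the best candidate until the residual eigenfrequency spread at the Γ-point is negligible compared with the leaky-mode bandwidth.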
The full-wave simulations of the 15 × 15 PEX array are performed using the "Driven Modal" solver in Ansys HFSS63. To realize the metallic side walls of the PEX cavity, an array of 0.8 mm metallic vias is placed with a center-to-center separation of 2 mm. The PEX array is perforated at the top with a periodic array of the closed-bandgap unit cell (see Fig. 3a), and is capable of radiating at broadside and other tilted radiation directions. The peripheral Huygens' sources are implemented using PEC-backed coaxial feeding ports, which effectively behave as magnetic surface currents (see Fig. 1d). The distance between the coaxial feeds and the metallic side walls is optimized to minimize any reflections from the peripheral ports and to limit the mutual coupling between closely-spaced adjacent ports, as discussed in the paper (see Supplementary Note 2 for more information). The PEX array exhibits 31 coaxial ports along each of its four sides, which are simulated using wave ports. Four dummy ports are also added at the four corners of the PEX array to eliminate any problems due to edge effects; these dummy ports are simply terminated with matched 50 Ω loads. The design parameters of the PEX array are p = 7.15 mm, d = 14.3 mm, ϵr = 2.2, dp = 5.5 mm, tp = 1.4 mm and h = 1.575 mm. The effects of metallic and dielectric losses are included in these full-wave simulations, assuming a Rogers RT/duroid 5880 1 oz substrate. For the different simulation scenarios, some or all of the sides of the PEX array are excited with an appropriately phase-shifted excitation. This leads to the excitation of plane waves along various directions inside the PEX cavity, which in turn generates pencil beams at broadside and tilted radiation directions. Throughout the paper, all unexcited sides are terminated with 50 Ω matched loads in the full-wave simulations. Overall, the proposed PEX array design is quite simple to simulate, is compatible with standard PCB fabrication techniques, and can be easily scaled to higher millimeter-wave frequencies.
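The phase-shifted excitations used in these simulations follow from the Huygens' equivalence principle: each peripheral port is driven with the value of the desired in-cavity plane wave sampled at that port location. A minimal sketch of building such an excitation vector over all four sides is given below; the port coordinates, the angle convention for the in-cavity propagation direction, and the uniform amplitude are assumptions for illustration (the 7.15 mm port pitch is inferred from 31 ports along a 15 × 14.3 mm side) and are not taken verbatim from the simulation setup.

```python
import numpy as np

c0, f0, eps_r = 3e8, 13.1e9, 2.2
k = 2 * np.pi * f0 / c0 * np.sqrt(eps_r)   # in-cavity wavenumber (rad/m)

# Assumed geometry: 31 uniformly spaced ports per side of a square cavity.
n_ports, pitch = 31, 7.15e-3
L = (n_ports - 1) * pitch
t = np.arange(n_ports) * pitch             # positions along a side (m)

# Port (x, y) coordinates on the four sides A, B, C, D of the periphery.
sides = {
    "A": np.column_stack([t, np.zeros(n_ports)]),     # y = 0 edge
    "B": np.column_stack([np.full(n_ports, L), t]),   # x = L edge
    "C": np.column_stack([t, np.full(n_ports, L)]),   # y = L edge
    "D": np.column_stack([np.zeros(n_ports), t]),     # x = 0 edge
}

def peripheral_excitation(phi_deg):
    """Sample exp(-j k.r) of an in-cavity plane wave travelling at angle phi
    in the cavity plane at every peripheral port (Huygens'-style sampling)."""
    phi = np.deg2rad(phi_deg)
    k_vec = k * np.array([np.cos(phi), np.sin(phi)])
    return {name: np.exp(-1j * xy @ k_vec) for name, xy in sides.items()}

exc = peripheral_excitation(30.0)
print({name: np.degrees(np.angle(w[:3])).round(1) for name, w in exc.items()})
```

Each side then carries a simple progressive phase whose slope depends on the projection of the in-cavity wavevector onto that side, which is exactly what the per-port phase shifters implement.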
Phase-shifting feeding network
There are various ways to implement the required phase-shifting feeding network. For instance, it is possible to integrate a feeding network with phase shifters on the back of the PEX array board, on additional layers bonded to its ground plane. In this case, metallic vias can be used to connect the phase shifters to the peripheral ports of the PEX array. This approach has the benefit of producing a low-profile single board that includes both the feeding network and the PEX array, with minimal external wiring required. However, it is not flexible enough to allow experimenting with different feeding networks or operating the PEX array under different operational modes, as the feeding network would be permanently bonded to the back of the PEX array. In this paper, a more flexible and modular approach is preferred, where the feeding network(s) are designed and fabricated on separate board(s) from the PEX array. SMA-to-SMP RF coaxial cables are used to connect the feeding network board(s) to the different sides of the PEX array, depending on the mode of operation. This approach is more convenient for the proof-of-concept experimental demonstration in this paper. To properly excite the PEX antenna array, the feeding network includes a phase shifter for each peripheral source (port); hence, a 1 × 16 phase-shifting feeding network is designed for this purpose (see Supplementary Note 1 for more details). Two sample boards are fabricated, each of which is capable of entirely feeding one side of the PEX array prototype, which contains only 15 peripheral ports per side (the extra port on the feeding network is left unused). Each feeding network is used to excite one side of the PEX array with an electronically-controlled progressive phase-shifted excitation, which excites a plane wave underneath the radiating slots along various directions. The design and performance of the developed 1 × 16 phase-shifting feeding network are described in more detail in Supplementary Note 1. For the multiple-beam experiment, an additional commercial 1 × 2 power splitter is used in conjunction with two 1 × 16 feeding network boards to allow the excitation of sides A and B of the PEX array simultaneously.
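In practice, each ideal port phase produced by the feeding network must be mapped onto the nearest available state of its phase shifter, which has finite resolution. The sketch below assumes a generic 6-bit digitally-controlled phase shifter purely for illustration; the actual resolution, insertion-phase offsets and calibration of the fabricated boards are described in Supplementary Note 1 and are not reproduced here.

```python
import numpy as np

def quantize_phases(ideal_deg, n_bits=6):
    """Map ideal port phases onto the nearest states of an assumed n-bit
    digital phase shifter (step = 360 / 2**n_bits degrees)."""
    ideal = np.asarray(ideal_deg, dtype=float)
    step = 360.0 / (2 ** n_bits)
    states = np.round(ideal / step) * step
    return np.mod(states, 360.0), ideal - states

# Example: an arbitrary progressive phase of 47.3 deg/port across 15 ports.
ideal = np.mod(np.arange(15) * 47.3, 360.0)
quantized, error = quantize_phases(ideal)
print("assumed 6-bit step: %.3f deg" % (360.0 / 64))
print("max quantization error: %.2f deg" % np.max(np.abs(error)))
```

The residual quantization error is small compared with the phase progression itself, which is why continuously tunable beam directions can still be approximated closely with digitally-controlled phase shifters.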
The 7 × 7 PEX antenna array prototype is designed on a double-sided Rogers RT/duroid 5880 1 oz substrate with a thickness of 1.575 mm, and was fabricated by Candor Industries Inc and assembled by V.U.nics Inc. The fabricated PEX array prototype exhibits 15 peripheral feeding ports along each of its four sides, and four dummy corner ports that are added to eliminate any problems with edge effects. Depending on the mode of operation (single or multiple-beam generation), one or more of these sides are excited with individually phase-shifted excitations, whereas the dummy corner ports are always terminated with matched loads. SMP coaxial connectors are used to excite the individual peripheral ports owing to their small physical size, which allows them to be placed sufficiently close together, unlike other options such as standard SMA coaxial connectors, which are slightly larger.
The fabricated 7 × 7 PEX array prototype is experimentally characterized using a planar near-field scanning NearField Systems Inc. (NSI) antenna measurement system, which uses an Agilent Technologies N5244A vector network analyzer (VNA) (see Fig. 5b). The NSI system converts the measured near-field data to far-field radiation patterns. In the experimental results presented in the paper, some sides of the PEX array are excited with an appropriately phase-shifted excitation, whereas the remaining unexcited sides are terminated with 50 Ω matched loads to absorb any remaining unradiated power. The PEX array is experimentally characterized for both the single and multiple-beam generation modes. It is worth noting that the NSI measurement system uses an Az-over-El (Az/El) coordinate system, which is different from the Theta-Phi (θ-ϕ) system used in the full-wave simulation results61 (see Supplementary Note 4 for more details). Thus, it is very challenging to plot the E-plane and H-plane radiation patterns from the measured results in the same way as in the full-wave simulations. Instead, the measured Az and El radiation patterns are presented in the paper (see Fig. 6a–d). An A-Info LB-10180 1–18 GHz wide-band horn antenna is used to calibrate the NSI measurement system by implementing a standard gain-comparison procedure, which allows the experimental characterization of the far-field realized gain patterns generated by the fabricated PEX antenna prototype. Additionally, the losses and reflections due to the feeding network and RF cables are calibrated out of the measured radiation patterns by characterizing their performance separately and removing their effect from the measured patterns using simple post-processing techniques.
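For completeness, the relation between the Az-over-El angles reported by the measurement system and the θ-ϕ angles used in the simulations can be expressed through direction cosines. The conversion below follows one common Az-over-El convention (azimuth rotation followed by elevation about the rotated axis); the exact convention used by the NSI software is documented in ref. 61, so this snippet should be treated as an illustrative sketch rather than the calibrated transformation used in the paper.

```python
import numpy as np

def az_over_el_to_theta_phi(az_deg, el_deg):
    """Convert Az-over-El angles to spherical (theta, phi), assuming boresight
    along +z, azimuth about the y axis and elevation about the rotated x axis
    (one common convention; verify against the range's actual definition)."""
    az, el = np.deg2rad(az_deg), np.deg2rad(el_deg)
    # Direction cosines of the far-field point in the antenna frame.
    u = np.sin(az) * np.cos(el)      # x component
    v = np.sin(el)                   # y component
    w = np.cos(az) * np.cos(el)      # z component (boresight)
    theta = np.degrees(np.arccos(w))
    phi = np.degrees(np.arctan2(v, u))
    return theta, phi

print(az_over_el_to_theta_phi(0.0, 0.0))     # boresight -> theta = 0
print(az_over_el_to_theta_phi(10.0, -5.0))   # an off-boresight sample point
```

Such a mapping is what would be needed to re-grid the measured Az/El patterns onto θ-ϕ cuts, which is why the Az and El cuts are reported directly instead.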
All key data generated and analyzed are included in this paper and its supplementary information. Additional data sets that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.
The codes and simulation files that support the plots and data analysis within this paper are available from the corresponding author upon reasonable request.
Fong, B. H., Colburn, J. S., Ottusch, J. J., Visher, J. L. & Sievenpiper, D. F. Scalar and tensor holographic artificial impedance surfaces. IEEE Trans. Antennas Propag. 58, 3212–3221 (2010).
Smierzchalski, M., Casaletti, M., Ettorre, M., Sauleau, R. & Capet, N. Scalar metasurface antennas with tilted beam. In 2015 9th European Conference on Antennas and Propagation (EuCAP), 1–3 (IEEE, 2015).
Minatti, G. et al. Modulated metasurface antennas for space: Synthesis, analysis and realizations. IEEE Trans. Antennas Propag. 63, 1288–1300 (2015).
Minatti, G., Caminita, F., Casaletti, M. & Maci, S. Spiral leaky-wave antennas based on modulated surface impedance. IEEE Trans. Antennas Propag. 59, 4436–4444 (2011).
Nannetti, M., Caminita, F. & Maci, S. Leaky-wave based interpretation of the radiation from holographic surfaces. In 2007 IEEE Antennas and Propagation Society International Symposium 5813–5816 (IEEE, 2007).
Sievenpiper, D., Schaffner, J., Lee, J. J. & Livingston, S. A steerable leaky-wave antenna using a tunable impedance ground plane. IEEE Antennas Wirel. Propag. Lett. 1, 179–182 (2002).
Sievenpiper, D. F. Forward and backward leaky wave radiation with large effective aperture from an electronically tunable textured surface. IEEE Trans. Antennas Propag. 53, 236–247 (2005).
Podilchak, S. K., Freundorfer, A. P. & Antar, Y. M. M. Planar leaky-wave antenna designs offering conical-sector beam scanning and broadside radiation using surface-wave launchers. IEEE Antennas Wirel. Propag. Lett. 7, 155–158 (2008).
Podilchak, S. K., Freundorfer, A. P. & Antar, Y. M. M. Broadside radiation from a planar 2-D leaky-wave antenna by practical surface-wave launching. IEEE Antennas Wirel. Propag. Lett. 7, 517–520 (2008).
Zhao, T., Jackson, D. R., Williams, J. T., Yang, H. Y. D. & Oliner, A. A. 2-D periodic leaky-wave antennas-part I: metal patch design. IEEE Trans. Antennas Propag. 53, 3505–3514 (2005).
Zhao, T., Jackson, D. R. & Williams, J. T. 2-D periodic leaky-wave antennas-part II: slot design. IEEE Trans. Antennas Propag. 53, 3515–3524 (2005).
Lai, A., Leong, K. M. K. H. & Itoh, T. Leaky-wave steering in a two-dimensional metamaterial structure using wave interaction excitation. In 2006 IEEE MTT-S International Microwave Symposium Digest, 1643–1646 (IEEE, 2006).
Allen, C. A., Leong, K. M. K. H., Caloz, C. & Itoh, T. A two-dimensional edge excited metamaterial-based leaky wave antenna. In 2005 IEEE Antennas and Propagation Society International Symposium, vol. 2B, 320–323 (IEEE, 2005).
Allen, C. A., Leong, K. M. K. H. & Itoh, T. Design of a balanced 2D composite right-/left-handed transmission line type continuous scanning leaky-wave antenna. IET Microw. Antennas Propag. 1, 746–750 (2007).
Kraus, J. A backward angle-fire array antenna. IEEE Trans. Antennas Propag. 12, 48–50 (1964).
Kraus, J. D. Antennas (McGraw-Hill, 1950).
Conti, R., Toth, J., Dowling, T. & Weiss, J. The wire grid microstrip antenna. IEEE Trans. Antennas Propag. 29, 157–166 (1981).
Nakano, H., Oshima, I., Mimaki, H., Hirose, K. & Yamauchi, J. Center-fed grid array antennas. In IEEE Antennas and Propagation Society International Symposium 1995 Digest, vol. 4, 2010–2013 (IEEE, 1995).
Nakano, H. & Kawano, T. Grid array antennas. In IEEE Antennas and Propagation Society International Symposium Digest, vol. 1 236–239 (IEEE, 1997).
Chen, X., Wang, G. & Huang, K. A novel wideband and compact microstrip grid array antenna. IEEE Trans. Antennas Propag. 58, 596–599 (2010).
Alsath, M. G. N., Lawrance, L. & Kanagasabai, M. Bandwidth-enhanced grid array antenna for uwb automotive radar sensors. IEEE Trans. Antennas Propag. 63, 5215–5219 (2015).
Chen, M., Epstein, A. & Eleftheriades, G. V. Design and experimental verification of a passive Huygens' metasurface lens for gain enhancement of frequency-scanning slotted-waveguide antennas. IEEE Trans. Antennas Propag. 67, 4678–4692 (2019).
Dorrah, A. H. & Eleftheriades, G. V. Pencil-beam single-point-fed Dirac leaky-wave antenna on a transmission-line grid. IEEE Antennas Wirel. Propag. Lett. 16, 545–548 (2017).
Dorrah, A. H. & Eleftheriades, G. V. Two-dimensional center-fed transmission-line-grid antenna for highly efficient broadside radiation. Phys. Rev. Appl. 10, 024024 (2018).
Mailloux, R. J. Phased Array Antenna Handbook 2nd ed. (Artech House, 2005).
Rocca, P., Oliveri, G., Mailloux, R. J. & Massa, A. Unconventional phased array architectures and design methodologies: a review. Proc. IEEE 104, 544–560 (2016).
Lo, Y. T. & Lee, S. W. Antenna Handbook: Theory, Applications, and Design (Van Nostrand Reinhold, 1988).
Lo, Y. T. & Lee, S. W. A study of space-tapered arrays. IEEE Trans. Antennas Propag. 14, 22–30 (1966).
Haupt, R. L. Thinned arrays using genetic algorithms. IEEE Trans. Antennas Propag. 42, 993–999 (1994).
Redlich, R. Iterative least-squares synthesis of nonuniformly spaced linear arrays. IEEE Trans. Antennas Propag. 21, 106–108 (1973).
Trucco, A. & Murino, V. Stochastic optimization of linear sparse arrays. IEEE J. Ocean. Eng. 24, 291–299 (1999).
Nemit, J. T. Network approach for reducing the number of phase shifters in a limited scan phased array. US Patent 3803625, (1974).
Fante, R. L. Systems study of overlapped subarrayed scanning antennas. IEEE Trans. Antennas Propag. 28, 668–679 (1980).
Skobelev, S. P. Methods of constructing optimum phased-array antennas for limited field of view. IEEE Antennas Propag. Mag. 40, 39–50 (1998).
Mailloux, R. J. An overlapped subarray for limited scan application. IEEE Trans. Antennas Propag. 22, 487–489 (1974).
Mailloux, R. J. A low-sidelobe partially overlapped constrained feed network for time-delayed subarrays. IEEE Trans. Antennas Propag. 49, 280–291 (2001).
Abbaspour-Tamijani, A. & Sarabandi, K. An affordable millimeter-wave beam-steerable antenna using interleaved planar subarrays. IEEE Trans. Antennas Propag. 51, 2193–2202 (2003).
Milroy, W. W. The continuous transverse stub (CTS) array: basic theory, experiment and application. Proc. Antenna Appl. Symp. 2, 253–283 (1991).
Tekkouk, K., Hirokawa, J., Sauleau, R. & Ando, M. Wideband and large coverage continuous beam steering antenna in the 60-GHz band. IEEE Trans. Antennas Propag. 65, 4418–4426 (2017).
You, Q. et al. Wideband full-corporate-feed waveguide continuous transverse stub antenna array. IEEE Access 6, 76673–76681 (2018).
Cheng, Y. J., Hong, W., Wu, K. & Fan, Y. Millimeter-wave substrate integrated waveguide long slot leaky-wave antennas and two-dimensional multibeam applications. IEEE Trans. Antennas Propag. 59, 40–47 (2011).
Li, Y. B. et al. Dual-physics manipulation of electromagnetic waves by system-level design of metasurfaces to reach extreme control of radiation beams. Adv. Mater. Technol. 2, 1600196 (2017).
Ettorre, M., Sauleau, R. & Le Coq, L. Multi-beam multi-layer leaky-wave SIW pillbox antenna for millimeter-wave applications. IEEE Trans. Antennas Propag. 59, 1093–1100 (2011).
Ruiz-García, J., Martini, E., Giovampaola, C. D., González-Ovejero, D. & Maci, S. Reflecting Luneburg lenses. IEEE Trans. Antennas Propag. 69, 3924–3935 (2021).
Dorrah, A. H. & Eleftheriades, G. V. Peripherally excited phased array architecture for beam steering with reduced number of active elements. IEEE Trans. Antennas Propag. 68, 1249–1260 (2020).
Oyesina, K. A. & Wong, A. M. H. Metasurface-enabled cavity antenna: beam steering with dramatically reduced fed elements. IEEE Antennas Wirel. Propag. Lett. 19, 616–620 (2020).
Dorrah, A. H. & Eleftheriades, G. V. Peripherally excited phased arrays: beam steering with reduced number of antenna elements. In 2019 IEEE Canadian Conference of Electrical and Computer Engineering (CCECE), 1–4 (IEEE, 2019).
Dorrah, A. H. & Eleftheriades, G. V. Peripherally excited phased arrays with practical active Huygens' sources and slot elements. In 2020 14th European Conference on Antennas and Propagation (EuCAP), 1–5 (IEEE, 2020).
Wong, A. M. H. & Eleftheriades, G. V. Active Huygens' metasurfaces for RF waveform synthesis in a cavity. In 2016 18th Mediterranean Electrotechnical Conference (MELECON) 1–5 (IEEE, 2016).
Wong, A. M. H. & Eleftheriades, G. V. Experimental demonstration of the Huygens' box: arbitrary waveform generation in a metallic cavity. In 2018 IEEE International Symposium on Antennas and Propagation USNC/URSI National Radio Science Meeting 1893–1894 (IEEE, 2018).
Oyesina, K. A., Aly, O. Z., Zhou, G. G. L. & Wong, A. M. H. Active Huygens' box: arbitrary synthesis of EM waves in metallic cavities. In 2019 International Applied Computational Electromagnetics Society Symposium (ACES), 1–2 (IEEE, 2019).
Wong, A. M. H. & Eleftheriades, G. V. Active Huygens' box: Arbitrary electromagnetic wave generation with an electronically controlled metasurface. IEEE Trans. Antennas Propag. 69, 1455–1468 (2021).
Wong, A. M. H. & Eleftheriades, G. V. A simple active Huygens source for studying waveform synthesis with Huygens metasurfaces and antenna arrays. In 2015 IEEE International Symposium on Antennas and Propagation USNC/URSI National Radio Science Meeting 1092–1093 (IEEE, 2015).
Memarian, M. & Eleftheriades, G. V. Dirac leaky-wave antennas for continuous beam scanning from photonic crystals. Nat. Commun. 6, 1–9 (2015).
Dorrah, A. H., Memarian, M. & Eleftheriades, G. V. Modal analysis and closure of the bandgap in 2D transmission-line grids. In 2016 IEEE MTT-S International Microwave Symposium (IMS), 1–4 (IEEE, 2016).
Balanis, C. A. Antenna Theory: Analysis and Design (Wiley, 2005).
Harrington, R. F. Time-Harmonic Electromagnetic Fields (Wiley, 2001).
Huang, X., Lai, Y., Hang, Z. H., Zheng, H. & Chan, C. Dirac cones induced by accidental degeneracy in photonic crystals and zero-refractive-index materials. Nat. Mater. 10, 582–586 (2011).
Bhattacharyya, A. Theory of beam scanning for slot array antenna excited by slow wave. IEEE Antennas Propag. Mag. 57, 96–103 (2015).
Stein, S. On cross coupling in multiple-beam antennas. IRE Trans. Antennas Propag. 10, 548–557 (1962).
Masters, G. F. & Gregson, S. F. Coordinate system plotting for antenna measurements. In AMTA Annual Meeting & Symposium (IEEE, 2007).
Rao, J. B. L., Patel, D. P. & Krichevsky, V. Voltage-controlled ferroelectric lens phased arrays. IEEE Trans. Antennas Propag. 47, 458–468 (1999).
ANSYS. Electromagnetics Suite. 20.1.0.
This work was sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and a Canada Research Chair (Tier 1). The authors would like to acknowledge the help provided by Professor Sean V. Hum and Nicolas Faria for providing access to their near-field antenna testing facility. The authors would also like to acknowledge the extremely useful discussions with Michael Chen, Amirmasoud Ohadi and Minseok Kim.
The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON, M5S 3G4, Canada
Ayman H. Dorrah & George V. Eleftheriades
Ayman H. Dorrah
George V. Eleftheriades
A.H.D. performed formulation, analysis, simulations, physical design, experiments and generation of the results, and G.V.E. supervised all these stages. A.H.D. and G.V.E. contributed to conceiving the idea, and writing and editing the manuscript.
Correspondence to George V. Eleftheriades.
Peer review information Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Dorrah, A.H., Eleftheriades, G.V. Experimental demonstration of peripherally-excited antenna arrays. Nat Commun 12, 6109 (2021). https://doi.org/10.1038/s41467-021-26404-7
|
CommonCrawl
|
Paper Count: 117
Search results for: Azizah Omar
117 Technology Adoption among Small and Medium Enterprises (SME's): A Research Agenda
Authors: Ramayah Thurasamy, Osman Mohamad, Azizah Omar, Malliga Marimuthu
This paper presents the research agenda that has been proposed to develop an integrated model to explain technology adoption of SMEs in Malaysia. SMEs form over 90% of all business entities in Malaysia and they have been contributing to the development of the nation. Technology adoption has been a thorn issue among SMEs as they require big outlay which might not be available to the SMEs. Although resource has been an issue among SMEs they cannot lie low and ignore the technological advancements that are taking place at a rapid pace. With that in mind this paper proposes a model to explain the technology adoption issue among SMEs.
Keywords: Technology adoption, integrated model, Small and Medium Enterprises (SME), Malaysia
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2202
116 The Impact of Website Personality on Consumers' Initial Trust towards Online Retailing Websites
Authors: Jasmine Yeap Ai Leen, T. Ramayah, Azizah Omar
E-tailing websites are often perceived to be static, impersonal and distant. However, with the movement of the World Wide Web to Web 2.0 in recent years, these online websites have been found to display personalities akin to 'humanistic' qualities and project impressions much like its retailing counterpart i.e. salespeople. This paper examines the personality of e-tailing websites and their impact on consumers- initial trust towards the sites. A total of 239 Internet users participated in this field experiment study which utilized 6 online book retailers- websites that the participants had not previously visited before. Analysis revealed that out of four website personalities (sincerity, competence, excitement and sophistication) only sincerity and competence are able to exert an influence in building consumers- trust upon their first visit to the website. The implications of the findings are further elaborated in this paper.
Keywords: E-commerce, e-tailing, initial trust, online trust, partial least squares, website personality.
115 Determination of Skills Gap between School-Based Learning and Laboratory-Based Learning in Omar Al-Mukhtar University
Authors: Aisha Othman, Crinela Pislaru, Ahmed Impes
This paper provides an identification of the existing practical skills gap between school-based learning (SBL) and laboratory based learning (LBL) in the Computing Department within the Faculty of Science at Omar Al-Mukhtar University in Libya. A survey has been conducted and the first author has elicited the responses of two groups of stakeholders, namely the academic teachers and students.
The primary goal is to review the main strands of evidence available and argue that there is a gap between laboratory and school-based learning in terms of opportunities for experiment and application of skills. In addition, the nature of experimental work within the laboratory at Omar Al-Mukhtar University needs to be reconsidered. Another goal of our study was to identify the reasons for students' poor performance in the laboratory and to determine how this poor performance can be eliminated by the modification of teaching methods. Bloom's taxonomy of learning outcomes has been applied in order to classify questions and problems into categories, and the survey was formulated with reference to third year Computing Department students. Furthermore, to discover students' opinions with respect to all the issues, an exercise was conducted. The survey provided questions related to what the students had learnt and how well they had learnt. We were also interested in feedback on how to improve the course and the final question provided an opportunity for such feedback.
Keywords: Bloom's taxonomy, e-learning, Omar Al-Mukhtar University.
114 Rock Slope Stabilization and Protection for Roads and Multi-Storey Structures in Jabal Omar, Saudi Arabia
Authors: Ibrahim Abdel Gadir Malik, Dafalla Siddig Dafalla, Abdelazim Ibrahim
Jabal Omar is located in the western side of Makkah city in Saudi Arabia. The proposed Jabal Omar Development project includes several multi-storey buildings, roads, bridges and below ground structures founded at various depths. In this study, geological mapping and site inspection which covered pre-selected areas were carried out within the easily accessed parts. Geological features; including rock types, structures, degree of weathering, and geotechnical hazards were observed and analyzed with specified software and also were documented in form of photographs. The presence of joints and fractures in the area made the rock blocks small and weak. The site is full of jointing; it was observed that, the northern side consists of 3 to 4 jointing systems with 2 random fractures associated with dykes. The southern part is affected by 2 to 3 jointing systems with minor fault and shear zones. From the field measurements and observations, it was concluded that, the Jabal Omar intruded by andesitic and basaltic dykes of different thickness and orientation. These dykes made the outcrop weak, highly deformed and made the rock masses sensitive to weathering.
Keywords: Rock, slope, stabilization, protection, Makkah.
113 A Genetic-Algorithm-Based Approach for Audio Steganography
Authors: Mazdak Zamani , Azizah A. Manaf , Rabiah B. Ahmad , Akram M. Zeki , Shahidan Abdullah
In this paper, we present a novel, principled approach to resolve the remained problems of substitution technique of audio steganography. Using the proposed genetic algorithm, message bits are embedded into multiple, vague and higher LSB layers, resulting in increased robustness. The robustness specially would be increased against those intentional attacks which try to reveal the hidden message and also some unintentional attacks like noise addition as well.
Keywords: Artificial Intelligence, Audio Steganography, DataHiding, Genetic Algorithm, Substitution Techniques.
112 Cooperative Movements in Malaysia: The Issue of Governance
Authors: Intan Waheedah Othman, Maslinawati Mohamad, Azizah Abdullah
Cooperative organizations in Malaysia are going through a phase of rapid growth. They are seen by the government as another crucial vehicle to drive and boost up the country-s economical development and growth. Hence, the issue of cooperative governance is of great importance. Unlike literatures on corporate governance for public listed companies-, literatures on governance for social enterprises, in particular the cooperative organizations are still at the early stage in Malaysia and very scant in number. This paper will look into current practices as well as issues and challenges related to cooperative governance. The need for a better solution towards forming best practices of cooperative governance framework appears imperative in deterring cases of mismanagement and fraud.
Keywords: Cooperative, Governance, Issues, Malaysia.
111 What Deter Academia to Share Knowledge within Research-Based University Status
Authors: S. Roziana, R. Azizah, A.R. Hamidah
This paper discusses the issues and challenge that academia faced in knowledge sharing at a research university in Malaysia. The partial results of interview are presented from the actual study. The main issues in knowledge sharing practices are university structure and designation and title. The academia awareness in sharing knowledge is also influenced by culture. Our investigation highlight that the concept of reciprocal relationship of sharing knowledge may hinder knowledge sharing awareness among academia. Hence, we concluded that further investigation could be carried out on the social interaction and trust culture among academia in sharing knowledge within research/ranking university environment.
Keywords: Knowledge sharing awareness, knowledge sharing practices, research university.
110 A Novel Digital Watermarking Technique Basedon ISB (Intermediate Significant Bit)
Authors: Akram M. Zeki, Azizah A. Manaf
Least Significant Bit (LSB) technique is the earliest developed technique in watermarking and it is also the most simple, direct and common technique. It essentially involves embedding the watermark by replacing the least significant bit of the image data with a bit of the watermark data. The disadvantage of LSB is that it is not robust against attacks. In this study intermediate significant bit (ISB) has been used in order to improve the robustness of the watermarking system. The aim of this model is to replace the watermarked image pixels by new pixels that can protect the watermark data against attacks and at the same time keeping the new pixels very close to the original pixels in order to protect the quality of watermarked image. The technique is based on testing the value of the watermark pixel according to the range of each bit-plane.
Keywords: Watermarking, LSB, ISB, Robustness.
109 User Acceptance of Educational Games: A Revised Unified Theory of Acceptance and Use of Technology (UTAUT)
Authors: Roslina Ibrahim, Azizah Jaafar
Educational games (EG) seem to have lots of potential due to digital games popularity and preferences of our younger generations of learners. However, most studies focus on game design and its effectiveness while little has been known about the factors that can affect users to accept or to reject EG for their learning. User acceptance research try to understand the determinants of information systems (IS) adoption among users by investigating both systems factors and users factors. Upon the lack of knowledge on acceptance factors for educational games, we seek to understand the issue. This study proposed a model of acceptance factors based on Unified Theory of Acceptance and Use of Technology (UTAUT). We use original model (performance expectancy, effort expectancy and social influence) together with two new determinants (learning opportunities and enjoyment). We will also investigate the effect of gender and gaming experience that moderate the proposed factors.
Keywords: educational games, games acceptance, user acceptance model, UTAUT
108 Sustainability Model for Rural Telecenter Using Business Intelligence Technique
Authors: Razak Rahmat, Azizah Ahmad, Rafidah Razak, Roshidi Din, Azizi Abas
Telecenter is a place where communities can access computers, the Internet, and other digital technologies to enable them to gather information, create, learn, and communicate with others. However, previous studies found that sustainability issues related to economic, political and institutional, social and technology is one of the major problem faced by the telecenter. Based on that problem this research is planning to design a possible solution on rural telecenters sustainability with the support of business intelligence (BI). The empirical study will be conducted through qualitative and quantitative method including interviews and observations with a range of stakeholders including ministry officers, telecenters managers and operators. Result from the data collection will be analyzed using causal modeling approach of SEM SmartPLS for the validity. The expected finding from this research is the Business Intelligent Requirement Model as a guild for sustainability of the rural telecenters.
Keywords: Rural ICT Telecenter (RICTT), Business Intelligence, Sustainability, Requirement Analysis Modal.
107 Graphical Password Security Evaluation by Fuzzy AHP
Authors: Arash Habibi Lashkari, Azizah Abdul Manaf, Maslin Masrom
In today's day and age, one of the important topics in information security is authentication. There are several alternatives to text-based authentication of which includes Graphical Password (GP) or Graphical User Authentication (GUA). These methods stems from the fact that humans recognized and remembers images better than alphanumerical text characters. This paper will focus on the security aspect of GP algorithms and what most researchers have been working on trying to define these security features and attributes. The goal of this study is to develop a fuzzy decision model that allows automatic selection of available GP algorithms by taking into considerations the subjective judgments of the decision makers who are more than 50 postgraduate students of computer science. The approach that is being proposed is based on the Fuzzy Analytic Hierarchy Process (FAHP) which determines the criteria weight as a linear formula.
Keywords: Graphical Password, Authentication Security, Attack Patterns, Brute force attack, Dictionary attack, Guessing Attack, Spyware attack, Shoulder surfing attack, Social engineering Attack, Password Entropy, Password Space.
106 A Prediction-Based Reversible Watermarking for MRI Images
Authors: Nuha Omran Abokhdair, Azizah Bt Abdul Manaf
Reversible watermarking is a special branch of image watermarking, that is able to recover the original image after extracting the watermark from the image. In this paper, an adaptive prediction-based reversible watermarking scheme is presented, in order to increase the payload capacity of MRI medical images. The scheme divides the image into two parts, Region of Interest (ROI) and Region of Non-Interest (RONI). Two bits are embedded in each embeddable pixel of RONI and one bit is embedded in each embeddable pixel of ROI. The experimental results demonstrate that the proposed scheme is able to achieve high embedding capacity. This is mainly caused by two reasons. First, the pixels that were excluded from data embedding due to overflow/underflow are used for data embedding. Second, large location map that need to be added to watermark data as overhead is eliminated and thus lower data embedding capacity is prevented. Moreover, the scheme provides good visual quality to the watermarked image.
Keywords: Medical image watermarking, reversible watermarking, Difference Expansion, Prediction-Error Expansion.
105 Attributes of Ethical Leadership and Ethical Guidelines in Malaysian Public Sector
Authors: M. Norazamina, A. Azizah, Y. Najihah Marha, A. Suraya
Malaysian Public Sector departments or agencies are responsible to provide efficient public services with zero corruption. However, corruption continues to occur due to the absence of ethical leadership and well-execution of ethical guidelines. Thus, the objective of this paper is to explore the attributes of ethical leadership and ethical guidelines. This study employs a qualitative research by analyzing data from interviews with key informers of public sector using conceptual content analysis (NVivo11). The study reveals eight attributes of ethical leadership which are role model, attachment, ethical support, knowledgeable, discipline, leaders' spirituality encouragement, virtue values and shared values. Meanwhile, five attributes (guidelines, communication, check and balance, concern on stakeholders and compliance) of ethical guidelines are identified. These identified attributes should become the ethical identity and ethical direction of Malaysian Public Sector. This could enhance the public trust as well as the international community trust towards the public sector.
Keywords: Check and balance, ethical guidelines, ethical leadership, public sector, spirituality encouragement .
104 Multi-level Metadata Integration System: XML, RDF and RuleML
Authors: Messaouda Fareh, Omar Boussaid, Rachid Challal
Our work is part of the heterogeneous data integration, with the definition of a structural and semantic mediation model. Our aim is to propose architecture for the heterogeneous sources metadata mediation, represented by XML, RDF and RuleML models, providing to the user the metadata transparency. This, by including data structures, of natures fundamentally different, and allowing the decomposition of a query involving multiple sources, to queries specific to these sources, then recompose the result.
Keywords: Mediator, Metadata, Query, RDF, RuleML, XML, Xquery.
103 A General Model for Amino Acid Interaction Networks
Authors: Omar Gaci, Stefan Balev
In this paper we introduce the notion of protein interaction network. This is a graph whose vertices are the protein-s amino acids and whose edges are the interactions between them. Using a graph theory approach, we identify a number of properties of these networks. We compare them to the general small-world network model and we analyze their hierarchical structure.
Keywords: interaction network, protein structure, small-world network.
102 Detecting Community Structure in Amino Acid Interaction Networks
Authors: Omar GACI, Stefan BALEV, Antoine DUTOT
In this paper we introduce the notion of protein interaction network. This is a graph whose vertices are the protein-s amino acids and whose edges are the interactions between them. Using a graph theory approach, we observe that according to their structural roles, the nodes interact differently. By leading a community structure detection, we confirm this specific behavior and describe thecommunities composition to finally propose a new approach to fold a protein interaction network.
Keywords: interaction network, protein structure, community structure detection.
101 Simulation of Tracking Time Delay Algorithm using Mathcad Package
Authors: Mahmud Hesain ALdwaik, Omar Hsiain Eldwaik
This paper deals with tracking and estimating time delay between two signals. The simulation of this algorithm accomplished by using Mathcad package is carried out. The algorithm we will present adaptively controls and tracking the delay, so as to minimize the mean square of this error. Thus the algorithm in this case has task not only of seeking the minimum point of error but also of tracking the change of position, leading to a significant improving of performance. The flowchart of the algorithm is presented as well as several tests of different cases are carried out.
Keywords: Tracking time delay, Algorithm simulation, Mathcad, MSE
100 Quranic Braille System
Authors: Abdallah M. Abualkishik, Khairuddin Omar
This article concerned with the translation of Quranic verses to Braille symbols, by using Visual basic program. The system has the ability to translate the special vibration for the Quran. This study limited for the (Noun + Scoon) vibrations. It builds on an existing translation system that combines a finite state machine with left and right context matching and a set of translation rules. This allows to translate the Arabic language from text to Braille symbols after detect the vibration for the Quran verses.
Keywords: Braille, Quran vibration, Finite State Machine.
99 Ontology-Based Approach for Temporal Semantic Modeling of Social Networks
Authors: Souâad Boudebza, Omar Nouali, Faiçal Azouaou
Social networks have recently gained a growing interest on the web. Traditional formalisms for representing social networks are static and suffer from the lack of semantics. In this paper, we will show how semantic web technologies can be used to model social data. The SemTemp ontology aligns and extends existing ontologies such as FOAF, SIOC, SKOS and OWL-Time to provide a temporal and semantically rich description of social data. We also present a modeling scenario to illustrate how our ontology can be used to model social networks.
Keywords: Ontology, semantic web, social network, temporal modeling.
98 Mass Transfer Modeling in a Packed Bed of Palm Kernels under Supercritical Conditions
Authors: I. Norhuda, A. K. Mohd Omar
Studies on gas solid mass transfer using Supercritical fluid CO2 (SC-CO2) in a packed bed of palm kernels was investigated at operating conditions of temperature 50 °C and 70 °C and pressures ranges from 27.6 MPa, 34.5 MPa, 41.4 MPa and 48.3 MPa. The development of mass transfer models requires knowledge of three properties: the diffusion coefficient of the solute, the viscosity and density of the Supercritical fluids (SCF). Matematical model with respect to the dimensionless number of Sherwood (Sh), Schmidt (Sc) and Reynolds (Re) was developed. It was found that the model developed was found to be in good agreement with the experimental data within the system studied.
Keywords: Mass Transfer, Palm Kernel, Supercritical fluid.
97 Isolation and Classification of Red Blood Cells in Anemic Microscopic Images
Authors: Jameela Ali Alkrimi, Loay E. George, Azizah Suliman, Abdul Rahim Ahmad, Karim Al-Jashamy
Red blood cells (RBCs) are among the most commonly and intensively studied type of blood cells in cell biology. Anemia is a lack of RBCs is characterized by its level compared to the normal hemoglobin level. In this study, a system based image processing methodology was developed to localize and extract RBCs from microscopic images. Also, the machine learning approach is adopted to classify the localized anemic RBCs images. Several textural and geometrical features are calculated for each extracted RBCs. The training set of features was analyzed using principal component analysis (PCA). With the proposed method, RBCs were isolated in 4.3secondsfrom an image containing 18 to 27 cells. The reasons behind using PCA are its low computation complexity and suitability to find the most discriminating features which can lead to accurate classification decisions. Our classifier algorithm yielded accuracy rates of 100%, 99.99%, and 96.50% for K-nearest neighbor (K-NN) algorithm, support vector machine (SVM), and neural network RBFNN, respectively. Classification was evaluated in highly sensitivity, specificity, and kappa statistical parameters. In conclusion, the classification results were obtained within short time period, and the results became better when PCA was used.
Keywords: Red blood cells, pre-processing image algorithms, classification algorithms, principal component analysis PCA, confusion matrix, kappa statistical parameters, ROC.
96 An Improved Genetic Algorithm to Solve the Traveling Salesman Problem
Authors: Omar M. Sallabi, Younis El-Haddad
The Genetic Algorithm (GA) is one of the most important methods used to solve many combinatorial optimization problems. Therefore, many researchers have tried to improve the GA by using different methods and operations in order to find the optimal solution within reasonable time. This paper proposes an improved GA (IGA), where the new crossover operation, population reformulates operation, multi mutation operation, partial local optimal mutation operation, and rearrangement operation are used to solve the Traveling Salesman Problem. The proposed IGA was then compared with three GAs, which use different crossover operations and mutations. The results of this comparison show that the IGA can achieve better results for the solutions in a faster time.
Keywords: AI, Genetic algorithms, TSP.
95 Arabic Character Recognition using Artificial Neural Networks and Statistical Analysis
Authors: Ahmad M. Sarhan, Omar I. Al Helalat
In this paper, an Arabic letter recognition system based on Artificial Neural Networks (ANNs) and statistical analysis for feature extraction is presented. The ANN is trained using the Least Mean Squares (LMS) algorithm. In the proposed system, each typed Arabic letter is represented by a matrix of binary numbers that are used as input to a simple feature extraction system whose output, in addition to the input matrix, are fed to an ANN. Simulation results are provided and show that the proposed system always produces a lower Mean Squared Error (MSE) and higher success rates than the current ANN solutions.
Keywords: ANN, Backpropagation, Gaussian, LMS, MSE, Neuron, standard deviation, Widrow-Hoff rule.
94 Minimizing Examinee Collusion with a Latin- Square Treatment Structure
Authors: M. H. Omar
Cheating on standardized tests has been a major concern as it potentially minimizes measurement precision. One major way to reduce cheating by collusion is to administer multiple forms of a test. Even with this approach, potential collusion is still quite large. A Latin-square treatment structure for distributing multiple forms is proposed to further reduce the colluding potential. An index to measure the extent of colluding potential is also proposed. Finally, with a simple algorithm, the various Latin-squares were explored to find the best structure to keep the colluding potential to a minimum.
Keywords: Colluding pairs, Scale for Colluding Potential, Latin-Square Structure, Minimization of Cheating.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 959
93 Clustering Methods Applied to the Tracking of user Traces Interacting with an e-Learning System
Authors: Larbi Omar, Elberrichi Zakaria
Many research works are carried out on the analysis of traces in a digital learning environment. These studies produce large volumes of usage tracks from the various actions performed by a user. However, to exploit these data, compare and improve performance, several issues are raised. To remedy this, several works deal with this problem seen recently. This research studied a series of questions about format and description of the data to be shared. Our goal is to share thoughts on these issues by presenting our experience in the analysis of trace-based log files, comparing several approaches used in automatic classification applied to e-learning platforms. Finally, the obtained results are discussed.
Keywords: Classification, , e-learning platform, log file, Trace.
92 A Systematic Approach for Design a Low-Cost Mobility Assistive Device for Elderly People
Authors: Omar Salah, Ahmed A. Ramadan, Salvatore Sessa, Ahmed A. Abo-Ismail
Walking and sit to stand are activities carried out by all the people many times during the day, but physical disabilities due to age and diseases create needs of assistive devices to help elderly people during their daily life. This study aims to study the different types and mechanisms of the assistive devices. We will analyze the limitations and the challenges faced by the researchers in this field. We will introduce the Assistive Device developed at the Egypt-Japan University of Science and Technology, named E-JUST Assistive Device (EJAD). EJAD will be a low cost intelligent assistive device to help elders in walking and sit-to-stand activities.
Keywords: Active walker, Assistive robotics, Standing Assistance, Walking Assistance
91 Miniaturized Wideband Single-Feed Shorted-Edge Stacked Patch Antenna for C-Band Applications
Authors: Abdelheq Boukarkar, Omar Guermoua
In this paper, we propose a miniaturized and wideband patch antenna for C-band applications. The antenna miniaturization is obtained by loading shorting vias along one patch edge. At the same time, the wideband performance is achieved by combining two resonances using one feed line. The measured results reveal that the antenna covers the frequency band 4.32 GHz to 6.52 GHz (41%) with a peak gain and a peak efficiency of 5.5 dBi and 87%, respectively. The antenna occupies a relatively small size of only 26 x 22 x 5.6 mm3, making it suitable for compact wireless devices requiring a stable unidirectional gain over a wide frequency range.
Keywords: Miniaturized antennas, patch antennas, stable gain, wideband antennas.
90 New Scheme in Determining nth Order Diagrams for Cross Multiplication Method via Combinatorial Approach
Authors: Sharmila Karim, Haslinda Ibrahim, Zurni Omar
In this paper, a new recursive strategy is proposed for determining $\frac{(n-1)!}{2}$ of $n$th order diagrams. The generalization of $n$th diagram for cross multiplication method were proposed by Pavlovic and Bankier but the specific rule of determining $\frac{(n-1)!}{2}$ of the $n$th order diagrams for square matrix is yet to be discovered. Thus using combinatorial approach, $\frac{(n-1)!}{2}$ of the $n$th order diagrams will be presented as $\frac{(n-1)!}{2}$ starter sets. These starter sets will be generated based on exchanging one element. The advantages of this new strategy are the discarding process was eliminated and the sign of starter set is alternated to each others.
Keywords: starter sets, permutation, exchanging one element, determinant
89 Data Migration Methodology from Relational to NoSQL Databases
Authors: Mohamed Hanine, Abdesadik Bendarag, Omar Boutkhoum
Currently, the field of data migration is very topical. As the number of applications developed rapidly, the ever-increasing volume of data collected has driven the architectural migration from Relational Database Management System (RDBMS) to NoSQL (Not Only SQL) database. This very recent technology is important enough in the field of database management. The main aim of this paper is to present a methodology for data migration from RDBMS to NoSQL database. To illustrate this methodology, we implement a software prototype using MySQL as a RDBMS and MongoDB as a NoSQL database. Although this is a hard engineering work, our results show that the proposed methodology can successfully accomplish the goal of this study.
Keywords: Data Migration, MySQL, RDBMS, NoSQL, MongoDB.
88 Kano's Model for Clinical Laboratory
Authors: Khaled N. El-Hashmi, Omar K.Gnieber
The clinical laboratory has received considerable recognition globally due to the rapid development of advanced technology, economic demands, and its role in a patient's treatment cycle. Although various cross-domain experiments and practices related to clinical laboratory projects are in full swing, customer needs remain ambiguous and debatable. The purpose of this study is to apply Kano's model and the customer satisfaction matrix to categorize service quality attributes and assess how well these attributes satisfy customer needs. The results reveal that ten of the 26 service quality attributes have the greatest impact on increasing customer satisfaction and should be considered first.
Keywords: Clinical laboratory, Customer satisfaction matrix, Kano's Model, Quality Attributes, Voice of Customer.
|
CommonCrawl
|
What is the Formula for Weighted Average Cost of Capital (WACC)?
By Shobhit Seth
Reviewed By Margaret James
As the majority of businesses run on borrowed funds, the cost of capital becomes an important parameter in assessing a firm's potential for net profitability. Analysts and investors use the weighted average cost of capital (WACC) to assess an investor's returns on an investment in a company.
What is WACC?
Companies often run their business using the capital they raise through various sources. They include raising money through listing their shares on the stock exchange (equity), or by issuing interest-paying bonds or taking commercial loans (debt). All such capital comes at a cost, and the cost associated with each type varies for each source.
WACC is the average after-tax cost of a company's various capital sources, including common stock, preferred stock, bonds, and any other long-term debt. In other words, WACC is the average rate a company expects to pay to finance its assets.
Since a company's financing is largely classified into two types—debt and equity—WACC is the average cost of raising that money, which is calculated in proportion to each of the sources.
The Formula for WACC
$$ \text{WACC} = \left( \frac{E}{V} \times Re \right) + \left( \frac{D}{V} \times Rd \times (1 - Tc) \right) $$
where:
E = Market value of the firm's equity
D = Market value of the firm's debt
V = E + D
Re = Cost of equity
Rd = Cost of debt
Tc = Corporate tax rate
How to Calculate WACC
WACC is calculated by multiplying the cost of each capital source (debt and equity) by its relevant weight, and then adding the products together to arrive at the overall rate.
In the above formula, E/V represents the proportion of equity-based financing, while D/V represents the proportion of debt-based financing.
The WACC formula is the sum of two terms:
$$ \left( \frac{E}{V} \times Re \right) $$
$$ \left( \frac{D}{V} \times Rd \times (1 - Tc) \right) $$
The former represents the weighted value of equity-linked capital, while the latter represents the weighted value of debt-linked capital.
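To make the two weighted terms concrete, here is a minimal Python sketch of the calculation; the function and argument names are illustrative and not part of any standard library.

```python
def wacc(equity, debt, cost_of_equity, cost_of_debt, tax_rate):
    """Weighted average cost of capital.

    equity, debt   -- market values of equity (E) and debt (D)
    cost_of_equity -- Re, as a decimal (e.g. 0.066 for 6.6%)
    cost_of_debt   -- Rd, as a decimal
    tax_rate       -- Tc, the corporate tax rate as a decimal
    """
    v = equity + debt                                        # V = E + D
    equity_term = (equity / v) * cost_of_equity              # weighted equity cost
    debt_term = (debt / v) * cost_of_debt * (1 - tax_rate)   # weighted after-tax debt cost
    return equity_term + debt_term
```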
Equity and Debt Components of WACC Formula
It's a common misconception that equity capital has no concrete cost that the company must pay after it has listed its shares on the exchange. In reality, there is a cost of equity.
The shareholders' expected rate of return is considered a cost from the company's perspective. That's because if the company fails to deliver this expected return, shareholders will simply sell off their shares, which will lead to a decrease in share price and the company's overall valuation. The cost of equity is essentially the amount that a company must spend in order to maintain a share price that will keep its investors satisfied and invested.
One can use the capital asset pricing model (CAPM) to determine the cost of equity. CAPM describes the relationship between risk and expected return for assets and is widely used for pricing risky securities such as equities, generating expected returns for assets given their associated risk, and calculating the cost of capital.
The debt-linked component in the WACC formula, [(D/V) * Rd * (1-Tc)], represents the cost of capital for company-issued debt. It accounts for the interest a company pays on issued bonds or on commercial loans taken from banks.
Example of How to Use WACC
Let's calculate the WACC for retail giant Walmart (WMT).
In October 2018, the risk-free rate as represented by the annual return on a 20-year treasury bond was 3.3 percent. Beta value for Walmart stood at 0.51. Meanwhile, the average market return, represented by average annualized total return for the S&P 500 index over the past 90 years, was 9.8 percent.
The total shareholder equity for Walmart for the 2018 fiscal year was $77.87 billion (E), and the long-term debt stood at $36.83 billion (D). The total overall capital for Walmart comes to:
$$ V = E + D = \$114.7 \text{ billion} $$
Using CAPM, the cost of equity is Re = 3.3% + 0.51 × (9.8% - 3.3%) = 6.615%. The equity-linked cost of capital for Walmart is therefore:
$$ (E/V) \times Re = \frac{77.87}{114.7} \times 6.615\% = 0.0449 $$
The debt component is:
$$ (D/V) \times Rd \times (1 - Tc) = \frac{36.83}{114.7} \times 6.5\% \times (1 - 21\%) = 0.0165 $$
Using the above two computed figures, WACC for Walmart can be calculated as:
$$ 0.0449 + 0.0165 = 0.0614 \approx 6.1\% $$
On average, Walmart is paying around 6.1% per annum as the cost of overall capital raised via a combination of debt and equity.
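As a rough cross-check, the figures above can be reproduced in a few lines of Python. The CAPM step shows how the 6.615% cost of equity is derived from the risk-free rate, beta, and market return quoted earlier; all numbers are those used in this example and the variable names are illustrative.

```python
rf, beta, rm = 0.033, 0.51, 0.098      # risk-free rate, Walmart beta, average market return
re = rf + beta * (rm - rf)             # CAPM cost of equity, about 0.06615 (6.615%)

e, d = 77.87, 36.83                    # equity and long-term debt, in $ billions
rd, tc = 0.065, 0.21                   # cost of debt, corporate tax rate
v = e + d                              # total capital, about 114.7

wacc = (e / v) * re + (d / v) * rd * (1 - tc)
print(round(wacc, 4))                  # about 0.0614, i.e. roughly 6.1% per annum
```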
The above example is a simple illustration of how to calculate WACC. One may need to compute it in a more elaborate manner if the company has multiple forms of capital, each with a different cost.
For instance, if the preferred shares trade at a different price than the common shares, if the company has issued bonds of varying maturities offering different returns, or if the company holds one or more commercial loans at different interest rates, then each such component needs to be accounted for separately and added together in proportion to the capital raised.
Walmart. "2018 Annual Report," page 57. Accessed July 29, 2020.
|
CommonCrawl
|
distance between two points on a sphere calculator
When unqualified, "the" distance between two points generally means the shortest distance between them. In a plane, the distance between two points (x1, y1) and (x2, y2) is

$$ \text{Distance} = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2} $$

and in three dimensions, the distance between (x1, y1, z1) and (x2, y2, z2) is

$$ d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2} $$

For example, for the points (2, 5, 3) and (7, 4, 6):

$$ d = \sqrt{(7-2)^2 + (4-5)^2 + (6-3)^2} = \sqrt{25 + 1 + 9} = \sqrt{35} $$

On the surface of a sphere, however, the shortest distance between two points is not a straight line but the length of a geodesic. Geodesics on the sphere are circles whose centers coincide with the center of the sphere, called great circles, and the great-circle distance formula computes the length of the shortest such path. The haversine formula determines the great-circle distance between two points on a sphere given their longitudes and latitudes: for latitudes $\varphi_1, \varphi_2$, longitudes $\lambda_1, \lambda_2$, and sphere radius $r$,

$$ d = 2r \arcsin\left( \sqrt{ \sin^2\left(\frac{\varphi_2 - \varphi_1}{2}\right) + \cos\varphi_1 \cos\varphi_2 \, \sin^2\left(\frac{\lambda_2 - \lambda_1}{2}\right) } \right) $$

where r ≈ 6371 km for the Earth. The haversine formula is a re-formulation of the spherical law of cosines, but the formulation in terms of haversines is more useful for small angles and distances; it is a special case of the more general law of haversines in spherical trigonometry, which relates the sides and angles of spherical triangles. Note that the formula treats the Earth as a perfect sphere, whereas it is in fact an oblate spheroid, so the result is an approximation. For example, the distance between Big Ben in London (51.5007° N, 0.1246° W) and the Statue of Liberty in New York (40.6892° N, 74.0445° W) is about 5574.8 km. If you need this calculation in a database, MySQL 5.7 introduced ST_Distance_Sphere, a native function that computes the distance between two points on Earth.

This calculator measures two types of distance: straight-line (rhumb-line) distance and great-circle distance. Enter the coordinates of the two points (any integer, decimal, or fraction such as 3/4), and the calculator will automatically compute the distance between them.
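As a minimal sketch of the haversine calculation described above (assuming a spherical Earth of radius 6371 km; the function name is our own):

```python
from math import radians, sin, cos, asin, sqrt

def haversine(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance (km) between two points given in decimal degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Big Ben (51.5007 N, 0.1246 W) to the Statue of Liberty (40.6892 N, 74.0445 W)
print(haversine(51.5007, -0.1246, 40.6892, -74.0445))   # about 5574 km
```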
|
CommonCrawl
|
Journal of NeuroEngineering and Rehabilitation
Classification of Parkinson's disease and essential tremor based on balance and gait characteristics from wearable motion sensors via machine learning techniques: a data-driven approach
Sanghee Moon ORCID: orcid.org/0000-0003-3931-14581,2,
Hyun-Je Song3,
Vibhash D. Sharma4,
Kelly E. Lyons4,
Rajesh Pahwa4,
Abiodun E. Akinwuntan2,5 &
Hannes Devos2
Journal of NeuroEngineering and Rehabilitation volume 17, Article number: 125 (2020) Cite this article
Parkinson's disease (PD) and essential tremor (ET) are movement disorders that can have similar clinical characteristics including tremor and gait difficulty. These disorders can be misdiagnosed leading to delay in appropriate treatment. The aim of the study was to determine whether balance and gait variables obtained with wearable inertial motion sensors can be utilized to differentiate between PD and ET using machine learning. Additionally, we compared classification performances of several machine learning models.
This retrospective study included balance and gait variables collected during the instrumented stand and walk test from people with PD (n = 524) and with ET (n = 43). The performance of several machine learning techniques, including neural networks, support vector machine, k-nearest neighbor, decision tree, random forest, and gradient boosting, was compared with a dummy model and logistic regression using F1-scores.
Machine learning models classified PD and ET based on balance and gait characteristics better than the dummy model (F1-score = 0.48) or logistic regression (F1-score = 0.53). The highest F1-score was 0.61 for the neural network, followed by 0.59 for gradient boosting, 0.56 for random forest, 0.55 for support vector machine, 0.53 for decision tree, and 0.49 for k-nearest neighbor.
This study demonstrated the utility of machine learning models to classify different movement disorders based on balance and gait characteristics collected from wearable sensors. Future studies using a well-balanced data set are needed to confirm the potential clinical utility of machine learning models to discern between PD and ET.
Parkinson's disease (PD) and essential tremor (ET) are common movement disorders characterized by the presence of tremor [1]. Although ET has traditionally been considered a mono-symptomatic disorder presenting with tremor, increasing evidence suggests that ET is a complex disorder with involvement of other motor and non-motor symptoms [2]. Both PD and ET can share clinical features including motor symptoms such as bradykinesia (slow movement), gait impairment and dystonia (involuntary muscle contraction), and non-motor symptoms such as cognitive impairments, sleep disturbances, depression, and anxiety [3, 4]. Diagnosis of these disorders can be challenging for clinicians due to overlapping symptoms, and these disorders are frequently confused and misdiagnosed. A past study reported that about a third of patients with PD or dystonia were misdiagnosed with ET [5]. Since misdiagnosis can prevent or delay appropriate medical care and worsen patients' quality of life, accurate differentiation between PD and ET is important to provide optimal care.
Clinical observation of balance and gait impairments can play a major role in classifying different conditions and monitoring the progression of PD and ET. Subtle changes in gait have even been found to occur before a clinical diagnosis of PD [6, 7], Alzheimer's disease [8], or multiple sclerosis [9], suggesting gait as a potential biomarker for neurological disorders. Balance and gait impairments are more prominent and clinically observable in PD than in ET. However, there is growing evidence suggesting gait abnormalities in patients with ET [10]. Previous studies showed balance and gait abnormalities such as decreased cadence [11, 12], decreased gait speed [12], increased double support [11, 12], abnormalities in tandem gait [13,14,15], and postural instability in ET [11, 16]. These abnormalities in ET are also commonly found in PD, which contribute to misdiagnosis of the two movement disorders [5].
New technology such as video analysis, radar, sonar, and wireless sensors has emerged to assist in the differential diagnosis of PD and ET [17]. For example, ultra-wideband wireless sensors and smartphone accelerometers have been used to detect tremor in people with PD and ET [18, 19]. These new technological advances also enable objective assessment of balance and gait through numerous devices such as body-worn inertial motion unit (IMU) sensors, 3-dimensional motion capturing systems, force plates, gait walkways, and smartphones. Many movement disorder clinics and research laboratories have started to implement these technological devices in their practices [20], particularly IMU sensors to evaluate balance and gait in PD [21, 22]. Subsequently, a vast amount of complex and non-linear data from these devices is available to clinicians and researchers, and such data require advanced statistical analyses.
Machine learning is widely employed to analyze large data sets produced by movement disorder clinics and research laboratories [23]; for example, to discriminate motor symptoms [24], estimate tremor severity [25], and predict disease progression [26]. Among various machine learning techniques, neural network (NN) models have been utilized most due to their superior performance compared to traditional analytic methods such as logistic regression (LR) [27]. Previously, NNs have been employed in balance and gait studies to process signals from wearable devices in PD [28,29,30,31]. In addition, studies used NNs to successfully discern PD from ET using surface electromyography data [32] and assess tremor severity in PD [33]. Deep learning NNs have also been used as an advanced classification method to characterize PD severity [34] and movement quality in PD [35]. Other machine learning algorithms such as support vector machine (SVM) and k-nearest neighbor (kNN) have been used to differentiate between PD and ET based on IMU sensors, but they mainly investigated upper body tremors [36,37,38,39]. To our knowledge, no study has utilized machine learning techniques to differentiate between PD and ET based on balance and gait characteristics collected from wearable IMU sensors.
Therefore, this data-driven study primarily aimed to examine whether balance and gait characteristics obtained from IMU sensors can distinguish between PD and ET via machine learning. Additionally, we aimed to compare and evaluate different machine learning classification performances for differential diagnosis between PD and ET.
This retrospective database study includes a total of 1468 people tested at the Parkinson's Disease and Movement Disorder Clinic of the University of Kansas Medical Center between January 3, 2017 and December 11, 2018. We excluded people if they were diagnosed with both PD and ET (n = 29) and/or if they had a history of deep brain stimulation surgery (n = 468). For those who visited the clinic more than once during the study period (n = 628), we only included the data from their first visits. Additionally, we excluded people with no data recorded due to technical error of the measuring device (n = 65), leaving a total of 567 people with PD or ET in the study. Among those, 524 participants were clinically diagnosed with PD (age = 66.73 ± 9.17, disease duration = 8.20 ± 5.11 years), whereas 43 were diagnosed with ET (age = 66.98 ± 9.84, disease duration = 13.83 ± 13.79 years).
Protocol and materials
Participants wore six IMU sensors (Opal, APDM, Inc., Portland, OR, USA) (Fig. 1). Two wrist sensors were bilaterally placed on the dorsal side of the wrist and two foot sensors were bilaterally mounted to the instep (dorsal side of metatarsus) of each foot. The sternum sensor was mounted on the sternum of the chest and the lumbar sensor was mounted to the posterior side at the level of the L5 region. All six sensors were firmly tightened to the designated locations using straps during testing.
IMU sensor locations
The instrumented stand and walk (iSAW) test was administered (Fig. 2). During the iSAW test, participants were instructed to stand still for 30 s, walk straight for 7 m at a comfortable speed after hearing a beep, turn 180° at the 7-m marker, then walk back to the starting point. The iSAW test is a reliable and valid balance and gait measure for clinical use [40,41,42]. All participants wore a gait belt during the iSAW test with standby assistance from the examiner.
iSAW test procedure
The IMU utilized in the study contained two accelerometers (range: ± 16 g and ± 200 g, resolution: 14 and 17.5 bits, sample rate: 128 Hz), a gyroscope (range: ± 2000°/s, resolution: 16 bits, sample rate: 128 Hz), and a magnetometer (range: ± 8 G, resolution: 12 bits, sample rate: 128 Hz). A total of 130 balance and gait features were automatically computed using the Mobility Lab software (APDM, Inc., Portland, OR, USA) [42, 43]. Balance and gait features pre-processed by the Mobility Lab software have been found to be accurate compared with results from a 3-dimensional motion tracking system [44].
Pre-processing data set
A total of 130 balance and gait features were automatically computed by the Mobility Lab software. Of those, 48 features with clinical relevance were included in the study based on (1) a recent review [45] that identified the clinical balance and gait parameters most distinctly representing balance and gait and (2) the clinical expertise of the authors (Table 1 and Supplementary Table 1) [46]. Detailed graphical information about the balance and gait features can be found on the APDM company website (https://www.apdm.com/mobility/). The ratio between PD participants (n = 524, 92.5%) and ET participants (n = 43, 7.5%) was highly imbalanced in the data set. To mitigate the effect of the imbalanced data, we utilized the Synthetic Minority Over-sampling Technique (SMOTE) [47], an oversampling approach that creates synthetic minority class examples. SMOTE works by synthesizing new samples using data points in the minority data set. Briefly, this statistical algorithm selects data points that are close in the feature space, then creates a new sample at a randomly selected point between two nearest neighbors in the feature space. For missing values, which represented 0.0003% of the data set, we used a univariate feature imputation algorithm before training the classification models.
Table 1 Gait and balance features extracted from Mobility Lab software
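For readers who want to reproduce this step, the following is a minimal sketch of the pre-processing described above, assuming scikit-learn and imbalanced-learn are available; the feature matrix and labels are random placeholders standing in for the 48 selected features and the PD/ET diagnoses.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from imblearn.over_sampling import SMOTE

# Placeholder feature matrix (567 participants x 48 balance/gait features)
# and labels (0 = PD, 1 = ET), mirroring the class sizes reported above.
rng = np.random.default_rng(0)
X = rng.normal(size=(567, 48))
y = np.array([0] * 524 + [1] * 43)

# Univariate imputation for the sparse missing values, then SMOTE oversampling.
X = SimpleImputer(strategy="mean").fit_transform(X)
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(X_res.shape, np.bincount(y_res))   # classes are balanced after resampling
```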
Classification and model selection
The classification models included NN, SVM, kNN, decision tree (DT), random forest (RF), gradient boosting classifier (GB), LR, and a dummy model. The dummy model was a reference classifier that always chooses the majority class (PD) of the data set. To find the optimal values of the hyper-parameters of the classification models, we used stratified 3-fold cross validation with a grid-search strategy. Table 2 shows the hyper-parameter search space of each classification model.
Table 2 Model hyper-parameters of the classification models
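A hedged sketch of this model-selection procedure, continuing from the pre-processing sketch above; the neural network is used as the example model and the parameter grid shown is illustrative rather than the exact search space of Table 2.

```python
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stratified 3-fold cross validation with a grid search over NN hyper-parameters.
pipe = make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0))
param_grid = {
    "mlpclassifier__hidden_layer_sizes": [(32,), (64,), (64, 32)],  # illustrative values
    "mlpclassifier__alpha": [1e-4, 1e-3, 1e-2],
}
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
search = GridSearchCV(pipe, param_grid, scoring="f1", cv=cv)
search.fit(X_res, y_res)   # X_res, y_res from the SMOTE step above
print(search.best_params_, round(search.best_score_, 3))
```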
The classification models were evaluated with accuracy (the ratio of correct predictions to total observations), recall (the ratio of correctly predicted positive cases to all actual positive cases), precision (the ratio of correctly predicted positive cases to all predicted positive cases), and F1 score (the harmonic mean of precision and recall). The F1 score is the most commonly used performance metric in machine learning, especially when the data set is unevenly distributed [48]. Since the F1 score equally weights false positives and false negatives, it offers a less biased metric than accuracy [49]. Of note, all performances were micro-averaged. Accuracy, recall, precision, and the F1-score were calculated as follows (TP = true positive, TN = true negative, FP = false positive, FN = false negative):
$$ Accuracy = \frac{TP + TN}{TP + FN + TN + FP} $$
$$ Recall = \frac{TP}{TP + FN} $$
$$ Precision = \frac{TP}{TP + FP} $$
$$ F1\ score = \frac{2}{Recall^{-1} + Precision^{-1}} = 2 \times \frac{Recall \times Precision}{Recall + Precision} $$
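In code, these metrics follow directly from the confusion-matrix counts, as in this small sketch (the counts used in the final line are placeholders, not results from the study):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, recall, precision, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * recall * precision / (recall + precision)
    return accuracy, recall, precision, f1

# Placeholder counts, for illustration only.
print(classification_metrics(tp=30, tn=480, fp=44, fn=13))
```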
The results of NN, SVM, kNN, DT, RF, GB, and LR with the oversampling approach are shown in Fig. 3. With SMOTE, (1) the accuracy of the models ranged from 0.65 (kNN) to 0.89 (NN); (2) the precision was similar across the models ranging from 0.54 (SVM, kNN, DT, and LR) to 0.61 (NN); (3) the recall ranged from 0.58 (DT) to 0.63 (kNN and GB); and (4) the F1-score ranged from 0.53 (DT and LR) to 0.61 (NN). The results without the oversampling approach can be found in Supplementary Table 2.
Accuracy, Precision, Recall, and F1-score of logistic regression, support vector machine, neural network, k-nearest neighbor, decision tree, random forest, and gradient boosting
This data-driven study aimed to differentiate between two movement disorders, PD and ET, based on balance and gait characteristics collected from IMU sensors using various machine learning models. Additionally, the classification performance was compared across different machine learning models.
Recent technological advances enable clinics and research laboratories to employ wearable devices in their balance and gait assessments. This allows precise measurement of balance and gait abnormalities and accurate monitoring of physical activities of daily living. However, the data produced by technological devices are often overwhelming and under-utilized due to the size and complexity of the data [50]. The current data set provides useful clinical information for balance and gait such as gait speed, cadence, and postural sway collected by wearable devices. The current results show that machine learning models can increase the utility of clinically available data collected by technological devices to classify two movement disorders. Our machine learning models outperformed (F1-scores ranging between 0.49 and 0.61) the dummy model (F1-score = 0.48) to classify the two movement disorders. Hence, our results demonstrate that machine learning models are useful to discern PD from ET using balance and gait characteristics.
In this study, with SMOTE, the NN outperformed other models in classifying PD and ET solely based on balance and gait features. The F1-score of the NN was 0.61, the highest performance among the 8 models in the analysis. NN models typically show their robustness on large and complex data sets. Previous studies in PD have demonstrated NN as the superior machine learning technique using data collected from wearable IMU sensors in levodopa-induced dyskinesia assessment and detection [28, 29], gait abnormality classification [30], and discrimination between people with PD who underwent subthalamic stimulation and healthy controls [31]. Although the GB showed a similar performance based on the F1-score (0.59), the accuracy of GB (85%) was lower than that of the NN (89%). Accuracies of other comparison models including SVM, kNN, DT, RF, and LR ranged between 65 and 87%, and their F1-scores were lower than that of the NN. However, particularly for this data-driven study, higher accuracy does not necessarily reflect superior performance of the model, because the data set was heavily imbalanced with 92.5% of people with PD and 7.5% of people with ET. This implies that a dummy model would be 92.5% accurate in classifying PD if it categorized every case as PD. The F1-score is a more adequate measure, especially for an imbalanced data set in machine learning, because it considers both false positives and false negatives and gives more weight to correctly classified samples in the minority class [51, 52].
The current study has limitations. In our study, the data set was imbalanced towards an overrepresentation of PD. To overcome this limitation, we implemented SMOTE, a widely used oversampling method that generates synthetic minority class samples [53]. Our findings demonstrated that SMOTE increased the classification performance, based on F1-scores, in the majority of models in the study (Supplementary Table 2). This result may indicate that SMOTE was effective in minimizing the influence of the imbalanced class distribution in the current data set.
The current study used a cross-sectional design including patients with clinically diagnosed PD or ET. Results of balance and gait assessments in the clinic often show large variability. However, our study only focused on the outcome of one visit, which may not represent individuals' true balance and gait performance. Thus, future research needs to combine results from multiple visits and include a longitudinal analysis using data over time to establish the accuracy of NNs using balance and gait characteristics from wearable devices to assist in the differential diagnosis and prediction of disease progression of PD and ET.
The current study utilized balance and gait data collected from IMU sensors. Future studies combining data from IMU sensors, other sensing modalities (e.g., video, electromagnetic signal sensors), and clinical scales may allow machine learning models to provide more targeted and stratified classification (e.g., classification of PD or ET by the severity of the disease). In addition, the machine learning models derived from IMU balance and gait variables should also be employed to differentiate between different types of Parkinsonism, including progressive supranuclear palsy, multiple system atrophy, corticobasal degeneration, Lewy body dementia, or vascular Parkinsonism. One of the notable clinical features in both PD and ET is tremor [3]. Previous studies demonstrated that a smartphone accelerometer can distinguish PD and ET during upper-body tasks. Therefore, a combination of different sensor technologies that detect gait, balance, tremor, and other clinical features may offer a comprehensive and accurate diagnosis of PD and ET.
This study was performed with no control group. Future research should include a healthy control group to evaluate the accuracy of machine learning models in early diagnosis of movement disorders. The primary goal of the study was to investigate the usefulness of IMU balance and gait sensors to differentiate between PD and ET. Therefore, we did not include any participant characteristics in the machine learning models. It is likely that demographic and clinical features will improve the accuracy of the machine learning models. The current study only utilized 48 of the 130 available features in the model. The size of the current data set was not large enough to include all the features as input to the machine learning models. In addition, we manually selected those 48 features based on clinical expertise and prior knowledge [45]. A larger sample size will allow us to include all 130 features and employ different feature selection methods in machine learning such as filter-based, wrapper-based, and embedded methods [54]. These additional selection methods may minimize the loss of features that could be influenced by human bias. The black box problem of NN models also makes it challenging to confirm which features were selected into the model [55].
Lastly, unlike past studies that utilized raw signal data captured by IMU sensors [28,29,30,31], the current study utilized pre-processed data (e.g., gait speed, sway area, cadence) derived from the raw signals as input variables. We opted to use the pre-processed data since these are readily available and have been tested for reliability and validity, adding to the clinical relevance of our current findings. These pre-processed balance and gait characteristics from wearable sensors have been used to evaluate movement disorder progression, fall risks, treatment efficacy, and differences between people with movement disorders and healthy controls [21]. However, we acknowledge that using raw data adds more information to the model, since the pre-processing procedure might result in a significant loss of raw signal features directly from the IMU sensors. In general, the performance of NNs can be more precise and accurate when the model is fed more data. Thus, further examination using raw data collected from wearable IMU sensors can offer new insights that extend our current findings.
Wearable sensors for balance and gait assessments can be implemented in movement disorders clinics to produce a vast amount of potentially informative data for assisting in diagnosis and monitoring disease progression. The current study showed that the NN with SMOTE outperformed other machine learning models and traditional logistic regression in classifying PD and ET based on the pre-processed balance and gait IMU data set. With further validation, a data-driven approach using machine learning techniques may provide a more efficient diagnostic and prognostic tool that can assist in the clinicians' decision-making process.
The datasets analyzed during the current study are not publicly available.
DT:
Decision tree
GB:
Gradient boosting
IMU:
Inertial motion unit
iSAW:
Instrumented stand and walk
kNN:
K-nearest neighbor
NN:
Neural network
PCA:
Principal component analysis
RF:
Random forest
SMOTE:
Synthetic minority over-sampling technique
SVM:
Support vector machine
Thenganatt MA, Jankovic J. The relationship between essential tremor and Parkinson's disease. Parkinsonism Relat Disord. 2016;22:S162–5.
Louis ED. Essential tremor. Lancet Neurol. 2005;4(2):100–10.
Thenganatt MA, Louis ED. Distinguishing essential tremor from Parkinson's disease: bedside tests and laboratory evaluations. Expert Rev Neurother. 2012;12(6):687–96.
Bermejo-Pareja F. Essential tremor - a neurodegenerative disorder associated with cognitive defects? Nat Rev Neurol. 2011;7(5):273.
Jain S, Lo SE, Louis ED. Common misdiagnosis of a common neurological disorder: how are we misdiagnosing essential tremor? Arch Neurol. 2006;63(8):1100–4.
Brodie MA, et al. Gait as a biomarker? Accelerometers reveal that reduced movement quality while walking is associated with Parkinson's disease, ageing and fall risk. In: International Conference on Biomedical Engineering and Biotechnology: IEEE; 2014. https://pubmed.ncbi.nlm.nih.gov/25571356/.
Shah J, Virmani T. Objective gait parameters as a noninvasive biomarker for freezing of gait in Parkinson disease (P1.016). Neurology. 2017;88(16 Supplement):P1.016.
Kourtis LC, et al. Digital biomarkers for Alzheimer's disease: the mobile/wearable devices opportunity. NPJ Digit. 2019;2(1):1–9.
Heesen C, et al. Patient perception of bodily functions in multiple sclerosis: gait and visual function are the most valuable. Mult Scler J. 2008;14(7):988–91.
Hoskovcová M, et al. Disorders of balance and gait in essential tremor are associated with midline tremor and age. Cerebellum. 2013;12(1):27–34.
Earhart GM, et al. Gait and balance in essential tremor: variable effects of bilateral thalamic stimulation. Mov Disord. 2009;24(3):386–91.
Rao AK, Gillman A, Louis ED. Quantitative gait analysis in essential tremor reveals impairments that are maintained into advanced age. Gait Posture. 2011;34(1):65–70.
Singer C, Sanchez-Ramos J, Weiner WJ. Gait abnormality in essential tremor. Mov Disord. 1994;9(2):193–6.
Stolze H, et al. The gait disorder of advanced essential tremor. Brain. 2001;124(11):2278–86.
Kronenbuerger M, et al. Balance and motor speech impairment in essential tremor. Cerebellum. 2009;8(3):389–98.
Parisi SL, et al. Functional mobility and postural control in essential tremor. Arch Phys Med Rehabil. 2006;87(10):1357–64.
Lonini L, et al. Wearable sensors for Parkinson's disease: which data are worth collecting for training symptom detection models. NPJ Digit. 2018;1(1):64.
Blumrosen G, et al. Tremor Acquisition System Based on UWB Wireless Sensor Network. In: 2010 International Conference on Body Sensor Networks; 2010.
Blumrosen G, et al. A Real-Time Kinect Signature-Based Patient Home Monitoring System. Sensors (Basel). 2016;16(11):1965.
Winstein C, Requejo P. Innovative technologies for rehabilitation and health promotion: what is the evidence? Phys Ther. 2015;95(3):294–8.
Brognara L, et al. Assessing gait in Parkinson's disease using wearable motion sensors: a systematic review. Diseases (Basel). 2019;7(1):18.
Zago M, et al. Gait evaluation using inertial measurement units in subjects with Parkinson's disease. J Electromyogr Kinesiol. 2018;42:44–8.
Beam AL, Kohane IS. Big data and machine learning in health care. JAMA. 2018;319(13):1317–8.
Tsoulos IG, et al. Application of machine learning in a Parkinson's disease digital biomarker dataset using neural network construction (NNC) methodology discriminates patient motor status. Front ICT. 2019;6(10). https://www.frontiersin.org/articles/10.3389/fict.2019.00010/full.
Hssayeni MD, et al. Wearable sensors for estimation of Parkinsonian tremor severity during free body movements. Sensors (Basel). 2019;19(19):4215.
Ahmadi Rastegar D, et al. Parkinson's progression prediction using machine learning and serum cytokines. NPJ Parkinsons Dis. 2019;5(1):14.
Moon S, et al. Artificial neural networks in neurorehabilitation: A scoping review. NeuroRehabilitation. 2020;46(3):259–26. https://content.iospress.com/articles/neurorehabilitation/nre192996.
Keijsers NL, et al. Detection and assessment of the severity of levodopa-induced dyskinesia in patients with Parkinson's disease by neural networks. Mov Disord. 2000;15(6):1104–11.
Keijsers NL, Horstink MW, Gielen SC. Automatic assessment of levodopa-induced dyskinesias in daily life by neural networks. Mov Disord. 2003;18(1):70–80.
Manap HH, Tahir NM, Yassin AIM. Statistical analysis of Parkinson disease gait classification using artificial neural network. In: International Symposium on Signal Processing and Information Technology (ISSPIT). Bilbao: IEEE; 2011.
Muniz AM, et al. Assessment of the effects of subthalamic stimulation in Parkinson disease patients by artificial neural network. In: International Conference on Biomedical Engineering and Biotechnology. Minneapolis: IEEE; 2009.
Hossen A. A neural network approach for feature extraction and discrimination between Parkinsonian tremor and essential tremor. Technol Health Care. 2013;21(4):345–56.
Geman O, Costin H. Automatic assessing of tremor severity using nonlinear dynamics, artificial neural networks and neuro-fuzzy classifier. Adv Electr Comput En. 2014;14(1):133–9.
Vásquez-Correa JC, et al. Multimodal assessment of Parkinson's disease: a deep learning approach. IEEE J Biomed Health Info. 2019;23(4):1618–30.
Abrami A, et al. Using an unbiased symbolic movement representation to characterize Parkinson's disease states. Sci Rep. 2020;10(1):7377.
Aubin PM, Serackis A, Griskevicius J. Support vector machine classification of Parkinson's disease, essential tremor and healthy control subjects based on upper extremity motion. In: International Conference on Biomedical Engineering and Biotechnology: IEEE; 2012. https://ieeexplore.ieee.org/document/6245267.
Loaiza Duque JD, et al. Using machine learning and accelerometry data for differential diagnosis of Parkinson's disease and essential tremor. In: Applied Computer Sciences in Engineering. Cham: Springer International Publishing; 2019.
Surangsrirat D, et al. Support vector machine classification of Parkinson's disease and essential tremor subjects based on temporal fluctuation. In: International Conference on Biomedical Engineering and Biotechnology: United States: IEEE; 2016. https://pubmed.ncbi.nlm.nih.gov/28269710/.
Woods AM, et al. Parkinson's disease and essential tremor classification on mobile device. Pervasive Mob Comput. 2014;13:1–12.
Horak F, King L, Mancini M. Role of body-worn movement monitor technology for balance and gait rehabilitation. Phys Ther. 2015;95(3):461–70.
Horak FB, et al. Balance and gait represent independent domains of mobility in Parkinson disease. Phys Ther. 2016;96(9):1364–71.
Mancini M, et al. Mobility lab to assess balance and gait with synchronized body-worn sensors. J Bioeng Biomed Sci. 2011;(Suppl 1):007–7. https://pubmed.ncbi.nlm.nih.gov/24955286/.
Morris R, et al. Validity of mobility lab (version 2) for gait assessment in young adults, older adults and Parkinson's disease. Physiol Meas. 2019;40(9):095003.
Simoes MA. Feasibility of wearable sensors to determine gait parameters. Tampa: University of South Florida; 2011.
Muro-de-la-Herran A, Garcia-Zapirain B, Mendez-Zorrilla A. Gait analysis methods: an overview of wearable and non-wearable systems, highlighting clinical applications. Sensors (Basel). 2014;14(2):3362–94.
APDM. Whitepaper for mobility lab by APDM. Portland; 2015.
Chawla NV, et al. SMOTE: synthetic minority over-sampling technique. J Artif Intell Res. 2002;16:321–57.
He H, Ma Y. Imbalanced learning: foundations, algorithms, and applications: Wiley; 2013. https://onlinelibrary.wiley.com/doi/book/10.1002/9781118646106.
Hripcsak G, Rothschild AS. Agreement, the F-measure, and reliability in information retrieval. J Am Med Inform Assn. 2005;12(3):296–8.
Ohno-Machado L, Rowland T. Neural network applications in physical medicine and rehabilitation. Am J Phys Med Rehabil. 1999;78(4):392–8.
Derczynski L. Complementarity, F-score, and NLP Evaluation. In: Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16); 2016.
Tharwat A. Classification assessment methods. Appl Comput Inform. 2018.
Blagus R, Lusa L. SMOTE for high-dimensional class-imbalanced data. BMC Bioinformatics. 2013;14(1):106.
Jović A, Brkić K, Bogunović N. A review of feature selection methods with applications. In: 2015 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO): IEEE; 2015. https://ieeexplore.ieee.org/document/7160458.
Zednik C. Solving the black box problem: a normative framework for explainable artificial intelligence. Philos Technol. 2019. https://link.springer.com/article/10.1007/s13347-019-00382-7.
This research received a grant from the International Essential Tremor Foundation for Gait Assessment in Essential Tremor Using Wireless Sensors (V.D.S., K.L., R.P., H.D.).
Department of Physical Therapy, Ithaca College, Ithaca, NY, USA
Sanghee Moon
Department of Physical Therapy and Rehabilitation Science, University of Kansas Medical Center, Kansas City, KS, USA
Sanghee Moon, Abiodun E. Akinwuntan & Hannes Devos
Department of Information Technology, Jeonbuk National University, Jeonju, South Korea
Hyun-Je Song
Department of Neurology, University of Kansas Medical Center, Kansas City, KS, USA
Vibhash D. Sharma, Kelly E. Lyons & Rajesh Pahwa
Office of the Dean, School of Health Professions, University of Kansas Medical Center, Kansas City, KS, USA
Abiodun E. Akinwuntan
Vibhash D. Sharma
Kelly E. Lyons
Rajesh Pahwa
Hannes Devos
Conceptualization, S.M., H.D., V.D.S., K.E.L. and R.P.; methodology, S.M., H-J.S. and H.D.; software, S.M. and H-J.S.; formal analysis, S.M. and H-J.S.; investigation, S.M., H-J.S., V.D.S., K.E.L., R.P., A.E.A. and H.D.; resources, V.D.S., K.E.L. and R.P.; data curation, S.M. and H-J.S.; writing—original draft preparation, S.M.; writing—review and editing, H-J.S., H.D., A.E.A., V.D.S., K.E.L. and R.P.; supervision, H.D.; project administration, S.M.; funding acquisition, V.D.S. All authors have read and agreed to the published version of the manuscript.
Correspondence to Sanghee Moon.
The study was approved by the University of Kansas Medical Center Institutional Review Board (#12351 - Movement Disorder Research Registry and #STUDY00143000 - Gait assessment in Movement Disorders). The data set used in this retrospective database study was collected as part of routine patient care after patient consent.
Additional file 1: Supplementary Table 1. Balance and gait features and description.
Additional file 2: Supplementary Table 2. Performance across different machine learning models with and without SMOTE.
Moon, S., Song, HJ., Sharma, V.D. et al. Classification of Parkinson's disease and essential tremor based on balance and gait characteristics from wearable motion sensors via machine learning techniques: a data-driven approach. J NeuroEngineering Rehabil 17, 125 (2020). https://doi.org/10.1186/s12984-020-00756-5
Is there a recognition principle for $\mathbb{E}_{\infty}$-spaces with zero?
A commutative monoid with zero is a commutative monoid $A$ together with an element $0_{A}$ such that $0_{A}a=a0_{A}=0_{A}$ for all $a\in A$. They are precisely the monoids (in the sense of monoidal categories) in the category of pointed sets, equipped with the smash product.
Passing to the derived world, commutative monoids get replaced by $\mathbb{E}_{\infty}$-monoids in spaces, while pointed sets get replaced by pointed spaces. So the natural analogue of a commutative monoid with zero in homotopy theory should be an $\mathbb{E}_{\infty}$-monoid in the symmetric monoidal $\infty$-category $(\mathcal{S}_*,\wedge,S^0)$ of pointed spaces, called an $\mathbb{E}_{\infty}$-space with zero.
Question. The May recognition principle states that a space is (weakly equivalent to) an infinite loop space iff it is a grouplike $\mathbb{E}_{\infty}$-monoid in spaces. Is there a recognition principle for $\mathbb{E}_{\infty}$-spaces with zero?
(Or of a subclass of them, such as some appropriate version of "grouplike" that works for the non-Cartesian monoidal $\infty$-category $\mathcal{S}_*$)
at.algebraic-topology homotopy-theory higher-algebra
Théo
How do you make sense of $E_\infty$-group in a non-cartesian monoidal ($\infty$-, but it is irrelevant for my question) category? For $E_\infty$-monoids, this is a good question but I'm not sure there's a more satisfying answer than "they are $E_\infty$-monoids in that category"... Unlike for sets, the basepoint here need not be "added". Examples of such monoids are of course given by the multiplicative structure on $\Omega^\infty E$, for $E$ a commutative ring spectrum
– Maxime Ramzi
@MaximeRamzi About $\mathbb{E}_{\infty}$-groups, I've been thinking about this, though I'm not yet sure: in the $1$-categorical case, it makes sense to speak of "group objects in $\mathsf{Sets}_*$" (as a property of monoids in $\mathsf{Sets}_*$), even though it is non-Cartesian: by a result of Péroux and Shipley (Lemma 2.4 of arXiv:1708.02592), every comonoid in $(\mathsf{Sets}_*,\wedge,S^0)$ comes uniquely from a comonoid in $(\mathsf{Sets},\times,\mathrm{pt})$, freely adjoined with a basepoint.
– Théo
So, any monoid $A$ in $\mathsf{Sets}_*$ can be made into a bimonoid in it in a unique way, and since antipodes are also unique if they exist, the statement that $A$ has a Hopf monoid structure becomes a property, rather than extra structure. So in this sense we may say that $A$ is a group object in $\mathsf{Sets}_*$ iff it is a Hopf monoid in $\mathsf{Sets}_*$.
– Théo
I'm unsure if this is still true in the $\infty$-categorical case, though. (I asked this as a separate question here). If it doesn't hold, then we can also just consider $\mathbb{E}_{\infty}$-Hopf monoids in $\mathcal{S}_*$, rather than group objects there (though this would make things less interesting, I think :/)
– Théo
@Théo Yes, the functor $Ω^∞$ is lax symmetric monoidal (since it is the right adjoint to the symmetric monoidal functor $Σ^∞$), and so it sends $\mathcal{O}$-algebras in $\operatorname{Sp}$ to $\mathcal{O}$-algebras in $\mathcal{S}_\ast$ for every $\infty$-operad $\mathcal{O}$. Note that this gives a negative answer to your question about the characterization of $E_\infty$-monoids in $\mathcal{S}_\ast$, since $\Omega^\infty E$ usually is not of the form $X_+$ for any $X$ (as all connected components are equivalent as spaces)
– Denis Nardin
My apologies for answering my own question, as well as for the many edits. I've found answers to all but one of the original five questions; the questions and the answers are both reproduced below.
Has the notion of an $\mathbb{E}_{\infty}$-monoid in pointed spaces been studied before, particularly in the classical homotopy theory literature, and certainly under another name?
A natural example of an $\mathbb{E}_\infty$-monoid in pointed spaces is $\Omega^\infty E$ for $E$ any $\mathbb{E}_{\infty}$-ring spectrum, as pointed out by Maxime Ramzi in the comments. What are some other nice examples?
The $\infty$-category of $\mathbb{E}_\infty$-spaces admits a tensor product $\otimes_\mathbb{F}$, having unit the geometric realisation $$|\mathrm{N}_{\bullet}(\mathbb{F})|\cong\coprod_{n=0}^{\infty}\mathbf{B}\Sigma_{n}$$ of the groupoid of finite sets and permutations $\mathbb{F}$. Is there such a tensor product in the $\infty$-category of $\mathbb{E}_{\infty}$-monoids in $\mathcal{S}_*$, and, if so, what is its monoidal unit?
The $\infty$-category of $\mathbb{E}_{\infty}$-spaces admits a number of point-set models, such as special $\Gamma$-spaces, commutative monoids in $*$-modules, and commutative monoids in $\mathcal{I}$-spaces. Similarly, the $\infty$-category of spectra has $\mathbb{S}$-modules, symmetric spectra, and orthogonal spectra as point-set models (there's also very special $\Gamma$-spaces for connective spectra). Does the symmetric monoidal $\infty$-category $\mathsf{Mon}_{\mathbb{E}_{\infty}}(\mathcal{S}_*)$ also admit point-set models, in the sense of a presentation by a monoidal model category?
Yes. These are called "$\mathbb{E}_{\infty}$-spaces with zero" or "based $\mathbb{E}_{\infty}$-spaces" in Rognes, Topological logarithmic structures, Remark 6.12. There are also other names depending on which model (if any) one chooses for them, which include "$\mathcal{L}_0$-spaces" and "commutative based $\mathcal{I}$–space monoids" in loc. cit, as well as "$\mathcal{O}$-spaces with zero" or "$\mathcal{O}_0$-spaces" in May–Quinn–Ray–Tornehave's, $\mathrm{E}_\infty$ Ring Spaces and $\mathrm{E}_\infty$ Ring Spectra, Section IV.1, for $\mathcal{O}$ an $\mathbb{E}_{\infty}$-operad.
Geometric realisations of nerves of symmetric monoidal categories provide examples of $\mathbb{E}_{\infty}$-spaces. Similarly, geometric realisations of nerves of "symmetric monoidal categories with zero" give another class of examples of $\mathbb{E}_{\infty}$-spaces with zero. Indeed:
Define a monoidal category with zero to be an $\mathbb{E}_{1}$-monoid in $(\mathrm{N}^\mathsf{D}_{\bullet}(\mathsf{Cats}_{\mathsf{2},*}),\wedge,\mathsf{pt}\coprod\mathsf{pt})$, where $\mathsf{Cats}_{\mathsf{2},*}$ is the $2$-category of small (weakly-)pointed categories;
Concretely, such an object corresponds to a monoidal category $(\mathcal{C},\otimes,\mathbf{1}_{\mathcal{C}})$ equipped with an object $\mathbf{0}_{\mathcal{C}}$ and natural families of isomorphisms \begin{align*} \delta^{\ell,\mathbf{0}}_{A} &\colon \mathbf{0}_{\mathcal{C}}\otimes A \longrightarrow \mathbf{0}_{\mathcal{C}},\\ \delta^{r,\mathbf{0}}_{A} &\colon A\otimes \mathbf{0}_{\mathcal{C}} \longrightarrow \mathbf{0}_{\mathcal{C}}, \end{align*} called the left and right annihilators of $\mathcal{C}$, satisfying certain coherence conditions.
Similarly, braided and symmetric monoidal categories with zero are $\mathbb{E}_{2}$- and $\mathbb{E}_{3}$($=\mathbb{E}_{4}=\cdots=\mathbb{E}_{\infty}$-)monoids in $(\mathrm{N}^\mathsf{D}_{\bullet}(\mathsf{Cats}_{\mathsf{2},*}),\wedge,\mathsf{pt}\coprod\mathsf{pt})$. They are described as above, but there are a few extra coherence conditions combining braidings and left/right annihilators.
For example, any (braided, symmetric) bimonoidal category gives rise to such a (braided, symmetric) monoidal category with zero by forgetting the additive monoidal structure.
By the symmetric monoidality of nerves and geometric realisations, it follows that, for any symmetric monoidal category with zero $\mathcal{C}$, the space $|\mathrm{N}_{\bullet}(\mathcal{C})|$ is an $\mathbb{E}_{\infty}$-space with zero.
So for instance each of the examples here give also examples of $\mathbb{E}_{\infty}$-spaces with zero, and there are of course many more.
Yes, the $\infty$-category $\mathsf{Mon}_{\mathbb{E}_{\infty}}(\mathcal{S}_*)$ has a canonical monoidal structure obtained by applying Gepner–Groth–Nikolaus, Theorem 5.1 to $\mathcal{C}=\mathcal{S}_*$. By the same result, the unit for this tensor product is the free $\mathbb{E}_{\infty}$-space with zero on $S^0$. While the free $\mathbb{E}_{\infty}$-space on $*$ is computed to be \begin{align*} \coprod_{n=0}^{\infty}*^{\times n}_{\mathsf{h}\Sigma_{n}} &\simeq \coprod_{n=0}^{\infty}\mathbf{E}\Sigma_{n}\times_{\Sigma_{n}}* \\ &\simeq \coprod_{n=0}^{\infty}\mathbf{B}\Sigma_{n}\\ &\cong |\mathrm{N}_{\bullet}(\mathbb{F})|, \end{align*} the classifying space of the groupoid of finite sets and permutations $\mathbb{F}$, we see that the free $\mathbb{E}_{\infty}$-space with zero on $S^0$ is given by \begin{align*} \bigvee_{n=0}^{\infty}((S^{0})^{\wedge n})_{\mathsf{h}\Sigma_{n}} &\simeq \bigvee_{n=0}^{\infty}(S^{0})_{\mathsf{h}\Sigma_{n}}\\ &\simeq \bigvee_{n=0}^{\infty}\mathbf{E}\Sigma_{n,+}\times_{\Sigma_{n}}S^0,\\ &\simeq \bigvee_{n=0}^{\infty}(\mathbf{B}\Sigma_{n})_+\\ &\simeq \left(\coprod_{n=0}^{\infty}\mathbf{B}\Sigma_{n}\right)_+\\ &\simeq |\mathrm{N}_{\bullet}(\mathbb{F})|_+\\ &\simeq |\mathrm{N}_{\bullet}(\mathbb{F}_+)|, \end{align*} where $\mathbb{F}_+$ is the symmetric monoidal category with zero obtained by freely adjoining an absorbing element $-\infty$ to $\mathbb{F}$ and extending the rest of the symmetric monoidal category structure of $\mathbb{F}$ to $\mathbb{F}_+$ suitably (in particular by defining $\langle n\rangle\oplus-\infty\overset{\mathrm{def}}{=}-\infty$ for all $\langle n\rangle\in\mathrm{Obj}(\mathbb{F}_+)$).
We can just consider pointed versions of models for $\mathbb{E}_{\infty}$-spaces:
A model for $\mathbb{E}_{\infty}$-spaces is given by special $\Gamma$-spaces, which are certain pointed functors $E\colon(\Gamma^{\mathsf{op}},\langle0\rangle)\to(\mathsf{Sets},*)$. We can (probably) obtain a model for $\mathbb{E}_{\infty}$-spaces with zero by considering instead special pointed functors $E\colon(\Gamma^{\mathsf{op}},\langle0\rangle)\to(\mathsf{Sets}_*,S^0)$.
A model for $\mathbb{E}_{\infty}$-spaces is given by commutative monoids in $\mathcal{I}$-spaces. Similarly, commutative monoids in pointed $\mathcal{I}$-spaces give a model for $\mathbb{E}_\infty$-spaces with zero. This is observed in Rognes, Topological logarithmic structures, Remark 6.12.
A model for $\mathbb{E}_{\infty}$-spaces is given by commutative monoids in $*$-modules. Similarly to the case of $\mathcal{I}$-spaces above, commutative monoids in "pointed $*$-modules" form a model for $\mathbb{E}_\infty$-spaces with zero as well.
The global dynamics in a wild-type and drug-resistant HIV infection model with saturated incidence
Wei Chen1,
Nafeisha Tuerxun1 &
Zhidong Teng1
Advances in Difference Equations volume 2020, Article number: 25 (2020) Cite this article
In this paper we investigate the global dynamics of an HIV virus infection model with saturated incidence. The model includes two viral strains: one is wild-type (i.e. drug-sensitive) and the other is drug-resistant. The wild-type strain can mutate and become drug-resistant during the process of reverse transcription. The nonnegativity and boundedness of solutions are established. The basic reproduction numbers of the two strains and the existence of equilibria are also obtained. The threshold criteria on the local and global stability of equilibria and on the uniform persistence of the model are established by using the linearization method, constructing suitable Lyapunov functions, and applying the theory of persistence in dynamical systems. Moreover, the mathematical analysis and numerical examples show that the model may have a positive equilibrium which is globally asymptotically stable.
It is well known that mathematical models describing the dynamical behaviors of virus infection play an important role in understanding the mechanism of virus diffusion. There has been much interest in mathematical modeling of within-host viral dynamics. Hence, research on virus dynamics with specific immune response, which can control virus propagation, has drawn significant attention [1–6]. A few years ago, Perelson et al. [7] constructed a model that has been widely adopted to describe the plasma viral load in HIV-infected patients as follows:
$$ \textstyle\begin{cases} \frac{\mathrm {d}T(t)}{\mathrm {d}t}= \lambda -dT-kVT, \\ \frac{\mathrm {d}T_{s}(t)}{\mathrm {d}t}= kVT-\delta T_{s}, \\ \frac{\mathrm {d}V(t)}{\mathrm {d}t}= N\delta T_{s}-cV. \end{cases} $$
Treating HIV-infected patients with a combination of several antiretroviral drugs usually contributes to a substantial decline in viral load and an increase in \(\mathit{CD}_{4}^{+}\) T cells. Nevertheless, there is a reasonable chance that drug-resistant variants of HIV preexist even before the initiation of therapy, since a single mutation, or a number of mutation combinations, can result in drug resistance, as shown by Ribeiro and Bonhoeffer in 2000 (see [8, 9]). In order to study the mechanism of the emergence of drug resistance during the treatment of HIV-infected patients, a dynamical model including wild-type and drug-resistant strains was proposed by Rong et al. in [9] as follows:
$$ \textstyle\begin{cases} \frac{\mathrm {d}T(t)}{\mathrm {d}t}= \lambda -dT-k_{s}V_{s}T-k_{r}V_{r}T,\\ \frac{\mathrm {d}T_{s}(t)}{\mathrm {d}t}=(1-u)k_{s}V_{s}T-\delta T_{s},\\ \frac{\mathrm {d}V_{s}(t)}{\mathrm {d}t}=N_{s}\delta T_{s}-cV_{s},\\ \frac{\mathrm {d}T_{r}(t)}{\mathrm {d}t}=uk_{s}V_{s}T+k_{r}V_{r}T-\delta T_{r}, \\ \frac{\mathrm {d}V_{r}(t)}{\mathrm {d}t}=N_{r}\delta T_{r}-cV_{r}. \end{cases}\tag{1} $$
Usually the rate of infection in most HIV-1 models is assumed to be bilinear in the virus and the uninfected cells. However, the actual incidence rate is probably not linear over the entire range of virus and the uninfected cells. Thus, it is reasonable to assume that the infection rate of HIV-1 is given by the Beddington–DeAngelis functional response [10], which was introduced by Beddington [11] and DeAngelis et al. [12]. For a specific nonlinear incidence rate, we consider the following HIV-1 infection model with saturated incidence:
$$ \textstyle\begin{cases} \frac{\mathrm {d}T(t)}{\mathrm {d}t}= \lambda -dT-\frac{k_{s}V_{s}T}{1+\omega _{1}V_{s}}-\frac{k_{r}V_{r}T}{1+\alpha _{1}V_{r}}, \\ \frac{\mathrm {d}T_{s}(t)}{\mathrm {d}t}= (1-u)\frac{k_{s}V_{s}T}{1+\omega _{1}V_{s}}-\delta T_{s}, \\ \frac{\mathrm {d}V_{s}(t)}{\mathrm {d}t}= N_{s}\delta T_{s}-cV_{s}, \\ \frac{\mathrm {d}T_{r}(t)}{\mathrm {d}t}= u \frac{k_{s}V_{s}T}{1+\omega _{1}V_{s}}+\frac{k_{r}V_{r}T}{1+\alpha _{1}V_{r}}-\delta T_{r}, \\ \frac{\mathrm {d}V_{r}(t)}{\mathrm {d}t}= N_{r}\delta T_{r}-cV_{r}. \end{cases}\tag{2} $$
The biological significance of variables and parameters in model (2) is given in Table 1.
Table 1 Biological significance of variables and parameters
In model (2), the parameter u (\(0< u<1\)) is the conversion fraction at which cells infected by the wild-type strain mutate and become drug-resistant during the process of reverse transcription of viral RNA into proviral DNA (SR conversion, for short). It should be noted that the backward mutation from the drug-resistant to the wild-type strain is neglected since the wild-type virus dominates the population before the initiation of therapy (see [13, 14]). The terms \(\frac{k_{s}V_{s}T}{1+\omega _{1}V_{s}}\) and \(\frac{k_{r}V_{r}T}{1+\alpha _{1}V_{r}}\) represent the saturated incidence for the viruses \(V_{s}\) and \(V_{r}\), where \(\omega _{1}\) and \(\alpha _{1}\) are nonnegative constants. When \(\omega _{1}=0\) or \(\alpha _{1}=0\), the corresponding incidence reduces to the bilinear incidence for \(V_{s}\) or \(V_{r}\).
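For numerical experimentation, model (2) can be transcribed directly into code. The sketch below is an illustrative implementation and not part of the original paper; the function name hiv_rhs and the parameter names are ours (the rate λ is written as lam because lambda is reserved in Python).

```python
# Right-hand side of model (2); parameter meanings follow Table 1.
def hiv_rhs(t, state, lam, d, ks, kr, u, delta, Ns, Nr, c, omega1, alpha1):
    T, Ts, Vs, Tr, Vr = state
    inc_s = ks * Vs * T / (1 + omega1 * Vs)   # saturated incidence, wild-type strain
    inc_r = kr * Vr * T / (1 + alpha1 * Vr)   # saturated incidence, drug-resistant strain
    dT  = lam - d * T - inc_s - inc_r
    dTs = (1 - u) * inc_s - delta * Ts
    dVs = Ns * delta * Ts - c * Vs
    dTr = u * inc_s + inc_r - delta * Tr
    dVr = Nr * delta * Tr - c * Vr
    return [dT, dTs, dVs, dTr, dVr]
```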
In [9], model (1) with bilinear incidence was investigated. The authors only obtained the existence and local stability of the infection-free equilibrium, the equilibria with only wild-type virus or only drug-resistant virus, and the coexistence equilibrium (see Propositions 1 and 2 in [9]). It is well known that in many realistic infectious diseases nonlinear incidence rates play very important roles, and the global dynamics of the model, including the global asymptotic stability of equilibria, the uniform persistence, etc., also needs to be investigated in detail. In [5, 15], the global dynamics of virus infection models with nonlinear incidence rates is discussed. Therefore, in this paper we study a wild-type and drug-resistant HIV infection model with saturated incidence. We establish a series of threshold criteria for the local and global asymptotic stability of the infection-free and drug-resistant strain infection equilibria, and for the uniform persistence of HIV infection.
The organization of this paper is as follows. In Sect. 2, the nonnegativity and boundedness of solutions are established, and then the basic reproduction numbers of two strains and the existence of equilibria are obtained. In Sect. 3, the main theorems on the local and global stability of equilibria of model (2) are stated and proved. In Sect. 4, the uniform persistence of model (2) is also investigated. In Sect. 5, some numerical examples are given to illustrate our main results. In the last section, a brief conclusion is presented.
For any integer \(n>0\), denote \(R^{n}_{+}=\{(x_{1},x_{2},\ldots,x_{n}) \in R^{n}: x_{i}\geq 0, i=1,2,\ldots,n\}\). The initial condition for model (2) is given by
$$ \bigl(T(0),T_{s}(0),V_{s}(0),T_{r}(0),V_{r}(0) \bigr)=(T_{0},T_{s0},V_{s0},T_{r0},V_{r0})\in R^{5}_{+}.\tag{3} $$
Firstly, on the positivity and boundedness of solutions for model (2), we have the following result.
Theorem 1
The solution \((T(t),T_{s}(t),V_{s}(t),T_{r}(t),V_{r}(t))\) of model (2) with initial condition (3) is defined for all \(t\in [0,\infty )\) and is nonnegative and ultimately bounded.
On the nonnegativity of solutions, by the continuity of solutions with respect to initial values, we only need to prove that, for any positive initial value \((T_{0},T_{s0},V_{s0},T_{r0},V_{r0})\), the solution \((T(t),T_{s}(t),V_{s}(t),T_{r}(t),V_{r}(t))\) with initial condition (3) is also positive for any \(t>0\) in the definition interval. From the first equation of model (2), we have
$$ \frac{\mathrm {d}T(t)}{\mathrm {d}t}>-\biggl(d+\frac{k_{s}V_{s}}{1+\omega _{1}V_{s}}+\frac{k _{r}V_{r}}{1+\alpha _{1}V_{r}}\biggr)T(t). $$
Hence, as \(T_{0}>0\), we directly have \(T(t)>0\) for any \(t>0\) in the definition interval.
Define \(m(t)=\min \{T_{s}(t),V_{s}(t),T_{r}(t),V_{r}(t)\}\). Obviously, \(m(0)=\min \{T_{s}(0),V_{s}(0),T_{r}(0),V_{r}(0)\}>0\). By the continuity of solutions there exists \(\delta >0\) such that \(m(t)>0\) when \(t\in [0,\delta )\). We only need to prove \(m(t)>0\) for all \(t\geq 0\) in the definition interval. Suppose that there exists \(t^{*}>0\) such that \(m(t^{*})=0\) and \(m(t)>0\) for all \(t\in [0,t^{*})\). Then there exist the following cases: (1) \(m(t^{*})=T_{s}(t^{*})\), (2) \(m(t^{*})=V_{s}(t^{*})\), (3) \(m(t^{*})=T_{r}(t^{*})\), and (4) \(m(t^{*})=V_{r}(t^{*})\).
For case (1), according to \(m(t)>0\) for all \(t\in [0,t^{*})\), from the second equation of model (2), we know \(\frac{\mathrm {d}T_{s}(t)}{\mathrm {d}t}>- \delta T_{s}\). Thus, \(T_{s}(t)>T_{s}(0)e^{-\delta t}\) for any \(t\in [0,t^{*})\). Taking \(t \rightarrow t^{*}\), then \(0=T_{s}(t^{*}) \geq T_{s}(0)e^{-\delta {t^{*}}}>0\), which leads to a contradiction. Similarly, we can get the contradiction for cases (2), (3) and (4). Therefore, \((T(t),T_{s}(t),V_{s}(t),T_{r}(t),V_{r}(t))\) is positive for all \(t\geq 0\) in the definition interval.
Define a Lyapunov function
$$ W(t)=T(t)+T_{s}(t)+\frac{1}{2N_{s}}V_{s}(t)+T_{r}(t)+ \frac{1}{2N_{r}}V _{r}(t). $$
$$\begin{aligned} \frac{\mathrm {d}W(t)}{\mathrm {d}t}={}&\lambda -dT-\frac{1}{2}\delta T_{s}- \frac{c}{2N _{s}}V_{s}-\frac{1}{2}\delta T_{r}- \frac{c}{2N_{r}}V_{r}\leq \lambda -nW(t), \end{aligned}$$
where \(n=\min \{d,\frac{\delta }{2},c\}\). Since solution \(U(t)\) of the comparison equation
$$ \frac{\mathrm {d}U(t)}{\mathrm {d}t}=\lambda -nU(t) $$
with initial condition \(U(0)=U_{0}\geq 0\) is defined for all \(t\in [0,\infty )\) and satisfies \(\lim_{t\rightarrow \infty }U(t)=\frac{ \lambda }{n}\), by the comparison principle, we directly have that \(W(t)\) is bounded, and hence solution \((T(t),T_{s}(t),V_{s}(t),T_{r}(t),V _{r}(t))\) is also bounded. Thus, \((T(t),T_{s}(t),V_{s}(t),T_{r}(t),V _{r}(t))\) can be defined for all \(t\in [0,\infty )\). Furthermore, since \(W(t)\leq U(t)\) as \(W(0)\leq U(0)\), we obtain that \(\limsup_{t\rightarrow \infty } W(t)\leq \lim_{t\rightarrow \infty }U(t)=\frac{ \lambda }{n}\). This implies that the solution \((T(t),T_{s}(t),V_{s}(t),T _{r}(t), V_{r}(t))\) is also ultimately bounded. This completes the proof. □
Following the concept of the basic reproductive number for an epidemic disease presented in [16], we define the wild-type strain infection reproduction number \(R_{s}\) and the drug-resistant strain infection reproduction number \(R_{r}\) as follows:
$$ R_{s}=\frac{k_{s}N_{s}\lambda }{dc},\quad\quad R_{r}= \frac{k_{r}N_{r}\lambda }{dc}. $$
The fraction \(\frac{1}{c}\) gives the average life-span of a virus of strain i (\(i=r,s\)). \(\frac{\lambda }{d}\) is the steady-state target cell density at the beginning of the strain i infection process (i.e. near the infection-free steady state). \(k_{i}N_{i}\) gives the magnitude of virus particles produced by one strain i infectious (virus-producing) cell during its average survival time. Multiplying these quantities together gives the expected number of newly infected cells produced by a single newly infected cell of strain i, that is, \(R_{i}\).
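As a quick illustration of these formulas, the following snippet (ours, not part of the original paper) evaluates \(R_{s}\) and \(R_{r}\) for the parameter values used in Example 1 of the numerical section.

```python
# Reproduction numbers R_s = k_s N_s lambda/(d c) and R_r = k_r N_r lambda/(d c)
# evaluated with the Example 1 parameter set.
lam, d, c = 1e5, 0.1, 11
ks, Ns = 1.0e-8, 2000
kr, Nr = 1.0e-8, 900

Rs = ks * Ns * lam / (d * c)
Rr = kr * Nr * lam / (d * c)
print(Rs, Rr)   # approximately 1.818 and 0.818, matching Example 1
```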
Now, we discuss the equilibrium of model (2). The equilibrium can be given from the following equations:
$$ \textstyle\begin{cases} \lambda -dT-\frac{k_{s}V_{s}T}{1+\omega _{1}V_{s}}-\frac{k_{r}V_{r}T}{1+\alpha _{1}V_{r}}=0, \\ (1-u)\frac{k_{s}V_{s}T}{1+\omega _{1}V_{s}}-\delta T_{s}=0, \\ N_{s}\delta T_{s}-cV_{s}=0, \\ u\frac{k_{s}V_{s}T}{1+\omega _{1}V_{s}}+\frac{k_{r}V_{r}T}{1+\alpha _{1}V_{r}}-\delta T_{r}=0, \\ N_{r}\delta T_{r}-cV_{r}=0. \end{cases}\tag{4} $$
Obviously, model (2) always has a unique infection-free equilibrium \(E_{0}=(\frac{\lambda }{d},0,0,0,0)\). When \(T_{s}>0\) and \(T_{r}=0\), from (4) we directly have \(V_{r}=0\) and \(V_{s}T=0\), and then \(T_{s}=0\), which leads to a contradiction. When \(T_{s}=0\) and \(T_{r}>0\), from (4) we can obtain that if \(R_{r}>1\), then model (2) has a unique boundary equilibrium \(E_{r}=(T_{1},0,0,T_{r1},V_{r1})\) with
$$ T_{1}=\frac{\lambda (k_{r}+d\alpha _{1}R_{r})}{dR_{r}(k_{r}+d\alpha _{1})}, \quad\quad T_{r1}= \frac{dc(R_{r}-1)}{N_{r}\delta (k_{r}+d\alpha _{1})}, \quad\quad V_{r1}=\frac{d(R_{r}-1)}{k_{r}+d\alpha _{1}}, $$
and if \(R_{r}\leq 1\), then \(E_{r}\) does not exist. When \(T_{s}>0\) and \(T_{r}>0\), from (4) we can obtain that
$$ T_{r}=\frac{\lambda }{\delta }-T_{s}-\frac{\lambda (1+\omega _{1}\frac{ \delta }{c}N_{s}T_{s})}{\delta (1-u)R_{s}}:=T_{r}(T_{s}) $$
$$ T_{s}=\frac{\frac{\lambda }{\delta }((1-u)R_{s}-1)+(((1-u)R_{s}-1) \alpha _{1}\frac{\lambda }{c}N_{r}-R_{r})T_{r}}{R_{s}+\omega _{1}\frac{ \lambda }{c}N_{s} +((R_{s}+\omega _{1}\frac{\lambda }{c}N_{s})\alpha _{1}\frac{\delta }{c}N_{r}+\omega _{1}\frac{\delta }{c}N_{s}R_{r})T _{r}}:=T_{s}(T_{r}). $$
Clearly, functions \(T_{r}(T_{s})\) and \(T_{s}(T_{r})\) are decreasing in \(T_{s}\geq 0\) and \(T_{r}\geq 0\), respectively. We have
$$\begin{aligned}& T_{r}(0)=\frac{\lambda }{\delta R_{s}(1-u)}\bigl((1-u)R_{s}-1\bigr), \quad \quad T_{r}(+\infty )=-\infty , \\& T_{s}(0)=\frac{\frac{\lambda }{\delta }((1-u)R_{s}-1)}{R_{s}+\omega _{1}\frac{\lambda }{c}N_{s}}, \quad\quad T_{s}(+\infty )= \frac{((1-u)R_{s}-1) \alpha _{1}\frac{\lambda }{c}N_{r}-R_{r}}{(R_{s}+\omega _{1}\frac{ \lambda }{c}N_{s})\alpha _{1}\frac{\delta }{c}N_{r}+\omega _{1}\frac{ \delta }{c}N_{s}R_{r}}. \end{aligned}$$
Furthermore, from \(T_{r}(T_{s})=0\) and \(T_{s}(T_{r})=0\), we obtain
$$ T_{s}^{*}=\frac{\lambda ((1-u)R_{s}-1)}{\delta (1-u)R_{s}+\lambda \omega _{1}\frac{\delta }{c}N_{s}}, \quad\quad T_{r}^{*}=\frac{\frac{\lambda }{ \delta }((1-u)R_{s}-1)}{R_{r}-((1-u)R_{s}-1)\alpha _{1} \frac{\lambda }{c}N_{r}}. $$
It is easy to verify that \(T_{s}(0)<T_{s}^{*}\) when \((1-u)R_{s}>1\). Moreover, the requirement \(T_{r}(0)<T_{r}^{*}\) leads us to the threshold value
$$ R_{ru}(R_{s}):=(1-u)R_{s}+\bigl((1-u)R_{s}-1\bigr)\alpha _{1}\frac{\lambda }{c}N_{r}. $$
Hence, when \((1-u)R_{s}>1\) and \(R_{r}<R_{ru}(R_{s})\), the curves \(T_{r}(T_{s})\) and \(T_{s}(T_{r})\) have a unique intersection point \((T_{sc},T_{rc})\) in the positive quadrant, which means that \(E_{c}=(T_{c},T_{sc},V_{sc},T_{rc},V_{rc})\) is the unique positive equilibrium of model (2). Thus, we finally have the following results.
Theorem 2
(i) Model (2) always has a unique infection-free equilibrium \(E_{0}\);
(ii) If \(R_{r}>\max \{1,(1-u)R_{s}+((1-u)R_{s}-1)\alpha _{1}\frac{\lambda }{c}N_{r}\}\), then model (2) only has equilibria \(E_{0}\) and \(E_{r}\);
(iii) If \((1-u)R_{s}>1\geq R_{r}\), then model (2) only has equilibria \(E_{0}\) and \(E_{c}\);
(iv) If \((1-u)R_{s}+((1-u)R_{s}-1)\alpha _{1}\frac{\lambda }{c}N_{r}>R_{r}>1\), then model (2) has three equilibria \(E_{0}\), \(E_{r}\), and \(E_{c}\).
The existence of equilibria for model (2) is also intuitively expressed in Fig. 1. From Theorem 2 and Fig. 1, we can find that the saturated coefficient \(\omega _{1}\) of wild-type virus \(V_{s}\) has no effect on Fig. 1. Along with the decreasing of saturated coefficient \(\alpha _{1}\), the orange region will shrink, and it finally becomes the region \(\{(R_{s},R_{r}): (1-u)R_{s}>R_{r}>1\}\) as \(\alpha _{1}\to 0\). On the contrary, along with the increasing of saturated coefficient \(\alpha _{1}\), the orange region will enlarge, and it finally becomes the region \(\{(R_{s},R_{r}): (1-u)R_{s}>1, R_{r}>1\}\) as \(\alpha _{1} \to +\infty \).
The existence of equilibria of model (2)
Stability of equilibria
Let \(E=(T,T_{s},V_{s},T_{r},V_{r})\) be any equilibrium of model (2). By calculating, we get that the Jacobian matrix at equilibria E is
$$ J(E)= \begin{pmatrix} -d-\frac{k_{s}V_{s}}{1+\omega _{1}V_{s}}-\frac{k_{r}V_{r}}{1+\alpha _{1}V_{r}} & 0 & -\frac{k_{s}T}{(1+\omega _{1}V_{s})^{2}} & 0 & -\frac{k_{r}T}{(1+\alpha _{1}V_{r})^{2}} \\ (1-u)\frac{k_{s}V_{s}}{1+\omega _{1}V_{s}} & -\delta & (1-u)\frac{k_{s}T}{(1+\omega _{1}V_{s})^{2}} & 0 & 0 \\ 0 & N_{s}\delta & -c & 0 & 0 \\ u\frac{k_{s}V_{s}}{1+\omega _{1}V_{s}}+\frac{k_{r}V_{r}}{1+\alpha _{1}V_{r}} & 0 & u\frac{k_{s}T}{(1+\omega _{1}V_{s})^{2}} & -\delta & \frac{k_{r}T}{(1+\alpha _{1}V_{r})^{2}} \\ 0 & 0 & 0 & N_{r}\delta & -c \end{pmatrix}.\tag{7} $$
Firstly, for the stability of equilibrium \(E_{0}\), we have the following results.
Theorem 3
(a) If \((1-u)R_{s}<1\) and \(R_{r}<1\), then the infection-free equilibrium \(E_{0}\) is locally asymptotically stable.
(b) If \(R_{s}\leq 1\) and \(R_{r}\leq 1\), then \(E_{0}\) is globally asymptotically stable.
(c) If \((1-u)R_{s}>1\) or \(R_{r}>1\), then \(E_{0}\) is unstable.
At equilibrium \(E_{0}\), from (7) the characteristic equation of \(J(E_{0})\) is
$$ f(X)=(X+d) \bigl(X^{2}+(\delta +c)X+\delta c \bigl(1-(1-u)R_{s}\bigr)\bigr) \bigl(X^{2}+(\delta +c)X+ \delta c(1-R_{r})\bigr)=0.\tag{8} $$
One root of (8) is \(X_{1}=-d<0\). When \((1-u)R_{s}<1\) and \(R_{r}<1\), by the Routh–Hurwitz criterion, all roots of the equations
$$ X^{2}+(\delta +c)X+\delta c\bigl(1-(1-u)R_{s} \bigr)=0 \tag{9} $$
$$ X^{2}+(\delta +c)X+\delta c(1-R_{r})=0 \tag{10} $$
have negative real parts, respectively. This implies that \(E_{0}\) is locally asymptotically stable. When \((1-u)R_{s}>1\) or \(R_{r}>1\), we easily see that equation (9) or (10) has at least a root with positive real part. This implies that \(E_{0}\) is unstable.
For the global stability of \(E_{0}\), we define Lyapunov function \(L_{1}(t)\) as follows:
$$ L_{1}(t)=T_{0}\biggl(\frac{T}{T_{0}}-\ln \frac{T}{T_{0}}-1\biggr)+T_{s}+\frac{1}{N _{s}}V_{s}+T_{r}+ \frac{1}{N_{r}}V_{r}. $$
$$\begin{aligned} \frac{\mathrm {d}L_{1}(t)}{\mathrm {d}t} ={}&\biggl(1-\frac{T_{0}}{T}\biggr) \biggl(\lambda -dT- \frac{k _{s}V_{s}T}{1+\omega _{1}V_{s}}-\frac{k_{r}V_{r}T}{1+\alpha _{1}V_{r}}\biggr)+\biggl((1-u)\frac{k _{s}V_{s}T}{1+\omega _{1}V_{s}}-\delta T_{s}\biggr) \\ & {} +\biggl(u\frac{k_{s}V_{s}T}{1+\omega _{1}V_{s}}+\frac{k_{r}V_{r}T}{1+\alpha _{1}V_{r}}-\delta T_{r} \biggr)+\frac{1}{N_{r}}(N_{r}\delta T_{r}-cV_{r})+ \frac{1}{N _{s}}(N_{s}\delta T_{s}-cV_{s}) \\ ={}&dT_{0}\biggl(2-\frac{T}{T_{0}}-\frac{T_{0}}{T}\biggr)+ \frac{c(R_{s}-1)}{(1+ \omega _{1}V_{s})N_{s}}V_{s}-\frac{c\omega _{1}V_{s}^{2}}{(1+\omega _{1}V _{s})N_{s}} \\ & {} +\frac{c(R_{r}-1)}{(1+\alpha _{1}V_{r})N_{r}}V_{r}-\frac{c\alpha _{1}V _{r}^{2}}{(1+\alpha _{1}V_{r})N_{r}}. \end{aligned}$$
When \(R_{s}\leq 1\) and \(R_{r}\leq 1\), then \(\frac{\mathrm {d}L_{1}(t)}{\mathrm {d}t}\leq 0\) and the set \(M=\{(T,T_{s},V_{s},T _{r},V_{r}): \frac{\mathrm {d}L_{1}(t)}{\mathrm {d}t}=0\}\subset \{(T,T_{s},V _{s},T_{r},V_{r}): T=T_{0},T_{s}\geq 0,V_{s}\geq 0,T_{r}\geq 0,V_{r} \geq 0\}\).
For any solution trajectory \(\{(T(t),T_{s}(t),V_{s}(t),T_{r}(t),V_{r}(t)): t\geq 0\}\subset M\), we have \(T(t)\equiv T_{0}\). From the first equation of model (4), we obtain \(\frac{k_{s}V_{s}(t)T_{0}}{1+\omega _{1}V_{s}(t)}+\frac{k_{r}V_{r}(t)T _{0}}{1+\alpha _{1}V_{r}(t)}\equiv 0\), which implies \(V_{s}(t)=V_{r}(t) \equiv 0\). From the third and fifth equations of model (4), we also get \(N_{s}\delta T_{s}(t)-cV_{s}(t)=0\) and \(N_{r}\delta T_{r}(t)-cV_{r}(t)=0\), which further imply \(T_{s}(t)=T _{r}(t)\equiv 0\). Hence, \((T(t),T_{s}(t),V_{s}(t),T_{r}(t),V_{r}(t)) \equiv E_{0}\). From LaSalle's invariance principle [17], \(E_{0}\) is globally asymptotically stable. This completes the proof. □
Remark 1
In Theorem 3, we only obtained the global asymptotic stability of \(E_{0}\) under \(R_{s}\leq 1\) and \(R_{r}\leq 1\). Therefore, based on conclusion (a) of Theorem 3, an interesting open problem is whether we can establish the global asymptotic stability of \(E_{0}\) when \((1-u)R_{s}\leq 1\) and \(R_{r}\leq 1\).
Next, about the stability of equilibrium \(E_{r}\), we have the following results.
Theorem 4
(a) If \(R_{r}>\max \{1,(1-u)R_{s}+((1-u)R_{s}-1)\alpha _{1}\frac{\lambda }{c}N_{r}\}\), then equilibrium \(E_{r}\) is locally asymptotically stable.
(b) If \(R_{r}>1\) and \(R_{r}<(1-u)R_{s}+((1-u)R_{s}-1)\alpha _{1}\frac{\lambda }{c}N_{r}\), then \(E_{r}\) is unstable.
(c) If \(R_{r}>\max \{1,R_{s}+\alpha _{1}\frac{\lambda }{c}N_{r}(R_{s}-1)\}\), then \(E_{r}\) is globally asymptotically stable.
At equilibrium \(E_{r}\), from (7) the characteristic equation of \(J(E_{r})\) is
$$ f(X)=\bigl(X^{2}+a_{1}X+a_{0} \bigr) \bigl(X^{3}+b_{2}X^{2}+b_{1}X+b_{0} \bigr)=0, \tag{11} $$
$$ \begin{aligned} & a_{1}=\delta +c, \quad\quad a_{0}=\delta c\frac{k_{r}(R_{r}-(1-u)R_{s}-((1-u)R _{s}-1)\alpha _{1}\frac{\lambda }{c}N_{r})}{R_{r}(k_{r}+d\alpha _{1})}, \\ & b_{2}=d+c+\delta +\frac{dk_{r}(R_{r}-1)}{k_{r}+d\alpha _{1}}, \quad\quad b_{1}=d( \delta +c)+\frac{\delta c\alpha _{1}+k_{r}(\delta +c)}{k_{r}+d \alpha _{1}R_{r}}d(R_{r}-1), \\ & b_{0}=\frac{k_{r}+d\alpha _{1}}{k_{r}+d \alpha _{1}R_{r}}d\delta c(R_{r}-1). \end{aligned} $$
When \(R_{r}>\max \{1,(1-u)R_{s}+((1-u)R_{s}-1)\alpha _{1}\frac{\lambda }{c}N_{r}\}\), we have \(a_{0}>0\), \(a_{1}>0\), and \(b_{i}>0\) for \(i=0,1,2\). Moreover, we have
$$\begin{aligned} b_{1}b_{2}-b_{0}={}&\biggl(d(\delta +c)+ \frac{d\delta k_{r}(R_{r}-1)}{k_{r}+d \alpha _{1}R_{r}}\biggr) \biggl(d+c+\delta +\frac{dk_{r}(R_{r}-1)}{k_{r}+d\alpha _{1}R_{r}}\biggr) \\ & {} +\frac{dck_{r}(R_{r}-1)}{k_{r}+d\alpha _{1}R_{r}}\biggl(d+c+\frac{dk_{r}(R _{r}-1)}{k_{r}+d\alpha _{1}R_{r}}\biggr) \\ & {} +\frac{d\delta c\alpha _{1}(R_{r}-1)}{k_{r}+d\alpha _{1}R_{r}}\biggl(\delta +c+\frac{dk_{r}(R_{r}-1)}{k_{r}+d\alpha _{1}R_{r}}\biggr)>0. \end{aligned}$$
According to the Routh–Hurwitz criterion, all roots of equation (11) have negative real parts. Therefore, \(E_{r}\) is locally asymptotically stable. When \(R_{r}>1\) and \(R_{r}<(1-u)R_{s}+((1-u)R _{s}-1)\alpha _{1}\frac{\lambda }{c}N_{r}\), the equation \(X^{2}+a_{1}X+a _{0}=0\) has at least a positive real part root. This implies that \(E_{r}\) is unstable.
To obtain the global stability of \(E_{r}\), we define Lyapunov function \(L_{2}(t)\) as follows:
$$ \begin{aligned} L_{2}(t)={}&T_{1}\biggl( \frac{T}{T_{1}}-\ln \frac{T}{T_{1}}-1\biggr)+T_{s}+ \frac{1}{N _{s}}V_{s}+T_{r1}\biggl(\frac{T_{r}}{T_{r1}}-\ln \frac{T_{r}}{T_{r1}}-1\biggr) \\ & {} +\frac{1}{N _{r}}V_{r1}\biggl(\frac{V_{r}}{V_{r1}}-\ln \frac{V_{r}}{V_{r1}}-1\biggr). \end{aligned} $$
$$\begin{aligned} \frac{\mathrm {d}L_{2}(t)}{\mathrm {d}t} ={}&\biggl(1-\frac{T_{1}}{T}\biggr) \biggl(\lambda -dT- \frac{k _{s}V_{s}T}{1+\omega _{1}V_{s}}-\frac{k_{r}V_{r}T}{1+\alpha _{1}V_{r}}\biggr)+\biggl((1-u)\frac{k _{s}V_{s}T}{1+\omega _{1}V_{s}}-\delta T_{s}\biggr) \\ & {} +\frac{1}{N_{s}}(N_{s}\delta T_{s}-cV_{s})+ \biggl(1-\frac{T_{r1}}{T_{r}}\biggr) \biggl(u\frac{k _{s}V_{s}T}{1+\omega _{1}V_{s}}+\frac{k_{r}V_{r}T}{1+\alpha _{1}V_{r}}- \delta T_{r}\biggr) \\ & {} +\frac{1}{N_{r}}\biggl(1-\frac{V_{r1}}{V_{r}}\biggr) (N_{r} \delta T_{r}-cV_{r}) \\ ={}&dT_{1}\biggl(2-\frac{T}{T_{1}}-\frac{T_{1}}{T}\biggr)+ \frac{k_{r}V_{r1}T_{1}}{1+ \alpha _{1}V_{r1}}\biggl(4-\frac{T_{1}}{T}-\frac{T_{r}V_{r1}}{T_{r1}V_{r}} - \frac{T _{r1}V_{r}T}{T_{r}V_{r1}T_{1}}\frac{1+\alpha _{1}V_{r1}}{1+\alpha _{1}V _{r}} \\ & {} -\frac{1+\alpha _{1}V_{r}}{1+\alpha _{1}V_{r1}}\biggr) -\frac{k_{r}V_{r1}T _{1}}{1+\alpha _{1}V_{r1}}\frac{\alpha _{1}(V_{r}-V_{r1})^{2}}{(1+\alpha _{1}V_{r1})V_{r1}(1+\alpha _{1}V_{r})} \\ & {} -\frac{ck_{r}(R_{r}-R_{s}-\alpha _{1}\frac{\lambda }{c}N_{r}(R_{s}-1))}{R _{r}(k_{r}+d\alpha _{1})(1+\omega _{1}V_{s})N_{s}}V_{s}-u\frac{k_{s}V _{s}T}{1+\omega _{1}V_{s}}\frac{T_{r1}}{T_{r}}- \frac{c\omega _{1}}{(1+ \omega _{1}V_{s})N_{s}}V_{s}^{2}. \end{aligned}$$
Obviously, when \(R_{r}>\max \{1,R_{s}+\alpha _{1}\frac{\lambda }{c}N _{r}(R_{s}-1)\}\), we have \(\frac{\mathrm {d}L_{2}(t)}{\mathrm {d}t}\leq 0\) and the set \(M=\{(T,T_{s},V_{s},T_{r},V_{r}): \frac{\mathrm {d}L_{2}(t)}{\mathrm {d}t}=0 \}\subseteq \{(T,T_{s},V_{s},T_{r},V_{r}): T=T_{1},T_{s}\geq 0,V_{s} \geq 0,T_{r}=T_{r1},V_{r}=V_{r1}\}\). From \(T(t)\equiv T_{1}\), \(T_{r}(t)\equiv T_{r1}\) and \(V_{r}(t)\equiv V_{r1}\), we have \(\lambda -dT_{1}-\frac{k_{s}V_{s}(t)T_{1}}{1+\omega _{1}V_{s}(t)}-\frac{k _{r}V_{r1}T_{1}}{1+\alpha _{1}V_{r1}}\equiv 0\), which implies \(V_{s}(t)\equiv 0\). From the third equation of model (4), we get \(N_{s}\delta T_{s}(t)-cV_{s}(t)\equiv 0\), which implies \(T_{s}(t) \equiv 0\). Hence, \((T(t),T_{s}(t),V_{s}(t),T_{r}(t),V_{r}(t))\equiv E _{r}\). Thus, LaSalle's invariance principle implies that \(E_{r}\) is globally asymptotically stable. This completes the proof. □
Remark 2
In Theorem 4, we only obtained the global asymptotic stability of \(E_{r}\) when \(R_{r}>\max \{1,R_{s}+\alpha _{1}\frac{\lambda }{c}N_{r}(R_{s}-1)\}\). Therefore, combining conclusion (a) of Theorem 4, an interesting open problem is whether we can establish the global asymptotic stability of \(E_{r}\) when \(R_{r}>\max \{1,(1-u)R_{s}+((1-u)R_{s}-1)\alpha _{1}\frac{\lambda }{c}N_{r}\}\).
Remark 3
Regretfully, we do not establish here the corresponding criteria on the local and global stability of the positive equilibrium \(E_{c}\) of model (2). The reasons are that the analysis of the characteristic equation of \(J(E_{c})\) is very complex, and the construction of a suitable Lyapunov function is also very difficult. However, in the next section we establish the uniform persistence of model (2) when the positive equilibrium \(E_{c}\) exists.
Uniform persistence
Theorem 5
If \((1-u)R_{s}>1\geq R_{r}\) or \((1-u)R_{s}+((1-u)R_{s}-1)\alpha _{1}\frac{\lambda }{c}N_{r}>R_{r}>1\), then model (2) is uniformly persistent. That is, there exists a positive constant δ such that, for any positive solution \((T(t),T_{s}(t),V_{s}(t),T_{r}(t),V_{r}(t))\) of model (2),
$$\begin{aligned}& \liminf_{t\rightarrow \infty }T(t)\geq \delta , \quad\quad \liminf _{t\rightarrow \infty }T_{s}(t)\geq \delta , \quad\quad \liminf _{t\rightarrow \infty }V_{s}(t)\geq \delta , \\& \liminf_{t\rightarrow \infty }T_{r}(t)\geq \delta , \quad\quad \liminf_{t\rightarrow \infty }V_{r}(t)\geq \delta . \end{aligned}$$
For any \(x_{0}=(T_{0},T_{s0},V_{s0},T_{r0},V_{r0})\in R^{5}_{+}\), let \(u(t,x_{0})=(T(t,x_{0}), T_{s}(t,x_{0}), V_{s}(t,x_{0}), T_{r}(t,x_{0}), V_{r}(t,x_{0}))\) be the solution of model (2) with the initial condition \(u(0,x_{0})=x_{0}\). From the proof of Theorem 1, we have \(\limsup_{t\rightarrow \infty }u(t,x_{0})\leq \frac{\lambda }{n}\), where \(n=\min \{d,\frac{\delta }{2},c\}\). Hence, for any constant \(\epsilon >0\), there is \(T_{0}>0\) such that \(u(t,x_{0})<\frac{\lambda }{n}+\epsilon \) for all \(t\geq T_{0}\). Then, from the first equation of model (2), we have
$$ \frac{\mathrm {d}T(t,x_{0})}{\mathrm {d}t}\geq \lambda -dT-k_{s}V_{s}T-k_{r}V _{r}T\geq \lambda -\biggl(d+(k_{s}+k_{r}) \biggl(\frac{\lambda }{n}+\epsilon \biggr)\biggr)T(t,x _{0}). $$
From the comparison theorem and the arbitrariness of ϵ, we have
$$ \liminf_{t\rightarrow \infty }T(t,x_{0})\geq \frac{\lambda }{d+(k_{s}+k _{r})\frac{\lambda }{n}}. $$
This shows that \(T(t,x_{0})\) is uniformly persistent. Define the set
$$ X=\bigl\{ x=(T,T_{s},V_{s},T_{r},V_{r}) \in R^{5}_{+}: T\geq 0, T_{s}>0, V _{s}>0, T_{r}>0, V_{r}>0\bigr\} . $$
The boundary of X is
$$\begin{aligned} \partial X=\bigl\{ (T,T_{s},V_{s},T_{r},V_{r}) \in R^{5}_{+}: T\geq 0, T_{s}=0 \text{ or } V_{s}=0 \text{ or } T_{r}=0 \text{ or }V_{r}=0 \bigr\} . \end{aligned}$$
Denote
$$ M_{\partial }=\bigl\{ x_{0}\in R^{5}_{+}: u(t,x_{0})\in \partial X, \forall t\geq 0\bigr\} . $$
Let \(\omega (x_{0})\) be the ω-limit set of solution \(u(t,x_{0})\). Then we consider the following two cases.
Case (1): \((1-u)R_{s}>1\geq R_{r}\). From Theorem 2, model (2) has only two equilibria \(E_{0}\) and \(E_{c}\). Let \(M_{0}=\{E_{0}\}\). It is clear that \(M_{0}\subset \bigcup_{x_{0}\in M_{\partial }}\omega (x _{0})\). For any \(x_{0}\in M_{\partial }\), let \(x_{0}=(T_{0},T_{s0},V _{s0},T_{r0},V_{r0})\). Due to \(u(t,x_{0})\in \partial X\) for all \(t\geq 0\), we have \(T_{s}(t,x_{0})\equiv 0\) or \(V_{s}(t,x_{0})\equiv 0\) or \(T_{r}(t,x_{0})\equiv 0\) or \(V_{r}(t,x_{0})\equiv 0\). If \(T_{s}(t,x_{0})\equiv 0\), then from the second equation of model (2), we have \(V_{s}(t,x_{0})\equiv 0\). Thus, model (2) degenerates into the following form:
$$ \textstyle\begin{cases} \frac{\mathrm {d}T(t,x_{0})}{\mathrm {d}t}= \lambda -dT(t,x_{0})-\frac{k_{r}V_{r}(t,x_{0})T(t,x_{0})}{1+\alpha _{1}V_{r}(t,x_{0})}, \\ \frac{\mathrm {d}T_{r}(t,x_{0})}{\mathrm {d}t}= \frac{k_{r}V_{r}(t,x_{0})T(t,x_{0})}{1+\alpha _{1}V_{r}(t,x_{0})}-\delta T_{r}(t,x_{0}), \\ \frac{\mathrm {d}V_{r}(t,x_{0})}{\mathrm {d}t}= N_{r}\delta T_{r}(t,x_{0})-cV_{r}(t,x_{0}). \end{cases}\tag{12} $$
If \(T_{r0}+V_{r0}=0\), then from system (12) we can obtain \(T_{r}(t,x _{0})\equiv V_{r}(t,x_{0})\equiv 0\). Thus, model (2) can further degenerate into
$$ \frac{\mathrm {d}T(t,x_{0})}{\mathrm {d}t}=\lambda -dT(t,x_{0}). $$
It follows that \(\lim_{t\rightarrow \infty }T(t,x_{0})=\frac{\lambda }{d}=T_{0}\). This shows that \(\omega (x_{0})=E_{0}\subset M_{0}\).
If \(T_{r0}+V_{r0}>0\), without loss of generality, we assume \(T_{r0}>0\) and \(V_{r0}\geq 0\). From the second equation of system (12), we can obtain \(T_{r}(t,x_{0})\geq T_{r0}e^{-\delta t}>0\) for all \(t\geq 0\), and then, from the third equation of (12), we further obtain \(V_{r}(t,x_{0})>V_{r0}e^{-ct}\geq 0\) for all \(t>0\). Choose a Lyapunov function as follows:
$$ U_{0}(t)=T_{0}\biggl(\frac{T}{T_{0}}-\ln \frac{T}{T_{0}}-1\biggr)+T_{r}+\frac{1}{N _{r}}V_{r}. $$
We obtain
$$ \frac{\mathrm {d}U_{0}(t)}{\mathrm {d}t}=dT_{0}\biggl(2-\frac{T_{0}}{T}- \frac{T}{T _{0}}\biggr)+\frac{c}{N_{r}}(R_{r}-1)V_{r}- \frac{c\alpha _{1}V_{r}^{2}}{(1+ \alpha _{1}V_{r})N_{r}}\leq 0 $$
and \(\{(T,T_{r},V_{r}): \frac{\mathrm {d}U_{0}(t)}{\mathrm {d}t}=0\}\subset \{(T,T _{r},V_{r}): T=T_{0}\}\). If \(T(t,x_{0})\equiv T_{0}\), then from the first equation of system (12), we have \(V_{r}(t,x_{0})\equiv 0\); further, from the third equation of system (12), we have \(T_{r}(t,x _{0})\equiv 0\). Thus, LaSalle's invariance principle [17] implies that \((T(t,x_{0}),T_{r}(t,x_{0}),V_{r}(t,x_{0}))\rightarrow (T_{0},0,0)\) when \(t\rightarrow \infty \). This shows that \(\omega (x_{0})=E_{0}\subset M_{0}\).
If \(V_{s}(t,x_{0})\equiv 0\), from the third equation of model (2), we have \(T_{s}(t,x_{0})\equiv 0\). Similar to the above argument, we also get \(\omega (x_{0})=E_{0}\subset M_{0}\).
If \(T_{r}(t,x_{0})\equiv 0\), from the fourth equation of model (2), we have \(V_{s}(t,x_{0})\equiv 0\) and \(V_{r}(t,x_{0})\equiv 0\). Then, from the third equation of model (2), we have \(T_{s}(t,x_{0})\equiv 0\). Thus, model (2) degenerates into
$$ \frac{\mathrm {d}T(t,x_{0})}{\mathrm {d}t}=\lambda -dT(t,x_{0}). $$
It follows that \(\lim_{t\rightarrow \infty }T(t,x_{0})=T_{0}\). This shows that \(\omega (x_{0})=E_{0}\subset M_{0}\).
If \(V_{r}(t,x_{0})\equiv 0\), from the fifth equation of model (2), we get \(T_{r}(t,x_{0})\equiv 0\). Similar to the above argument, we also get \(\omega (x_{0})=E_{0}\subset M_{0}\).
Finally, we have \(M_{0}=\bigcup_{x_{0}\in M_{\partial }}\omega (x_{0})\). Furthermore, it is clear that \(M_{0}\) is isolated invariant and non-cycle in ∂X.
Now, we prove that \(W^{s}(E_{0})\cap X=\emptyset \), where \(W^{s}(E _{0})\) is the stable set of \(E_{0}\). Suppose that there is an \(x_{0}\in X\) such that \(\lim_{t\rightarrow \infty }u(t,x_{0})=E_{0}\), then we have \(\lim_{t\rightarrow \infty }T(t,x_{0})=T_{0}\). Hence, for any constant \(\epsilon >0\), there is \(T^{*}>0\) such that \(T(t,x_{0}) \geq T_{0}-\epsilon \) and \(V_{s}(t,x_{0})<\epsilon \) for any \(t\geq T^{*}\). Define the function
$$ U_{1}(t)=T_{s}(t,x_{0})+\frac{1}{N_{s}}V_{s}(t,x_{0}). $$
We have \(\lim_{t\rightarrow \infty }U_{1}(t,x_{0})=0\). When \(t\geq T^{*}\), we have
$$\begin{aligned} \frac{\mathrm {d}U_{1}(t)}{\mathrm {d}t}={}&(1-u)\frac{k_{s}V_{s}(t,x_{0})T(t,x _{0})}{1+\omega _{1}V_{s}(t,x_{0})}-\delta T_{s}(t,x_{0})+ \frac{1}{N _{s}}\bigl(N_{s}\delta T_{s}(t,x_{0})-cV_{s}(t,x_{0}) \bigr) \\ \geq{}&\biggl((1-u)\frac{k_{s}(T_{0}-\epsilon )}{1+\omega _{1}\epsilon }-\frac{c}{N _{s}}\biggr)V_{s}(t,x_{0}). \end{aligned}$$
Due to \((1-u)R_{s}>1\), we can choose a sufficiently small \(\epsilon >0\) such that \((1-u)\frac{k_{s}(T_{0}-\epsilon )}{1+\omega _{1}\epsilon }-\frac{c}{N_{s}}>0\). Thus, \(U_{1}(t)\) is increasing for \(t\geq T^{*}\). Hence, \(U_{1}(t)\) does not tend to zero as \(t\to \infty \), which leads to a contradiction. This shows that \(W^{s}(E_{0})\cap X=\emptyset \). According to the theory of persistence in dynamical systems (see [18]), there is a constant \(\delta >0\) such that, for any \(x_{0} \in X\), one has
$$\begin{aligned}& \liminf_{t\rightarrow \infty }T_{s}(t,x_{0})\geq \delta , \quad\quad \liminf_{t\rightarrow \infty }V_{s}(t,x_{0}) \geq \delta , \quad\quad \liminf_{t\rightarrow \infty }T_{r}(t,x_{0}) \geq \delta , \\& \liminf_{t\rightarrow \infty }V_{r}(t,x_{0}) \geq \delta . \end{aligned}$$
This shows that model (2) is uniformly persistent.
Case (2): \((1-u)R_{s}+((1-u)R_{s}-1)\alpha _{1}\frac{\lambda }{c}N_{r}>R_{r}>1\). From Theorem 2, model (2) has three equilibria \(E_{0}\), \(E_{r}\), and \(E_{c}\). Denote \(M_{0}=\{E_{0},E_{r}\}\). It is clear that \(M_{0}\subset \bigcup_{x_{0}\in M_{\partial }}\omega (x_{0})\). For any \(x_{0}\in M_{\partial }\), let \(x_{0}=(T_{0},T_{s0},V_{s0},T _{r0},V_{r0})\). Due to \(u(t,x_{0})\in \partial X\) for all \(t\geq 0\), we have \(T_{s}(t,x_{0})\equiv 0\) or \(V_{s}(t,x_{0})\equiv 0\) or \(T_{r}(t,x_{0})\equiv 0\) or \(V_{r}(t,x_{0})\equiv 0\). If \(T_{s}(t,x _{0})\equiv 0\), then, similar to the above argument, model (2) degenerates into system (12).
If \(T_{r0}+V_{r0}=0\), from a similar argument as in case (1), we can obtain \(\omega (x_{0})=E_{0}\subset M_{0}\).
If \(T_{r0}+V_{r0}>0\), then we also can obtain \(T_{r}(t,x_{0})>0\) and \(V_{r}(t,x_{0})>0\) for all \(t>0\). Choose the Lyapunov function
$$ U_{2}(t)=T_{1}\biggl(\frac{T}{T_{1}}-\ln \frac{T}{T_{1}}-1\biggr)+T_{r1}\biggl(\frac{T _{r}}{T_{r1}}-\ln \frac{T_{r}}{T_{r1}}-1\biggr)+\frac{1}{N_{r}}V_{r1}\biggl( \frac{V _{r}}{V_{r1}}-\ln \frac{V_{r}}{V_{r1}}-1\biggr). $$
Then we have
$$\begin{aligned} \frac{\mathrm {d}U_{2}(t)}{\mathrm {d}t}={}&dT_{1}\biggl(2-\frac{T_{1}}{T}- \frac{T}{T _{1}}\biggr)+\frac{k_{r}V_{r1}T_{1}}{1+\alpha _{1}V_{r1}}\biggl(4-\frac{T_{1}}{T} - \frac{T _{r}V_{r1}}{T_{r1}V_{r}}-\frac{T_{r1}V_{r}T}{T_{r}V_{r1}T_{1}}\frac{1+ \alpha _{1}V_{r1}}{1+\alpha _{1}V_{r}} \\ & {} -\frac{1+\alpha _{1}V_{r}}{1+\alpha _{1}V_{r1}}\biggr) -\frac{k_{r}V_{r1}T _{1}}{1+\alpha _{1}V_{r1}}\frac{\alpha _{1}(V_{r}-V_{r1})^{2}}{(1+\alpha _{1}V_{r1})V_{r1}(1+\alpha _{1}V_{r})}\leq 0 \end{aligned}$$
and the set \(\{(T,T_{r},V_{r}): \frac{\mathrm {d}U_{2}(t)}{\mathrm {d}t}=0\}=\{(T _{1},T_{r1},V_{r1})\}\). Hence, LaSalle's invariance principle [17] implies that \((T(t,x_{0}),T_{r}(t,x_{0}),V_{r}(t,x_{0}))\rightarrow (T _{1},T_{r1},V_{r1})\) as \(t\rightarrow \infty \). This shows that \(\omega (x_{0})=E_{r}\subset M_{0}\).
If \(V_{s}(t,x_{0})\equiv 0\) or \(T_{r}(t,x_{0})\equiv 0\) or \(V_{r}(t,x _{0})\equiv 0\), then, following a similar argument as in case (1), we can also obtain \(\omega (x_{0})=E_{0}\) or \(\omega (x_{0})=E_{r}\), and hence \(\omega (x_{0})\subset M_{0}\).
Finally, we have \(M_{0}=\bigcup_{x_{0}\in M_{\partial }}\omega (x_{0})\). Furthermore, it is clear that \(E_{0}\) and \(E_{r}\) are isolated invariant and \(M_{0}\) is non-cycle in ∂X.
Now, we prove that \(W^{s}(E_{0})\cap X=\emptyset \) and \(W^{s}(E_{r})\cap X=\emptyset \). Similar to the above argument in case (1), we can get \(W^{s}(E_{0})\cap X=\emptyset \). Suppose that there is \(x_{0}\in X\) such that \(\lim_{t\rightarrow \infty }u(t,x_{0})=E_{r}\), then we have \(\lim_{t\rightarrow \infty }T(t,x_{0})=T_{1}\). Hence, for any constant \(\epsilon >0\), there is \(T^{*}>0\) such that \(T(t,x_{0})\geq T_{1}-\epsilon \) and \(V_{s}(t,x_{0})<\epsilon \) for any \(t\geq T^{*}\). Define the function
$$ U_{3}(t)=T_{s}(t,x_{0})+\frac{1}{N_{s}}V_{s}(t,x_{0}). $$
When \(t\geq T^{*}\), arguing as for \(U_{1}(t)\), we have
$$ \frac{\mathrm {d}U_{3}(t)}{\mathrm {d}t}\geq \biggl((1-u)\frac{k_{s}(T_{1}-\epsilon )}{1+\omega _{1}\epsilon }-\frac{c}{N_{s}}\biggr)V_{s}(t,x_{0}). $$
Due to \((1-u)R_{s}+((1-u)R_{s}-1)\alpha _{1}\frac{\lambda }{c}N_{r}>R_{r}>1\), we can choose a sufficiently small \(\epsilon >0\) such that \((1-u)\frac{k_{s}(T_{1}-\epsilon )}{1+\omega _{1}\epsilon }-\frac{c}{N_{s}}>0\). Then \(U_{3}(t)\) is increasing for \(t\geq T^{*}\), so \(U_{3}(t)\) does not tend to zero, which leads to a contradiction. Hence, \(W^{s}(E_{r})\cap X=\emptyset \). According to the theory of persistence in dynamical systems (see [18]), there is a constant \(\delta >0\) such that, for any \(x_{0}\in X\), one has
$$\begin{aligned}& \liminf_{t\rightarrow \infty }T_{s}(t,x_{0})\geq \delta , \quad\quad \liminf_{t\rightarrow \infty }V_{s}(t,x_{0}) \geq \delta , \quad\quad \liminf_{t\rightarrow \infty }T_{r}(t,x_{0}) \geq \delta , \\& \liminf_{t\rightarrow \infty }V_{r}(t,x_{0}) \geq \delta . \end{aligned}$$
This shows that model (2) is also uniformly persistent. This completes the proof. □
An interesting open problem is whether the positive equilibrium \(E_{c}\) is also globally asymptotically stable when the conditions in Theorem 5 are satisfied.
In this section, we provide numerical examples to illustrate the global asymptotic stability of the equilibria of model (2); Examples 1 and 2 further support the conjectures in Remarks 1 and 2, respectively.
In model (2), we take the parameters \(\lambda =10^{5}\), \(d=0.1\), \(k_{s}=1.0\times 10^{-8}\), \(k_{r}=1.0 \times 10^{-8}\), \(u=0.6\), \(\delta =1\), \(N_{s}=2000\), \(N_{r}=900\), \(c=11\), \(\omega _{1}=10^{-5}\), and \(\alpha _{1}=10^{-4}\). By calculating, we have \(R_{s}\approx 1.8182>1\), \((1-u)R_{s}\approx 0.7273<1\), and \(R_{r}\approx 0.8182<1\). Furthermore, we also have the infection-free equilibrium \(E_{0}=(10^{6}, 0, 0, 0, 0)\). We give three different groups of initial values in Table 2.
Table 2 Initial values of model (2)
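The threshold values reported above can be reproduced with the following minimal sketch; it assumes the standard expressions \(R_{s}=\lambda k_{s}N_{s}/(cd)\) and \(R_{r}=\lambda k_{r}N_{r}/(cd)\) (defined earlier in the paper and not restated in this section), which recover the values reported above.

```python
# Minimal sketch (assumed formulas): threshold quantities for the parameters above.
lam, d, c = 1e5, 0.1, 11          # model parameters of this example (see text)
k_s, k_r = 1.0e-8, 1.0e-8
N_s, N_r = 2000, 900
u = 0.6

R_s = lam * k_s * N_s / (c * d)   # ~1.8182 > 1
R_r = lam * k_r * N_r / (c * d)   # ~0.8182 < 1
print(R_s, (1 - u) * R_s, R_r)    # 1.8182..., 0.7273..., 0.8182...
```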
The numerical simulations given in Fig. 2 suggest that the equilibrium \(E_{0}\) is globally asymptotically stable, which supports the conjecture stated in the open problem of Remark 1.
(a) Dynamical behavior of uninfected target cells; (b) dynamical behavior of cells infected by the wild-type virus; (c) dynamical behavior of the wild-type virus; (d) dynamical behavior of cells infected by the drug-resistant virus; (e) dynamical behavior of the drug-resistant virus
In model (2), we take the parameters \(\lambda =10^{5}\), \(d=0.005\), \(k_{s}=1.2\times 10^{-9}\), \(k_{r}=1.0 \times 10^{-8}\), \(u=0.6\), \(\delta =1\), \(N_{s}=2000\), \(N_{r}=250\), \(c=10\), \(\omega _{1}=10^{-3}\), and \(\alpha _{1}=10^{-7}\). By calculating, we have \(R_{r}=5\), \((1-u)R_{s}+((1-u)R_{s}-1)\alpha _{1}\frac{\lambda }{c}N_{r}=2.15\) and \(R_{s}+(R_{s}-1)\alpha _{1}\frac{\lambda }{c}N_{r}=5.75\). Hence, \(\max \{1,(1-u)R_{s}+((1-u)R_{s}-1)\alpha _{1}\frac{\lambda }{c}N_{r} \}< R_{r}< R_{s}+(R_{s}-1)\alpha _{1}\frac{\lambda }{c}N_{r}\). Furthermore, we also have the boundary equilibrium \(E_{r}=(4.76\times 10^{6}, 0, 0, 7.62\times 10^{4}, 1.90\times 10^{6})\). We give three different groups of initial values in Table 3.
The numerical simulations given in Fig. 3 suggest that the equilibrium \(E_{r}\) is globally asymptotically stable, which supports the conjecture stated in the open problem of Remark 2.
In model (2), we take the parameters \(\lambda =10^{5}\), \(d=0.005\), \(k_{s}=1.2\times 10^{-9}\), \(k_{r}=4.0 \times 10^{-10}\), \(u=3\times 10^{-5}\), \(\delta =1\), \(N_{s}=2000\), \(N_{r}=1000\), \(c=10\), \(\omega _{1}=10^{-8}\), and \(\alpha _{1}=10^{-2}\). By calculating, we have \((1-u)R_{s}\approx 4.80\), \(R_{r}=0.8\), and \((1-u)R_{s}>1\geq R_{r}\), and model (2) has a coexistence equilibrium \(E_{c}\approx (4.80\times 10^{6}, 1.6\times 10^{6}, 1.520\times 10^{7}, 2.416, 241.577)\). We give three different groups of initial values in Table 4.
The numerical simulations given in Fig. 4 suggest that the equilibrium \(E_{c}\) is globally asymptotically stable, which supports the conjecture stated in the open problem of Remark 4.
In model (2), we take the parameters \(\lambda =10^{5}\), \(d=0.005\), \(k_{s}=1.2\times 10^{-8}\), \(k_{r}=1.0 \times 10^{-8}\), \(u=3\times 10^{-5}\), \(\delta =1\), \(N_{s}=2000\), \(N_{r}=1000\), \(c=10\), \(\omega _{1}=10^{-8}\), and \(\alpha _{1}=10^{-8}\). By calculating, we have \((1-u)R_{s}+((1-u)R_{s}-1)\alpha _{1}\frac{ \lambda }{c}N_{r}\approx 52.798\), \(R_{r}=20\), and \((1-u)R_{s}+((1-u)R _{s}-1)\alpha _{1}\frac{\lambda }{c}N_{r}>R_{r}>1\), and model (2) has a coexistence equilibrium \(E_{c}\approx (4.979\times 10^{5}, 9.750 \times 10^{4}, 1.950\times 10^{7}, 5.826, 582.635)\). We give three different groups of initial values in Table 5.
In this paper, we study the global dynamics of a two-strain HIV infection model with saturated incidence which includes wild-type (i.e. drug-sensitive) and drug-resistant strains. The wild-type strain can mutate and become drug-resistant during the process of reverse transcription. The main results are presented in Theorems 1–5. Concretely, the nonnegativity and boundedness of solutions are obtained in Theorem 1; the existence of the wild-type strain-free equilibrium and the coexistence equilibrium is obtained in Theorem 2; Theorems 3 and 4 give the sufficient and necessary threshold conditions for the local and global asymptotic stability of the infection-free and wild-type strain-free equilibria; and the uniform persistence of the HIV infection model is established in Theorem 5.
There are several problems awaiting further investigation. Firstly, Remarks 1 and 2 raise an interesting open problem: whether the global asymptotic stability of the equilibria can be established under appropriate conditions. It is also meaningful to study more complex models (see [19]), for example, a two-strain infection model with delayed saturated incidence (see [20]) or with a general nonlinear incidence (see [15, 21]). Furthermore, it is more realistic to consider the dynamical behavior of a virus infection model with spatial diffusion and age dependence (see [22–25]). We leave these problems for future investigation.
Rong, L., Gilchrist, M.A., Feng, Z.: Modeling within-host HIV-1 dynamics and the evolution of drug resistance: trade-offs between viral enzyme function and drug susceptibility. J. Theor. Biol. 247, 804–818 (2007)
Feng, Z., Velasco-Hernandez, J., Tapia-Santons, B.: A mathematical model for coupling within-host and between-host dynamics in an environmentally-driven infectious disease. Math. Biosci. 241, 49–55 (2013)
Feng, Z., Velasco-Hernandez, J., Tapia-Santons, B.: A model for coupling within-host and between-host dynamics in an infectious disease. Nonlinear Dyn. 68, 401–411 (2012)
Bonhoeffer, S., Nowak, M.A.: Pre-existence and emergence of drug resistance in HIV-1 infection. Proc. R. Soc. Lond. B 264, 631–637 (1997)
Huang, G., Ma, W., Takeuchi, Y.: Global analysis for delay virus dynamics model with Beddington–DeAngelis functional response. Appl. Math. Lett. 24, 1199–1203 (2011)
Cen, X., Feng, Z., Zhao, Y.: Coupled within-host and between-host dynamics and evolution of virulence. Math. Biosci. 270, 204–212 (2015)
Perelson, A.S., Neumann, A.U., Markowitz, M., et al.: HIV-1 dynamics in vivo: virion clearance rate, infected cell life-span, and viral generation time. Science 271, 1582–1586 (1996)
Ribeiro, R.M., Bonhoeffer, S.: Production of resistant HIV mutants during antiretroviral therapy. Proc. Natl. Acad. Sci. 97, 7681–7686 (2000)
Rong, L., Feng, Z., Perelson, A.S.: Emergence of HIV-1 drug resistance during antiretroviral treatment. Bull. Math. Biol. 69, 2027–2060 (2007)
Huang, G., Ma, W., Takeuchi, Y.: Global properties for virus dynamics model with Beddington–DeAngelis functional response. Appl. Math. Lett. 22, 1690–1693 (2009)
Beddington, J.R.: Mutual interference between parasites or predators and its effect on searching efficiency. J. Anim. Ecol. 44, 331–340 (1975)
DeAngelis, D.L., Goldstein, R.A., O'Neill, R.V.: A model for trophic interaction. Ecology 56, 881–892 (1975)
Bonhoeffer, S., May, R.M., Shaw, G.M.: Virus dynamics and drug therapy. Proc. Natl. Acad. Sci. 94, 6971–6976 (1997)
Shiri, T., Garira, W., Musekwa, S.D.: A two-strain HIV-1 mathematical model to assess the effects of chemotherapy on disease parameters. Math. Biosci. Eng. 2, 811–832 (2005)
Miao, H., Teng, Z., Li, Z.: Global stability of delayed viral infection models with nonlinear antibody and CTL immune responses and general incidence rate. Comput. Math. Methods Med. (2016). https://doi.org/10.1155/2016/3903726
Van den Driessche, P., Watmough, J.: Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission. Math. Biosci. 180, 29–48 (2002)
Kuang, Y.: Delay Differential Equations with Application in Population Dynamics. Academic Press, Boston (1993)
Butler, G., Freedman, H.I., Waltman, P.: Uniformly persistent systems. Proc. Am. Math. Soc. 96, 425–430 (1986)
Ngina, P., Mbogo, R.W., Luboobi, L.S.: HIV drug resistance: insights from mathematical modelling. Appl. Math. Model. 75, 141–161 (2019)
Kaddar, A.: On the dynamics of a delayed SIR epidemic model with a modified saturated incidence rate. Electron. J. Differ. Equ. 2009, 133 (2009)
Elaiw, A.M., Raezah, A.A., Hattaf, K.: Stability of HIV-1 infection with saturated virus-target and infected-target incidences and CTL immune response. Int. J. Biomath. (2017). https://doi.org/10.1142/S179352451750070X
Duan, X., Yin, J., Li, X.: Competitive exclusion in a multi-strain virus model with spatial diffusion and age of infection. J. Math. Anal. Appl. 459, 717–742 (2018)
Yang, Y., Ruan, S., Xiao, D.: Global stability of an age-structured virus dynamics model with Beddington–DeAngelis infection function. Math. Biosci. Eng. 12, 859–877 (2015)
Shen, M., Xiao, Y., Rong, L.: Global stability of an infection-age structured HIV-1 model linking within-host and between-host dynamics. Math. Biosci. 263, 37–50 (2015)
Wang, J., Zhang, R., Kuniya, T.: Mathematical analysis for an age-structured HIV infection model with saturation infection rate. Electron. J. Differ. Equ. 2015, 33 (2015)
We would like to thank the anonymous referees for their helpful comments and the editor for his constructive suggestions, which greatly improved the presentation of this paper.
Data sharing is not applicable to this article as no data sets were generated or analysed during the current study.
This research is supported by the Natural Science Foundation of China (Grant No. 11771373, 11861065) and the Natural Science Foundation of Xinjiang Province of China (Grant No. 2016D03022).
College of Mathematics and Systems Science, Xinjiang University, Urumqi, People's Republic of China
Wei Chen, Nafeisha Tuerxun & Zhidong Teng
All authors contributed equally to this work. All authors read and approved the final manuscript.
Correspondence to Zhidong Teng.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Chen, W., Tuerxun, N. & Teng, Z. The global dynamics in a wild-type and drug-resistant HIV infection model with saturated incidence. Adv Differ Equ 2020, 25 (2020) doi:10.1186/s13662-020-2497-2
Received: 13 September 2019
HIV virus infection model
Wild-type and drug-resistant virus
Saturated incidence
Basic reproduction number
Stability and persistence
Visual rhythms for qualitative evaluation of video stabilization
Marcos Roberto e Souza1 na1 &
Helio Pedrini ORCID: orcid.org/0000-0003-0125-630X1 na1
Recent technological advances have enabled the development of compact and portable cameras for the generation of large volumes of video content. Several applications have benefited from such significant growth of multimedia data, such as telemedicine, surveillance and security, entertainment, teaching, and robotics. However, videos captured by amateurs are subject to unwanted motion or vibration while handling the camera. Video stabilization techniques aim to detect and remove glitches or instabilities caused during the acquisition process in order to enhance visual quality. In this work, we introduce and analyze a novel representation based on visual rhythms for the qualitative evaluation of video stabilization methods. Experiments conducted on different video sequences demonstrate the effectiveness of the visual representation as a qualitative measure for evaluating video stability. In addition, we present a proposal to calculate an objective metric extracted from the visual rhythms.
The popularization of mobile devices in recent years has contributed to making video acquisition possible for a variety of applications. Handling such devices generally causes unwanted motion during the video generation, which inevitably affects the quality of the final video.
Video stabilization [1–15] aims to remove undesired motion in camera handling during video acquisition. Efficient methods for stabilization of videos are important to improve their quality according to human perception or to facilitate certain tasks, such as multimedia indexing and retrieval [16–18].
Techniques and metrics for quality evaluation must be well established so that video stabilization approaches can be developed, refined, and compared in a consistent manner. Therefore, ineffective evaluation measures may lead to the development of inadequate techniques, compromising the advance of state-of-the-art video stabilization approaches.
Most of the quantitative techniques for the evaluation of video stabilization available in the literature are inaccurate and, in some cases, incompatible with human visual perception. Moreover, the techniques used to evaluate and report the results subjectively are little explored. In this work, we introduce and evaluate the use of visual rhythms as a novel mechanism for the qualitative evaluation of video stabilization methods.
Experimental results demonstrate that visual rhythms are effective for evaluating the stability of camera motion by differentiating stable from unstable videos. Furthermore, they allow one to determine how and when a given motion occurs. More complex types of motion, such as zoom and quick shifts, can also be identified.
This paper is organized as follows. Relevant concepts and related work are briefly described in Section 2. The use of visual rhythms for subjective evaluation of video stabilization is presented in Section 3. Experimental results are reported and discussed in Section 4. Final remarks and directions for future work are outlined in Section 5.
Different categories of stabilization systems have been proposed to improve the quality of videos. The three most common types are mechanical stabilization, optical stabilization, and digital stabilization.
Mechanical video stabilization typically uses sensors to detect camera shifts and compensate for undesired movements. A common way is to use gyroscopes to detect motion and send signals to motors connected to small wheels, such that the camera can move in the opposite direction of motion.
Optical video stabilization [19] is widely used in photographic cameras and consists of a mechanism to compensate for the angular and translational movements of the cameras, stabilizing the image before it is recorded on the sensor. A form of optical stabilization introduces a gyroscope to measure velocity differences at distinct instants to distinguish between normal and undesired motion.
Digital video stabilization is implemented in software without the use of special devices. Digital video stabilization methods are commonly categorized into two-dimensional (2D) and three-dimensional (3D) approaches. In the first category, techniques estimate camera motion from two consecutive frames and apply 2D transformations to stabilize the video. In the second category, techniques reconstruct the camera trajectories from 3D transformations [20, 21], such as scaling, translation, and rotation.
In the context of image and video processing, the evaluation can be classified as (i) objective, when obtained through functions applied between two images [22] or video frames, and (ii) subjective, when the analysis is performed by human observers. In both cases, a desired goal is to assess stabilization based on criteria in agreement with the perception of the human visual system.
Objective evaluation
Criteria for measuring the amount and nature of the camera displacement have been proposed to evaluate the quality of video stabilization in an objective manner [23]. Unintentional motion is decomposed into divergence and jitter through low-pass and high-pass filters, respectively. The amount of jitter from the stabilized and original video is compared. The divergence is also verified, which indicates the amount of expected displacement. For an overall assessment, the blurring caused by the stabilization process is considered.
Most of the video stabilization approaches found in the literature have adopted the interframe transformation fidelity (ITF) [24–28], which can be expressed as the peak signal-to-noise ratio (PSNR) of video frames. More recent approaches have considered the structural similarity (SSIM) [29] as an alternative to PSNR [28].
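As an illustration of how such a frame-to-frame metric can be computed, the sketch below estimates the ITF as the mean PSNR between consecutive grayscale frames; OpenCV and NumPy are assumed, and the file name in the usage comment is a placeholder rather than one of the sequences used in this work.

```python
import cv2
import numpy as np

def interframe_transformation_fidelity(path):
    """Mean PSNR between consecutive grayscale frames (higher means more stable)."""
    cap = cv2.VideoCapture(path)
    prev, psnrs = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
        if prev is not None:
            mse = np.mean((gray - prev) ** 2)
            if mse > 0:
                psnrs.append(10.0 * np.log10(255.0 ** 2 / mse))
        prev = gray
    cap.release()
    return float(np.mean(psnrs)) if psnrs else float("inf")

# Hypothetical usage:
# print(interframe_transformation_fidelity("stabilized_video.avi"))
```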
Liu et al. [30] employed the amount of energy present in the low-frequency portion of the estimated 2D motion as a stability metric. The frame cropping ratio and the distortion are used to assess the stabilization process more generally.
Synthesizing unstable videos from stable videos has been proposed for the evaluation of video stabilization [31] in order to provide the ground-truth of the stable videos. The methods are evaluated according to two aspects: (i) the distance between the stabilized frame and the reference frame and (ii) the average of the SSIM between each pair of consecutive frames.
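As an illustration of aspect (ii), the average SSIM over consecutive frames can be sketched as follows; scikit-image is assumed, and frames are expected as grayscale 2D uint8 arrays.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def mean_consecutive_ssim(frames):
    """Average SSIM between each pair of consecutive grayscale frames."""
    scores = [ssim(frames[i], frames[i + 1], data_range=255)
              for i in range(len(frames) - 1)]
    return float(np.mean(scores))
```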
Due to the weaknesses of ITF for videos with motion, an evaluation method based on the variation of the intersection angles between the global motion vectors, calculated from scale-invariant feature transform (SIFT) keypoints [32], was proposed to evaluate the video stabilization process [33]. For fixed-camera videos, the ITF is still considered, however only over the overlapping frame background instead of the entire frame.
Subjective evaluation
Several methods found in the literature briefly describe and analyze the trajectories made by the camera and the trajectories of the stabilized video [34–38]. These trajectories are usually related to the different factors that compose the estimated 2D motion. For instance, the approaches present the camera path for horizontal and vertical translations and for rotations. Figure 1 shows an example of the path for horizontal translation estimated from the original (blue) and smoothed (green) trajectory.
Horizontal translation path of a camera
From the trajectory, it is possible to identify when a motion occurs and its intensity in the original video, as well as how such motion behaves after smoothing. This type of visualization can be very useful to analyze the behavior of the motion smoothing step used in a certain method. However, its result depends on the technique used to estimate the motion, so the trajectory may not reliably represent the video motion. Thus, the trajectory may not be a good alternative for evaluating stabilization quality, nor an adequate visualization for videos with spatially distinct motion.
Some approaches in the literature present frame sequences, usually superimposed with horizontal and vertical lines [25, 28, 35–37, 39, 40]. Thus, it is possible to check the alignment of a small set of consecutive frames. Figure 2 illustrates an example of this type of visualization, where objects intercepted by the lines are more aligned in the stabilized video.
Sequence of video frames. a Original video. b–d Different versions of the stable video. Extracted from [40]
From the sequence of frames, the displacement of each frame is noticeable, in addition to the number of pixels lost due to the transformation applied to each frame. However, this technique becomes impractical when a large number of frames is considered, compromising the analysis of the entire video.
Furthermore, there are approaches that summarize a video in a single image computed as the average of the gray levels of the frames [41, 42], as shown in Fig. 3. Better-defined images are expected for more stable videos. From this representation, it is possible to check whether the video contains a large amount of motion; however, it is difficult to determine the nature of the motion.
The average gray levels for the first ten frames. a Original video. b Stabilized video
In a broader context, video visualization is concerned with the creation of a new visual representation, obtained from an input video, capable of indicating its characteristics and important events [43]. Video visualization techniques can generate different types of output data, such as another video, a collection of images or a single image. Borgo et al. [43] reported a review of several video visualization techniques proposed over the last years.
In order to help users find scenes with specific motion characteristics in the context of video browsing, motion histograms were proposed in the HSV color space [44]. Motion histograms are obtained by means of motion vectors contained in H.264/AVC codecs. Figure 4 presents an example of the visualization, where each frame of the video is represented by a vertical line, such that the motion direction is mapped by different colors and the motion intensity by brightness values. As a disadvantage, this technique suffers from the presence of noise in the motion vectors, introduced by the motion estimation algorithm [44].
Histograms of motion with HSV color space. Extracted from [44]
Visual rhythm [45] (VR) corresponds to a summary of the temporal information of a video represented as a single image. This is done by concatenating portions of information from each frame of the video. Visual rhythms have generally been applied in the context of video identification and classification, for instance, location of video subtitles, recognition of person actions, detection of video shot boundaries, and detection of face spoofing, among others [46–50]. Unlike these approaches, visual rhythms are used in this work to create a representation of temporal information that allows the evaluation of video stabilization by humans.
Typically, two different paths for constructing the visual rhythms are considered when traversing each video: horizontal and vertical. Such representations differ according to the information that is extracted from the video frames. The vertical rhythm extracts the information from the columns of each frame, whereas the horizontal rhythm is constructed from the rows of each frame.
A single column or row (or a small set of them) of each frame is usually used to construct the visual rhythm. Figure 5 illustrates the construction of a horizontal visual rhythm, as commonly described in the literature. However, the construction of a visual rhythm is very susceptible to different strategies for video traversal, for instance, a zigzag path, where an alternating direction might extract patterns from the video frames more appropriately for a certain problem.
Example of horizontal visual rhythm construction from a small set of columns of each frame
In this work, the visual rhythms are constructed by traversing the video at vertical and horizontal directions. However, as opposed to using a single row or column (or a small set of rows or columns), we use the average of the columns for the vertical rhythm and the average of the rows for the horizontal rhythm.
For both path directions, the rhythm is obtained from the sequential concatenation of the intensity values, such that the jth column of the visual rhythm image corresponds to the intensity values in the jth frame. In the horizontal rhythm, a rotation is performed on the rows in order to obtain the columns in the final image. The width of a visual rhythm corresponds to the number of video frames, whereas its height corresponds to the height or width of the frames for the vertical or horizontal rhythm, respectively.
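A minimal sketch of this construction, following the frame-averaging strategy adopted here, is shown below; OpenCV and NumPy are assumed, and the function name is ours.

```python
import cv2
import numpy as np

def visual_rhythms(path):
    """Build horizontal and vertical visual rhythms of a video.

    Each frame contributes one column to each rhythm image:
    - horizontal rhythm: the average row of the frame (mean over rows, length W),
    - vertical rhythm:   the average column of the frame (mean over columns, length H).
    """
    cap = cv2.VideoCapture(path)
    horiz_cols, vert_cols = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        horiz_cols.append(gray.mean(axis=0))  # average of the rows
        vert_cols.append(gray.mean(axis=1))   # average of the columns
    cap.release()
    # Stack so that the j-th column of each rhythm corresponds to the j-th frame.
    horizontal_rhythm = np.stack(horiz_cols, axis=1).astype(np.uint8)
    vertical_rhythm = np.stack(vert_cols, axis=1).astype(np.uint8)
    return horizontal_rhythm, vertical_rhythm
```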
Figure 6 shows the relations between the pixels of the neighborhood in a visual rhythm image, from which we can see that the visual rhythm maintains the temporal and spatial information of the video. Thus, the temporal behavior of the gray levels in a certain region can be easily visualized. This provides information on how and when movements occur in the video, that is, in addition to being able to distinguish the direction, the intensity, and the form that the movements are spatially arranged, we can verify the frequency of certain type of movement and determine the moments of its occurrence. Stable video is expected to have a more uniform visual rhythm, with fewer twitches and better defined curves. We refer to "neighbor i−1" as the pixel that is on the row immediately above the row of pixel i in the column that represents information extracted from a frame, whereas "neighbor i+1" corresponds to the pixel immediately below the row of pixel i.
Patterns for pixel neighborhood in the visual rhythm
Figure 7 shows the construction of a horizontal rhythm for two 3 ×3 frames. At the transition between frames A and B, the camera moves from right to left, causing the pixels to be to the right of their original position. Thus, when obtaining the horizontal rhythm, the pixels of the column corresponding to frame B are below the equivalent pixels of frame A, thereby forming a declination.
Direction of horizontal visual rhythm
The separation of the vertical and horizontal visual rhythms is important to thoroughly detect and evaluate problems in the video stabilization process. From the vertical rhythm, we can analyze the characteristics of the motion along the y axis: inclined rhythm lines indicate camera movements from bottom to top, whereas declined lines indicate camera movements from top to bottom. From the horizontal rhythm, in turn, we have the characteristics of the motion along the x axis: inclined lines indicate camera movements from left to right, whereas declined lines indicate camera movements from right to left.
The use of only one column or row in the extraction of information from each frame may be inadequate since it considers little information of the frame. In addition, it makes horizontal and vertical separation less accurate. This problem can be seen in Fig. 8, where a vertical movement of the camera occurs, which can influence the horizontal rhythm, depending on the difference of the pixels between the rows. Thus, the average of the columns or rows is adopted in our work to compensate for this difference, making the horizontal rhythm less sensitive to vertical movements, and the vertical rhythm less sensitive to horizontal movements.
Direction of horizontal visual rhythm with a single row
In Fig. 8, both columns of the horizontal rhythm should have either the same or very close values. However, with a single row in each frame, the direction of the rhythm is uncertain.
As post-processing, we apply an adaptive histogram equalization technique through the contrast-limited adaptive histogram equalization (CLAHE) [51]. This is done to improve the contrast of the visual rhythm, facilitating human perception.
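With OpenCV, this post-processing step can be sketched as follows; the clip limit and tile size are illustrative choices, not values prescribed by the method.

```python
import cv2

def enhance_rhythm(rhythm):
    """Apply CLAHE to a visual rhythm (expects an 8-bit grayscale image)."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(rhythm)
```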
The construction of the visual rhythms is not based on motion estimation, as occurs in other visualizations, shown in Section 2. Therefore, their performance is not dependent on any motion estimation technique, which makes the representation of the video motion more reliable. In the context of video stabilization, such independence of methods for motion estimation is crucial to allow a more unbiased assessment of the results.
The complexity of constructing a visual rhythm depends on three main factors: the width W of the video frames, the height H of the frames, and the number N of frames in the video. To calculate an average row in the construction of a horizontal visual rhythm, we need to compute W averages, and the calculation of each average considers H values. Thus, the asymptotic complexity of constructing an average row is Θ(WH). The same complexity holds for the computation of an average column in a vertical visual rhythm. Since either a row or a column must be computed for each frame of the video, the final complexity for constructing a visual rhythm is Θ(WHN).
Among the good practices in the construction of visual rhythms for the evaluation of video stabilization results, we recommend the following (a minimal sketch of the cropping and rescaling steps is given after this list):
Crop the frames of the stabilized video so that there are no pixels with null information (since null information may imply inadequate row or column averages);
Preserve the frame rate of the video in order to not change its number of frames or generate visual rhythms of different sizes;
Rescale the video frames to the original size in order for the visual rhythms to have the same size.
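Complementing the first and third recommendations, the sketch below crops the null (black) borders of a stabilized frame and rescales it back to the original size; treating exactly-zero pixels as null information is an assumption of this sketch, and OpenCV/NumPy are assumed.

```python
import cv2
import numpy as np

def crop_and_restore(frame):
    """Crop away null (black) borders of a stabilized frame and rescale it
    back to the original frame size, so that rhythms keep the same height."""
    h, w = frame.shape[:2]
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    ys, xs = np.nonzero(gray)            # pixels carrying information
    if len(ys) == 0:
        return frame
    cropped = frame[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return cv2.resize(cropped, (w, h), interpolation=cv2.INTER_LINEAR)
```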
Insights into objective metrics
This subsection provides some insights into the calculation of objective metrics from the visual rhythms for the evaluation of the video stabilization process. It is important to mention that we do not intend to replace existing objective metrics in the literature with the proposed objective metric, but to show that a metric can be extracted to distinguish unstable from stable videos.
In the visual rhythm, the behavior of the movement present in the video is represented by the shapes of the curves. A more stable video has rhythms with smoother curves. As shown in Fig. 7, the directions of the visual rhythm can be observed in each pair of pixel columns. Objective metrics can be calculated from the texture of the visual rhythms. We conjecture that a smoother visual rhythm has more regular directions, with fewer abrupt changes between nearby directions. Thus, to obtain a new objective metric from the visual rhythm, the directions and their changes must be computed. Figure 9 illustrates the strategy for calculating the metric.
Main steps of the objective metric strategy
Initially, we calculate the visual rhythm gradients in order to obtain the directions of each pixel of the rhythm. This was implemented through the Sobel filter [52]. The gradients are decomposed into magnitude and angle information.
A thresholding with the Otsu algorithm [53] is applied to the magnitude values to determine the edges of the visual rhythm. This is done in order to consider only the edge angles in the following calculations. Then, a co-occurrence representation is calculated based on the gray-level co-occurrence matrix (GLCM) [54]. However, instead of gray levels, it considers the co-occurrence of the edge angles along the direction of the angles themselves.
Initially, we eliminate the sign of the angles, leaving them in the range of 0 to 180∘. For the calculation of the co-occurrence matrix M, we consider n directions D={d1,d2,...,dn}, resulting in a matrix of size n×n. The angles are then quantized into these directions. For each pixel i belonging to an edge, with angle θi∈D, we determine the closest pixel j in the direction of θi and count a co-occurrence at position \(M_{\theta _{i},\theta _{j}}\), that is, an increment of \(M_{\theta _{i},\theta _{j}}\). When θi does not coincide exactly with one of the quantized directions, two pixels j1 and j2 are involved, and the two corresponding positions of the matrix are incremented proportionally to the angular distances.
Finally, the matrix is normalized by the sum of its elements. Thus, the value of the matrix at position \(M_{\theta _{i},\theta _{j}}\) indicates the probability that θj is the next direction of the visual rhythm, given that the previous one was θi. From the generated co-occurrence matrix, we can calculate features to obtain objective metrics. Among the textural features defined by Haralick and Shanmugam [54], the homogeneity can be expressed as
$$ \text{homogeneity} = \sum_{i=0}^{n}\sum_{j=0}^{n}\frac{1}{1 + (i-j)^{2}}M_{i,j} $$
The homogeneity feature, when calculated from the co-occurrence matrix of the edge angles, assumes larger values the closer the angles of consecutive directions are to each other.
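A minimal sketch of this pipeline is given below. The number of quantized directions and the simplified co-occurrence rule (pairing each edge pixel with its right-hand neighbor instead of following its own gradient direction) are our own illustrative simplifications rather than the exact procedure described above.

```python
import cv2
import numpy as np

def rhythm_homogeneity(rhythm, n_dirs=8):
    """Homogeneity of an angle co-occurrence matrix built on visual-rhythm edges."""
    rhythm = rhythm.astype(np.float64)
    gx = cv2.Sobel(rhythm, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(rhythm, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0       # unsigned angles in [0, 180)

    # Otsu threshold on the gradient magnitude selects the edge pixels.
    mag8 = cv2.normalize(magnitude, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, edges = cv2.threshold(mag8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Quantize edge angles into n_dirs bins and accumulate co-occurrences of the
    # direction of each edge pixel with that of its right-hand edge neighbor.
    bins = (angle / (180.0 / n_dirs)).astype(int) % n_dirs
    M = np.zeros((n_dirs, n_dirs), dtype=np.float64)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        if x + 1 < rhythm.shape[1] and edges[y, x + 1]:
            M[bins[y, x], bins[y, x + 1]] += 1.0
    if M.sum() > 0:
        M /= M.sum()

    # Homogeneity as defined above: larger when consecutive directions are close.
    i, j = np.indices(M.shape)
    return float(np.sum(M / (1.0 + (i - j) ** 2)))
```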
Several other measures could be developed to extract useful information to qualify the stabilization from their visual rhythms. For this, a thorough investigation is necessary to identify which aspects are important to characterize an unstable motion and how to obtain such aspects through visual rhythm. These tasks may involve both handcrafted features and machine learning techniques.
This section describes and evaluates the experimental results obtained with two datasets. All the videos considered in our experiments were obtained from two publicly available databases: GaTech VideoStabFootnote 1 [55] and the database proposed by Liu et al.Footnote 2 [30].
Table 1 reports a summary of the first database with videos in alphabetical order. We will refer to the videos in this database through the identifiers assigned to each of them. Table 2 presents the database proposed by Liu et al. [30], which is divided into six categories, containing a total of 139 videos. We will refer to the videos in this dataset by the name of the category followed by the identifier of each video, attributed by the authors. Due to space limitations, we report only a few visual rhythms that illustrate the results obtained from these databases, which have been confirmed in the other videos.
Table 1 Video sequences from the first dataset
Table 2 Categories and amount of videos present in the second dataset, proposed by Liu et al. [30]
Figure 10 presents the visual rhythms generated for video #12 before and after the video stabilization process. In order to obtain the stabilized version of the video, we submit it to YouTube, which applies one of the state-of-the-art digital video stabilization approaches [55]. The width of all the images presented in this section was kept constant for better organization.
Visual rhythms for video #12. a Horizontal visual rhythm for original video. b Horizontal visual rhythm for stabilized video. c Vertical visual rhythm for original video. d Vertical visual rhythm for stabilized video
From the horizontal visual rhythm of the unstable video, shown in Fig. 10a, we can notice the twitches and irregularities present in the lines. On the other hand, in the horizontal visual rhythm of the stabilized video, shown in Fig. 10b, there are more continuous, well-defined, and smoother lines. Analogously, the vertical visual rhythm of the unstable video, shown in Fig. 10c, has twitches and irregularities that are eliminated in the visual rhythm of the stabilized video, shown in Fig. 10d. We can also observe that the vertical and horizontal rhythms are not influenced by each other, since certain motion regions occur in one but not in the other.
For the video Regular8, we present a comparison of the visual rhythms obtained through the average of the rows or columns and through the central row or column. In this case, we present the horizontal and vertical visual rhythms only for the unstable video.
It can be seen from Fig. 11b that the visual rhythm built from only one row can be negatively influenced by the vertical motion of the video, with artifacts that do not correspond to the horizontal motion, such as the discontinuities present in the rhythm, whereas the visual rhythm obtained from the average of the rows (Fig. 11a) is more consistent with the motion present in the video. An analogous behavior can be seen in the vertical rhythms shown in Fig. 11c and d.
Visual rhythms for original video Regular8. a Horizontal visual rhythm with mean row. b Horizontal visual rhythm with central row. c Vertical visual rhythm with mean column. d Vertical visual rhythm with central column
Figure 12 presents the visual rhythms of the unstable video #1. For this video, we present the rhythms obtained after the stabilization of YouTube, in addition to a stabilization with inferior performance. Figure 13 shows the horizontal and vertical rhythms for both versions of the stabilized video.
Visual rhythms for original video #1. a Horizontal visual rhythm. b Vertical visual rhythm
Visual rhythms for stabilized video #1. a Horizontal visual rhythms for weak stabilization. b Horizontal visual rhythms for YouTube stabilization. c Vertical visual rhythms for weak stabilization. d Vertical visual rhythms for YouTube stabilization
By comparing the visual rhythms of the unstable video with the rhythms of the stabilized videos, it is possible to confirm the validity of using visual rhythms to distinguish stable from unstable versions of a video. In addition, from the visual rhythms of the two different methods, illustrated in Fig. 13, we can observe fewer twitches and smoother lines throughout the entire rhythm of the YouTube stabilization, both for the horizontal and the vertical rhythm. This shows that the visual rhythm can be used in the comparison of two different video stabilization methods.
The horizontal and vertical rhythms for the original and stabilized video QuickRotation0 are shown in Fig. 14. In this case, the video was stabilized with the method proposed by Liu et al. [30]. The version of the video QuickRotation0 stabilized with YouTube is not shown here, since that method modified the frame rate, reducing the number of frames and making the visual rhythms of the stabilized video considerably narrower than those of the original video.
Visual rhythms for video QuickRotation0. a Horizontal visual rhythm for original video. b Horizontal visual rhythm for stabilized video. c Vertical visual rhythm for original video. d Vertical visual rhythm for stabilized video
Besides confirming the smoother lines obtained in the visual rhythm of the stabilized video, it is possible to observe almost totally vertical lines in the horizontal visual rhythms, which indicates a very fast horizontal movement of the camera. It is also possible to see that the lines in the horizontal rhythm are inclined at their origin, which indicates that the displacement is from left to right.
In Fig. 15, we present the horizontal and vertical visual rhythms for the original and stabilized video Zooming0. The video was stabilized through the method proposed by Liu et al. [30].
Visual rhythms for video Zooming0. a Horizontal visual rhythm for original video. b Horizontal visual rhythm for stabilized video. c Vertical visual rhythm for original video. d Vertical visual rhythm for stabilized video
In the visual rhythms for video Zooming0, it is also possible to see the presence of well-defined, regular lines in the visual rhythm of the stabilized video. In addition, it is possible to observe inclined and declined lines occurring simultaneously at the beginning of the horizontal visual rhythms, which indicates the presence of zoom.
Figure 16 shows the visual rhythms for a video with a low-texture background and a moving object. This scenario can be challenging for the proposed representation, since we do not separate the background from the objects in the construction of the rhythm. Nevertheless, the visual rhythm representation makes it possible to distinguish the unstable from the stable video.
Visual rhythms for video with moving object on low-texture background. a Horizontal visual rhythm for original video. b Horizontal visual rhythm for stabilized video. c Vertical visual rhythm for original video. d Vertical visual rhythm for stabilized video
Table 3 reports the results of the homogeneity extracted from the horizontal and vertical visual rhythms for the video sequences listed in Table 1, where the original videos are stabilized by the YouTube method [55]. We can observe that the obtained results are able to distinguish original and stabilized videos. However, further investigation is needed regarding the extraction of other features from the co-occurrence matrix, which may be complementary to the homogeneity information. In addition, the results from the proposed metrics will be compared to objective metrics available in the literature.
Table 3 Results of homogeneity for video sequences
Conclusions and future work
This work presented the use of visual rhythms for the subjective evaluation of video stabilization. The vertical visual rhythm is constructed from the average of the columns of each frame, whereas the horizontal visual rhythm is constructed from the average of the rows of each frame.
We were able to characterize and separate the horizontal and vertical movements of the video, determining how and when they happen. The stability of a video can be determined from the regularity and smoothness of the curves of each visual rhythm. In addition, the presence of more complex movements, such as zoom, can be verified in the visual rhythm.
As directions for future work, we intend to thoroughly investigate objective evaluation metrics for the stabilization of videos, calculated from the visual rhythms.
Data are publicly available.
https://www.cc.gatech.edu/cpl/projects/videostabilization/
http://liushuaicheng.org/SIGGRAPH2013/database.html
AVC:
Advanced video coding
CLAHE:
Contrast-limited adaptive histogram equalization
GLCM:
Gray level co-occurrence matrix
HSV:
Hue saturation value
ITF:
Interframe transformation fidelity
PSNR:
Peak signal-to-noise ratio
SIFT:
Scale-invariant feature transform
VR:
Visual rhythm
A. A. Amanatiadis, I Andreadis, Digital image stabilization by independent component analysis. IEEE Trans. Instrum. Meas.59(7), 1755–1763 (2010).
J. Y. Chang, W. F. Hu, M. H. Cheng, B. S. Chang, Digital image translational and rotational motion stabilization using optical flow technique. IEEE Trans. Consum. Electron.48(1), 108–115 (2002).
S. Ertürk, Real-time digital image stabilization using Kalman filters. Real Time Imaging. 8(4), 317–328 (2002).
MATH Article Google Scholar
R. Jia, H. Zhang, L. Wang, J. Li, in International Conference on Artificial Intelligence and Computational Intelligence. vol. 3. Digital image stabilization based on phase correlation (IEEE, 2009). https://doi.org/10.1109/aici.2009.489.
S. J. Ko, S. H. Lee, K. H. Lee, Digital image stabilizing algorithms based on bit-plane matching. IEEE Trans. Consum. Electron.44(3), 617–622 (1998).
S. Kumar, H. Azartash, M. Biswas, T. Nguyen, Real-time affine global motion estimation using phase correlation and its application for digital image stabilization. IEEE Trans. Image Process.20(12), 3406–3418 (2011).
MathSciNet MATH Article Google Scholar
C. T. Lin, C. T. Hong, C. T. Yang, Real-time digital image stabilization system using modified proportional integrated controller. IEEE Trans. Circ. Syst. Video Technol.19(3), 427–431 (2009).
L. Marcenaro, G. Vernazza, C. S. Regazzoni, in Proceedings 2001 International Conference on Image Processing (Cat. No.01CH37205). Image stabilization algorithms for video-surveillance applications (IEEE, 2001). https://doi.org/10.1109/icip.2001.959025.
C. Morimoto, R. Chellappa, in 13th International Conference on Pattern Recognition. vol. 3. Fast electronic digital image stabilization (IEEE, 1996). https://doi.org/10.1109/icpr.1996.546956.
Y. G. Ryu, M. J. Chung, Robust online digital image stabilization based on point-feature trajectory without accumulative global motion estimation. IEEE Sig. Process. Lett.19(4), 223–226 (2012).
J. Li, T. Xu, K. Zhang, Real-time feature-based video stabilization on FPGA. IEEE Trans. Circ. Syst. Video Technol.27(4), 907–919 (2017).
M. Okade, G. Patel, P. K. Biswas, Robust learning-based camera motion characterization scheme with applications to video stabilization. IEEE Trans. Circ. Syst. Video Technol.26(3), 453–466 (2016).
M. R. Souza, H. Pedrini, Combination of local feature detection methods for digital video stabilization. SIViP. 12(8), 1513–1521 (2018).
M. R. Souza, H. Pedrini, Digital video stabilization based on adaptive camera trajectory smoothing. EURASIP J. Image Video Process.2018(37), 1–11 (2018).
M. R. Souza, L. F. R. Fonseca, H. Pedrini, Improvement of global motion estimation in two-dimensional digital video stabilization methods. IET Image Process.12(12), 2204–2211 (2018).
M. V. M. Cirne, H. Pedrini, in Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications. A video summarization method based on spectral clustering (Springer, 2013), pp. 479–486. https://doi.org/10.1007/978-3-642-41827-3_60.
M. V. M. Cirne, H. Pedrini, in Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications. Summarization of videos by image quality assessment (Springer, 2014), pp. 901–908.
T. S. Huang, Image Sequence Analysis. vol. 5 (Springer Science & Business Media, Berlin, 2013).
B. Cardani, Optical image stabilization for digital cameras. IEEE Control Syst.26(2), 21–22 (2006).
C. Buehler, M. Bosse, L. McMillan, in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001. Non-metric image-based rendering for video stabilization (IEEE, 2001). https://doi.org/10.1109/cvpr.2001.991019.
G. Zhang, W. Hua, X. Qin, Y. Shao, H. Bao, Video stabilization based on a 3D perspective camera model. Vis. Comput.25(11), 997–1008 (2009).
R. C. Gonzalez, R. E. Woods, Digital Image Processing (Prentice Hall, Upper Saddle River, 2002).
M. Niskanen, O. Silvén, M. Tico, in IEEE International Conference on Multimedia and Expo. video stabilization performance assessment (IEEE, 2006). https://doi.org/10.1109/icme.2006.262522.
S. Battiato, G. Gallo, G. Puglisi, S. Scellato, in 14th International Conference on Image Analysis and Processing (ICIAP 2007). SIFT features tracking for video stabilization (IEEE, 2007). https://doi.org/10.1109/iciap.2007.4362878.
S. Choi, T. Kim, W. Yu, in 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems. Robust video stabilization to outlier motion using adaptive RANSAC (IEEE, 2009). https://doi.org/10.1109/iros.2009.5354240.
G. Puglisi, S. Battiato, A robust image alignment algorithm for video stabilization purposes. IEEE Trans. Circ. Syst. Video Technol.21(10), 1390–1400 (2011).
D. Shukla, R. K. Jha, A robust video stabilization technique using integral frame projection warping. SIViP. 9(6), 1287–1297 (2015).
B. H. Chen, A. Kopylov, S. C. Huang, O. Seredin, R. Karpov, S. Y. Kuo, et al., Improved global motion estimation via motion vector clustering for video stabilization. Eng. Appl. Artif. Intell.54:, 39–48 (2016).
S. Liu, L. Yuan, P. Tan, J. Sun, Bundled camera paths for video stabilization. ACM Trans. Graph.32(4), 78 (2013).
H. Qu, L. Song, G. Xue, in 2013 Visual Communications and Image Processing (VCIP). Shaking video synthesis for video stabilization performance assessment (IEEE, 2013). https://doi.org/10.1109/vcip.2013.6706422.
D. G. Lowe, in Proceedings of the Seventh IEEE International Conference on Computer Vision. Object recognition from local scale-invariant features (IEEE, 1999). https://doi.org/10.1109/iccv.1999.790410.
B. Chen, J. Zhao, Y. Wang, in Proceedings of the 2016 International Conference on Advanced Materials Science and Environmental Engineering. Research on evaluation method of video stabilization (Atlantis Press, 2016). https://doi.org/10.2991/amsee-16.2016.67.
K. Ratakonda, in ISCAS '98. Proceedings of the 1998 IEEE International Symposium on Circuits and Systems (Cat. No.98CH36187). Real-time digital video stabilization for multi-media applications (IEEE, 1998). https://doi.org/10.1109/iscas.1998.698760.
A. Litvin, J. Konrad, W. C. Karl, in Proceedings of SPI. vol. 5022. Probabilistic video stabilization using Kalman filtering and mosaicing (International Society for Optics and Photonics, 2003), pp. 663–674.
H. C. Chang, S. H. Lai, K. R. Lu, in 2004 IEEE International Conference on Multimedia and Expo (ICME) (IEEE Cat. No.04TH8763). A robust and efficient video stabilization algorithm (IEEE, 2004). https://doi.org/10.1109/icme.2004.1394117.
Y. Matsushita, E. Ofek, W. Ge, X. Tang, H. Y. Shum, Full-frame video stabilization with motion inpainting. IEEE Trans. Pattern Anal. Mach. Intell.28(7), 1150–1163 (2006).
B. Y. Chen, K. Y. Lee, W. T. Huang, J. S. Lin, Wiley Online Library. Capturing intention-based full-frame video stabilization. Comput. Graph. Forum. 27(7), 1805–1814 (2008).
Y. Shen, P. Guturu, T. Damarla, B. P. Buckles, K. R. Namuduri, Video stabilization using principal component analysis and scale invariant feature transform in particle filter framework. IEEE Trans. Consum. Electron.55(3), 1714–1721 (2009).
J. Yang, D. Schonfeld, M. Mohamed, Robust video stabilization based on particle filter tracking of projected camera motion. IEEE Trans. Circ. Syst. Video Technol.19(7), 945–954 (2009).
N. Joshi, W. Kienzle, M. Toelle, M. Uyttendaele, M. F. Cohen, Real-time hyperlapse creation via optimal frame selection. ACM Trans. Graph.34(4), 63 (2015).
Q. Zheng, M. Yang, A video stabilization method based on inter-frame image matching score. Glob. J. Comput. Sci. Technol.17(1-F) (2017).
R. Borgo, M. Chen, B. Daubney, E. Grundy, G. Heidemann, B. Höferlin, et al., Wiley Online Library. State of the art report on video-based graphics and video visualization. Comput. Graph. Forum. 31(8), 2450–2477 (2012).
K. Schoeffmann, M. Lux, M. Taschwer, L. Boeszoermenyi, in 2009 IEEE International Conference on Multimedia and Expo. Visualization of video motion in context of video browsing (IEEE, 2009). https://doi.org/10.1109/icme.2009.5202582.
M. G. Chung, J. Lee, H. Kim, S. M. H. Song, W. M. Kim, Automatic video segmentation based on spatio-temporal features. Korea Telecom J.4(1), 4–14 (1999).
F. B. Valio, H. Pedrini, N. J. Leite, in 16th Iberoamerican Congress on Pattern Recognition. Fast rotation-invariant video caption detection based on visual rhythm (PucónChile, 2011), pp. 157–164.
A. Pinto, W. R. Schwartz, H. Pedrini, A. Rezende Rocha, Using visual rhythms for detecting video-based facial spoof attacks. IEEE Trans. Inf. Forensic. Secur.10(5), 1025–1038 (2015).
B. S. Torres, H. Pedrini, Detection of complex video events through visual rhythm. Vis. Comput.34(2), 145–165 (2018).
A. Silva Pinto, H. Pedrini, W. Schwartz, A. Rocha, in 25th Conference on Graphics, Patterns and Images Ouro Preto-MG. Video-based face spoofing detection through visual rhythm analysis (IEEEBrazil, 2012), pp. 221–228.
T. P. Moreira, D. Menotti, H. Pedrini, in IEEE International Conference on Acoustics, Speech, and Signal Processing. First-person action recognition through visual rhythm texture description (New Orleans, LA, USA, 2017), pp. 2627–2631. https://doi.org/10.1109/icassp.2017.7952632.
K. Zuiderveld, in Graphics Gems IV. Contrast limited adaptive histogram equalization (Academic Press Professional, Inc., 1994), pp. 474–485. https://doi.org/10.1016/b978-0-12-336156-1.50061-6.
I. Sobel, G. Feldman, A 3x3 isotropic gradient operator for image processing. Presented at the Stanford Artificial Intelligence Project, 271–272 (1968).
N. Otsu, A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man. Cybernet.9(1), 62–66 (1979).
R. M. Haralick, K. Shanmugam, Textural features for image classification. IEEE Trans. Syst. Man. Cybernet.SMC-3(6), 610–621 (1973).
M. Grundmann, V. Kwatra, I. Essa, in IEEE Conference on Computer Vision and Pattern Recognition. Auto-directed video stabilization with robust L1 optimal camera paths (IEEE, 2011), pp. 225–232. https://doi.org/10.1109/cvpr.2011.5995525.
The authors are thankful to the FAPESP (grants #2014/12236-1 and #2017/12646-3) and CNPq (grant #305169/2015-7) for their financial support.
Marcos Roberto e Souza and Helio Pedrini contributed equally to this work.
Institute of Computing, University of Campinas, Av. Albert Einstein 1251, Campinas, 13083-852, Brazil
Marcos Roberto e Souza & Helio Pedrini
Marcos Roberto e Souza
Helio Pedrini
HP and MRS contributed equally to this work. Both authors carried out the in-depth analysis of the experimental results and checked the correctness of the evaluation. Both authors took part in the writing and proof reading of the final version of the paper. The authors read and approved the final manuscript.
Correspondence to Helio Pedrini.
e Souza, M.R., Pedrini, H. Visual rhythms for qualitative evaluation of video stabilization. J Image Video Proc. 2020, 19 (2020). https://doi.org/10.1186/s13640-020-00508-4
Qualitative evaluation
Discrete & Continuous Dynamical Systems - A
March 2016, Volume 36, Issue 3
An improved Hardy inequality for a nonlocal operator
Boumediene Abdellaoui and Fethi Mahmoudi
2016, 36(3): 1143-1157 doi: 10.3934/dcds.2016.36.1143
Let $0 < s < 1$ and $1< p < 2$ be such that $ps < N$ and let $\Omega$ be a bounded domain containing the origin. In this paper we prove the following improved Hardy inequality:
Given $1 \le q < p$, there exists a positive constant $C\equiv C(\Omega, q, N, s)$ such that $$ \int\limits_{\mathbb{R}^N}\int\limits_{\mathbb{R}^N} \frac{|u(x)-u(y)|^{p}}{|x-y|^{N+ps}}\,dx\,dy - \Lambda_{N,p,s} \int\limits_{\mathbb{R}^N} \frac{|u(x)|^p}{|x|^{ps}}\,dx \geq C \int\limits_{\Omega}\int\limits_{\Omega}\frac{|u(x)-u(y)|^p}{|x-y|^{N+qs}}\,dx\,dy $$ for all $u \in \mathcal{C}_0^\infty({\Omega})$. Here $\Lambda_{N,p,s}$ is the optimal constant in the Hardy inequality (1.1).
Boumediene Abdellaoui, Fethi Mahmoudi. An improved Hardy inequality for a nonlocal operator. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1143-1157. doi: 10.3934/dcds.2016.36.1143.
Pure discrete spectrum for a class of one-dimensional substitution tiling systems
Marcy Barge
We prove that if a primitive and non-periodic substitution is injective on initial letters, constant on final letters, and has Pisot inflation, then the $\mathbb{R}$-action on the corresponding tiling space has pure discrete spectrum. As a consequence, all $\beta$-substitutions for $\beta$ a Pisot simple Parry number have tiling dynamical systems with pure discrete spectrum, as do the Pisot systems arising, for example, from substitutions associated with the Jacobi-Perron and Brun continued fraction algorithms.
Marcy Barge. Pure discrete spectrum for a class of one-dimensional substitution tiling systems. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1159-1173. doi: 10.3934/dcds.2016.36.1159.
Particle approximation of the one dimensional Keller-Segel equation, stability and rigidity of the blow-up
Vincent Calvez and Thomas O. Gallouët
2016, 36(3): 1175-1208 doi: 10.3934/dcds.2016.36.1175
We investigate a particle system which is a discrete and deterministic approximation of the one-dimensional Keller-Segel equation with a logarithmic potential. The particle system is derived from the gradient flow of the homogeneous free energy written in Lagrangian coordinates. We focus on the description of the blow-up of the particle system, namely: the number of particles involved in the first aggregate, and the limiting profile of the rescaled system. We exhibit basins of stability for which the number of particles is critical, and we prove a weak rigidity result concerning the rescaled dynamics. This work is complemented with a detailed analysis of the case where only three particles interact.
Vincent Calvez, Thomas O. Gallouët. Particle approximation of the one dimensional Keller-Segel equation, stability and rigidity of the blow-up. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1175-1208. doi: 10.3934/dcds.2016.36.1175.
Nonlocal-interaction equations on uniformly prox-regular sets
José A. Carrillo, Dejan Slepčev and Lijiang Wu
We study the well-posedness of a class of nonlocal-interaction equations on general domains $\Omega\subset \mathbb{R}^{d}$, including nonconvex ones. We show that under mild assumptions on the regularity of domains (uniform prox-regularity), for $\lambda$-geodesically convex interaction and external potentials, the nonlocal-interaction equations have unique weak measure solutions. Moreover, we show quantitative estimates on the stability of solutions which quantify the interplay of the geometry of the domain and the convexity of the energy. We use these results to investigate on which domains and for which potentials the solutions aggregate to a single point as time goes to infinity. Our approach is based on the theory of gradient flows in spaces of probability measures.
José A. Carrillo, Dejan Slepčev, Lijiang Wu. Nonlocal-interaction equations on uniformly prox-regular sets. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1209-1247. doi: 10.3934/dcds.2016.36.1209.
The $\beta$-transformation with a hole
Lyndsey Clark
This paper extends those of Glendinning and Sidorov [3] and of Hare and Sidorov [6] from the case of the doubling map to the more general $\beta$-transformation. Let $\beta \in (1,2)$ and consider the $\beta$-transformation $T_\beta(x)=\beta x$ mod 1. Let $\mathcal{J}_\beta(a,b) := \{ x \in (0,1) : T_\beta^n(x) \notin (a,b) \text{ for all } n \geq 0 \}$. An integer $n$ is bad for $(a,b)$ if every periodic point of period $n$ for $T_\beta$ intersects $(a,b)$. Denote the set of all bad $n$ for $(a,b)$ by $B_\beta(a,b)$. In this paper we completely describe the following sets: \begin{align*} D_0(\beta) &= \{ (a,b) \in [0,1)^2 : \mathcal{J}(a,b) \neq \emptyset \}, \\ D_1(\beta) &= \{ (a,b) \in [0,1)^2 : \mathcal{J}(a,b) \text{ is uncountable} \}, \\ D_2(\beta) &= \{ (a,b) \in [0,1)^2 : B_\beta(a,b) \text{ is finite} \}. \end{align*}
Lyndsey Clark. The $\beta$-transformation with a hole. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1249-1269. doi: 10.3934/dcds.2016.36.1249.
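To make the definitions above concrete, here is a small numerical sketch (not from the paper; the choice of $\beta$, the hole $(a,b)$, and the finite iteration horizon are arbitrary) that approximates membership in the survivor set $\mathcal{J}_\beta(a,b)$ by checking the first $N$ iterates of an orbit.

```r
# Illustrative only: does the orbit of x under T_beta(x) = beta*x mod 1
# avoid the hole (a, b) for the first N iterates?
survives <- function(x, beta, a, b, N = 1000) {
  for (n in 0:N) {
    if (x > a && x < b) return(FALSE)   # orbit has entered the hole
    x <- (beta * x) %% 1
  }
  TRUE                                   # only a finite-time approximation
}

beta <- (1 + sqrt(5)) / 2                # an example beta in (1, 2)
xs   <- seq(0.001, 0.999, by = 0.001)
mean(sapply(xs, survives, beta = beta, a = 0.20, b = 0.25))  # surviving fraction
```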
Rigidity of Hamenstädt metrics of Anosov flows
Let $\varphi$ be a $C^\infty$ transversely symplectic topologically mixing Anosov flow such that $dim E^{su}\geq 2$. We suppose that the weak distributions of $\varphi$ are $C^1$. If the length Hamenstädt metrics of $\varphi$ are sub-Riemannian then we prove that the weak distributions of $\varphi$ are necessarily $C^\infty$. Combined with our previous rigidity result in [5] we deduce the classification of such Anosov flows with $C^1$ weak distributions provided that the length Hamenstädt metrics are sub-Riemannian.
Yong Fang. Rigidity of Hamenstädt metrics of Anosov flows. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1271-1278. doi: 10.3934/dcds.2016.36.1271.
Reaction-diffusion equations with fractional diffusion on non-smooth domains with various boundary conditions
Ciprian G. Gal and Mahamadi Warma
We investigate the long term behavior in terms of finite dimensional global attractors and (global) asymptotic stabilization to steady states, as time goes to infinity, of solutions to a non-local semilinear reaction-diffusion equation associated with the fractional Laplace operator on non-smooth domains subject to Dirichlet, fractional Neumann and Robin boundary conditions.
Ciprian G. Gal, Mahamadi Warma. Reaction-diffusion equations with fractional diffusion on non-smooth domains with various boundary conditions. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1279-1319. doi: 10.3934/dcds.2016.36.1279.
Wandering continua for rational maps
Guizhen Cui and Yan Gao
We prove that a Lattès map admits an always full wandering continuum if and only if it is flexible. The full wandering continuum is a line segment in a bi-infinite or one-side-infinite geodesic under the flat metric.
Guizhen Cui, Yan Gao. Wandering continua for rational maps. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1321-1329. doi: 10.3934/dcds.2016.36.1321.
Asymptotic stability of non-monotone traveling waves for time-delayed nonlocal dispersion equations
Rui Huang, Ming Mei, Kaijun Zhang and Qifeng Zhang
This paper is concerned with the stability of non-monotone traveling waves for a nonlocal dispersion equation with time-delay, a time-delayed integro-differential equation. When the equation is crossing-monostable, both the equation and the traveling waves lose their monotonicity, and the traveling waves oscillate when the time-delay is large. In this paper, we prove that all non-critical traveling waves (those whose wave speed is greater than the minimum speed), including the oscillatory waves, are time-exponentially stable when the initial perturbations around the waves are small. The adopted approach is still the technical weighted-energy method, but with a new development. Numerical simulations in different cases are also carried out and further confirm our theoretical result. Finally, as a corollary of our stability result, we immediately obtain the uniqueness of the traveling waves for the non-monotone integro-differential equation, which, as far as we know, was previously open.
Rui Huang, Ming Mei, Kaijun Zhang, Qifeng Zhang. Asymptotic stability of non-monotone traveling waves for time-delayed nonlocal dispersion equations. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1331-1353. doi: 10.3934/dcds.2016.36.1331.
Equivalence between duality and gradient flow solutions for one-dimensional aggregation equations
François James and Nicolas Vauchelet
Existence and uniqueness of a global-in-time measure solution for a one-dimensional nonlinear aggregation equation is considered. Such a system can be written as a conservation law with a velocity field computed through a self-consistent interaction potential. Blow-up of regular solutions is now well established for such systems. In Carrillo et al. (Duke Math J (2011)) [18], a theory of existence and uniqueness based on the geometric approach of gradient flows on Wasserstein space has been developed. We propose in this work to establish the link between this approach and duality solutions. The latter concept of solution allows, in particular, one to define a flow associated with the velocity field. An existence and uniqueness theory for duality solutions is then developed in the spirit of James and Vauchelet (NoDEA (2013)) [26]. However, since duality solutions are only known in one dimension, we restrict our study to the one-dimensional case.
François James, Nicolas Vauchelet. Equivalence between duality and gradient flow solutions for one-dimensional aggregation equations. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1355-1382. doi: 10.3934/dcds.2016.36.1355.
Passive scalars, moving boundaries, and Newton's law of cooling
Juhi Jang and Ian Tice
We study the evolution of passive scalars in both rigid and moving slab-like domains, in both horizontally periodic and infinite contexts. The scalar is required to satisfy Robin-type boundary conditions corresponding to Newton's law of cooling, which lead to nontrivial equilibrium configurations. We study the equilibration rate of the passive scalar in terms of the parameters in the boundary condition and the equilibration rates of the background velocity field and moving domain.
Juhi Jang, Ian Tice. Passive scalars, moving boundaries, and Newton's law of cooling. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1383-1413. doi: 10.3934/dcds.2016.36.1383.
Effective boundary conditions of the heat equation on a body coated by functionally graded material
Huicong Li
We consider the linear heat equation on a bounded domain, which has two components with a thin coating surrounding a body (of metallic nature), subject to the Dirichlet boundary condition. The coating is composed of two layers, the pure ceramic part and the mixed part. The mixed part is considered to be a functionally graded material (FGM) that is meant to make a smooth transition from being metallic to being ceramic. The diffusion tensor is isotropic on the body and allowed to be anisotropic on the coating, and the size of the diffusion tensor may differ significantly in these components. We find effective boundary conditions (EBCs) that are approximately satisfied by the solution of the heat equation on the boundary of the body. A concrete example is considered to study the effect of the FGM coating. We also provide numerical simulations to verify our theoretical results.
Huicong Li. Effective boundary conditions of the heat equation on a body coated by functionally graded material. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1415-1430. doi: 10.3934/dcds.2016.36.1415.
Positive solutions of a nonlinear Schrödinger system with nonconstant potentials
Haidong Liu and Zhaoli Liu
Existence of a solution of the nonlinear Schrödinger system \begin{equation*} \left\{ \begin{aligned} & - \Delta u + V_1(x) u=\mu_1(x) u^3 + \beta(x) u v^2 \qquad\mbox{in}\ \mathbb{R}^N, \\ & - \Delta v + V_2(x) v=\beta(x) u^2 v + \mu_2(x) v^3 \qquad \mbox{in}\ \mathbb{R}^N, \\ & u>0,\ v>0,\quad u,\ v\in H^1(\mathbb{R}^N), \end{aligned} \right. \end{equation*} where $N=1,2,3$, and $V_j,\mu_j,\beta$ are continuous functions of $x\in\mathbb{R}^N$, is proved provided that either $V_j,\mu_j,\beta$ are invariant under the action of a finite subgroup of $O(N)$ or there is no such invariance assumption. In either case the result is obtained both for $\beta$ small and for $\beta$ large in terms of $V_j$ and $\mu_j$.
Haidong Liu, Zhaoli Liu. Positive solutions of a nonlinear Schrödinger system with nonconstant potentials. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1431-1464. doi: 10.3934/dcds.2016.36.1431.
Young towers for product systems
Stefano Luzzatto and Marks Ruziboev
We show that the direct product of maps with Young towers admits a Young tower whose return times decay at a rate which is bounded above by the slowest of the rates of decay of the return times of the component maps. An application of this result, together with other results in the literature, yields various statistical properties for the direct product of various classes of systems, including Lorenz-like maps, multimodal maps, piecewise $C^2$ interval maps with critical points and singularities, Hénon maps and partially hyperbolic systems.
Stefano Luzzatto, Marks Ruziboev. Young towers for product systems. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1465-1491. doi: 10.3934/dcds.2016.36.1465.
One smoothing property of the scattering map of the KdV on $\mathbb{R}$
Alberto Maspero and Beat Schaad
In this paper we prove that in appropriate weighted Sobolev spaces, in the case of no bound states, the scattering map of the Korteweg-de Vries (KdV) on $\mathbb{R}$ is a perturbation of the Fourier transform by a regularizing operator. As an application of this result, we show that the difference of the KdV flow and the corresponding Airy flow is 1-smoothing.
Alberto Maspero, Beat Schaad. One smoothing property of the scattering map of the KdV on $\mathbb{R}$. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1493-1537. doi: 10.3934/dcds.2016.36.1493.
On the existence of global strong solutions to the equations modeling a motion of a rigid body around a viscous fluid
Šárka Nečasová and Joerg Wolf
The paper deals with the global existence of a strong solution to the equations modeling a motion of a rigid body around a viscous fluid. Moreover, estimates of the second gradients of velocity and pressure are given.
Šárka Nečasová, Joerg Wolf. On the existence of global strong solutions to the equations modeling a motion of a rigid body around a viscous fluid. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1539-1562. doi: 10.3934/dcds.2016.36.1539.
Global existence of solutions for the three-dimensional Boussinesq system with anisotropic data
Yuming Qin, Yang Wang, Xing Su and Jianlin Zhang
In this paper, we study the three-dimensional axisymmetric Boussinesq equations with swirl. We establish the global existence of solutions for the three-dimensional axisymmetric Boussinesq equations for a family of anisotropic initial data.
Yuming Qin, Yang Wang, Xing Su, Jianlin Zhang. Global existence of solutions for the three-dimensional Boussinesq system with anisotropic data. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1563-1581. doi: 10.3934/dcds.2016.36.1563.
Large-time behavior of the full compressible Euler-Poisson system without the temperature damping
Zhong Tan, Yong Wang and Fanhui Xu
We study the three-dimensional full compressible Euler-Poisson system without the temperature damping. Using a general energy method, we prove the optimal decay rates of the solutions and their higher order derivatives. We show that the optimal decay rates are algebraic but not exponential, due to the absence of the temperature damping.
Zhong Tan, Yong Wang, Fanhui Xu. Large-time behavior of the full compressible Euler-Poisson system without the temperature damping. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1583-1601. doi: 10.3934/dcds.2016.36.1583.
Infinitely many solutions for an elliptic problem with double critical Hardy-Sobolev-Maz'ya terms
Chunhua Wang and Jing Yang
In this paper, by an approximating argument, we obtain infinitely many solutions for the following problem \begin{equation*} \left\{ \begin{array}{ll} -\Delta u = \mu \frac{|u|^{2^{*}(t)-2}u}{|y|^{t}} + \frac{|u|^{2^{*}(s)-2}u}{|y|^{s}} + a(x) u, & \hbox{$\text{in} \Omega$}, \\ u=0,\,\, &\hbox{$\text{on}~\partial \Omega$}, \\ \end{array} \right. \end{equation*} where $\mu\geq0,2^{*}(t)=\frac{2(N-t)}{N-2},2^{*}(s) = \frac{2(N-s)}{N-2},0\leq t < s < 2,x = (y,z)\in \mathbb{R}^{k} \times \mathbb{R}^{N-k},2 \leq k < N,(0,z^*)\subset \bar{\Omega}$ and $\Omega$ is an open bounded domain in $\mathbb{R}^{N}.$ We prove that if $N > 6+t$ when $\mu>0$ and $N > 6+s$ when $\mu=0,$ $a((0,z^*)) > 0,$ $\Omega$ satisfies some geometric conditions, then the above problem has infinitely many solutions.
Chunhua Wang, Jing Yang. Infinitely many solutions for an elliptic problem with double critical Hardy-Sobolev-Maz'ya terms. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1603-1628. doi: 10.3934/dcds.2016.36.1603.
On the shape Conley index theory of semiflows on complete metric spaces
Jintao Wang, Desheng Li and Jinqiao Duan
In this work we develop the shape Conley index theory for local semiflows on complete metric spaces by using a weaker notion of shape index pairs. This allows us to calculate the shape index of a compact isolated invariant set $K$ by restricting the system to any closed subset that contains a local unstable manifold of $K$, and hence significantly increases the flexibility of the calculation of shape indices and Morse equations. In particular, it allows one to calculate shape indices and Morse equations for an infinite-dimensional system by using only the unstable manifolds of the invariant sets, without requiring the system to be two-sided on the unstable manifolds.
Jintao Wang, Desheng Li, Jinqiao Duan. On the shape Conley index theory of semiflows on complete metric spaces. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1629-1647. doi: 10.3934/dcds.2016.36.1629.
Lipschitz dependence of viscosity solutions of Hamilton-Jacobi equations with respect to the parameter
Kaizhi Wang and Jun Yan
Let $M$ be a closed and smooth manifold and $H_\varepsilon:T^*M\to\mathbf{R}^1$ be a family of Tonelli Hamiltonians for $\varepsilon\geq0$ small. For each $\varphi\in C(M,\mathbf{R}^1)$, $T^\varepsilon_t\varphi(x)$ is the unique viscosity solution of the Cauchy problem \begin{align*} \left\{ \begin{array}{ll} d_tw+H_\varepsilon(x,d_xw)=0, & \ \mathrm{in}\ M\times(0,+\infty),\\ w|_{t=0}=\varphi, & \ \mathrm{on}\ M, \end{array} \right. \end{align*} where $T^\varepsilon_t$ is the Lax-Oleinik operator associated with $H_\varepsilon$. A result of Fathi asserts that the uniform limit, for $t\to+\infty$, of $T^\varepsilon_t\varphi+c_\varepsilon t$ exists and the limit $\bar{\varphi}_\varepsilon$ is a viscosity solution of the stationary Hamilton-Jacobi equation \begin{align*} H_\varepsilon(x,d_xu)=c_\varepsilon, \end{align*} where $c_\varepsilon$ is the unique $k$ for which the equation $H_\varepsilon(x,d_xu)=k$ admits viscosity solutions. In the present paper we discuss the continuous dependence of the viscosity solution $\bar{\varphi}_\varepsilon$ with respect to the parameter $\varepsilon$.
Kaizhi Wang, Jun Yan. Lipschitz dependence of viscosity solutions of Hamilton-Jacobi equations with respect to the parameter. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1649-1659. doi: 10.3934/dcds.2016.36.1649.
The regularity of sonic curves for the two-dimensional Riemann problems of the nonlinear wave system of Chaplygin gas
Qin Wang and Kyungwoo Song
We study the regularity of sonic curves from a two-dimensional Riemann problem for the nonlinear wave system of Chaplygin gas, which is an essential step for the global existence of solutions to the two-dimensional Riemann problems. As a result, we establish the global existence of uniformly smooth solutions in the semi-hyperbolic patches up to the sonic boundary, where the degeneracy of hyperbolicity occurs. Furthermore, we show the $C^1$-regularity of sonic curves.
Qin Wang, Kyungwoo Song. The regularity of sonic curves for the two-dimensional Riemann problems of the nonlinear wave system of Chaplygin gas. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1661-1675. doi: 10.3934/dcds.2016.36.1661.
On the persistence of lower-dimensional elliptic tori with prescribed frequencies in reversible systems
Xiaocai Wang, Junxiang Xu and Dongfeng Zhang
This work focuses on the persistence of lower-dimensional elliptic tori with prescribed frequencies in reversible systems. By KAM method and the special structure of unperturbed nonlinear terms, we prove that the invariant torus with given frequency persists under small perturbations. Our result is a generalization of [22].
Xiaocai Wang, Junxiang Xu, Dongfeng Zhang. On the persistence of lower-dimensional elliptic tori with prescribed frequencies in reversible systems. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1677-1692. doi: 10.3934/dcds.2016.36.1677.
Structurally stable homoclinic classes
Xiao Wen
In this paper we study structurally stable homoclinic classes. In a natural way, structural stability for an individual homoclinic class is defined through the continuation of periodic points. Since such classes are not necessarily locally maximal, it is hard to answer whether structurally stable homoclinic classes are hyperbolic. In this article, we make some progress on this question. We prove that if a homoclinic class is structurally stable, then it admits a dominated splitting. Moreover, we prove that codimension-one structurally stable classes are hyperbolic. Also, if the diffeomorphism is far away from homoclinic tangencies, then structurally stable homoclinic classes are hyperbolic.
Xiao Wen. Structurally stable homoclinic classes. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1693-1707. doi: 10.3934/dcds.2016.36.1693.
Global solutions of two coupled Maxwell systems in the temporal gauge
Jianjun Yuan
In this paper, we consider the Maxwell-Klein-Gordon and Maxwell-Chern-Simons-Higgs systems in the temporal gauge. By using the fact that when the spatial gauge potentials are in the Coulomb gauge, their $\dot{H}^1$ norms can be controlled by the energy of the corresponding system and their $L^2$ norms, and the gauge invariance of the systems, we show that finite energy solutions of these two systems exist globally in this gauge.
Jianjun Yuan. Global solutions of two coupled Maxwell systems in the temporal gauge. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1709-1719. doi: 10.3934/dcds.2016.36.1709.
A Liouville theorem for $\alpha$-harmonic functions in $\mathbb{R}^n_+$
Lizhi Zhang, Congming Li, Wenxiong Chen and Tingzhi Cheng
In this paper, we consider $\alpha$-harmonic functions in the half space $\mathbb{R}^n_+$: \begin{equation} \left\{\begin{array}{ll} (-\triangle)^{\alpha/2} u(x)=0,~u(x)\geq0, & \qquad x\in\mathbb{R}^n_+, \\ u(x)\equiv0, & \qquad x\notin\mathbb{R}^{n}_{+}. \end{array}\right. (1) \end{equation} We prove that all solutions of (1) are either identically zero or assuming the form \begin{equation} u(x)=\left\{\begin{array}{ll}Cx_n^{\alpha/2}, & \qquad x\in\mathbb{R}^n_+, \\ 0, & \qquad x\notin\mathbb{R}^{n}_{+}, \end{array}\right. \label{2} \end{equation} for some positive constant $C$.
Lizhi Zhang, Congming Li, Wenxiong Chen, Tingzhi Cheng. A Liouville theorem for $\alpha$-harmonic functions in $\mathbb{R}^n_+$. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1721-1736. doi: 10.3934/dcds.2016.36.1721.
On the boundedness and decay of solutions for a chemotaxis-haptotaxis system with nonlinear diffusion
Pan Zheng, Chunlai Mu and Xiaojun Song
This paper deals with a parabolic-parabolic-ODE chemotaxis haptotaxis system with nonlinear diffusion \begin{eqnarray*}\label{1a} \left\{ \begin{split}{} &u_{t}=\nabla\cdot(\varphi(u)\nabla u)-\chi\nabla\cdot(u\nabla v)-\xi\nabla\cdot(u\nabla w)+\mu u(1-u-w), \\ &v_{t}=\Delta v-v+u, \\ &w_{t}=-vw, \end{split} \right. \end{eqnarray*} under Neumann boundary conditions in a smooth bounded domain $\Omega\subset \mathbb{R}^{2}$, where $\chi$, $\xi$ and $\mu$ are positive parameters and $\varphi(u)$ is a nonlinear diffusion function. Firstly, under the case of non-degenerate diffusion, it is proved that the corresponding initial boundary value problem possesses a unique global classical solution that is uniformly bounded in $\Omega\times(0,\infty)$. Moreover, under the case of degenerate diffusion, we prove that the corresponding problem admits at least one nonnegative global bounded-in-time weak solution. Finally, under some additional conditions, we derive the temporal decay estimate of $w$.
Pan Zheng, Chunlai Mu, Xiaojun Song. On the boundedness and decay of solutions for a chemotaxis-haptotaxis system with nonlinear diffusion. Discrete & Continuous Dynamical Systems - A, 2016, 36(3): 1737-1757. doi: 10.3934/dcds.2016.36.1737.
Jianfeng Huang 1 and Haihua Liang 2
Department of Mathematics, Jinan University, Guangzhou 510632, P.R. China
School of Mathematics and Systems Science, Guangdong Polytechnic Normal University, Guangzhou 510665, P.R. China
* Corresponding author: Jianfeng Huang
Received January 2019; Revised January 2020; Published May 2020
Fund Project: The first author is supported by the NSF of China (No.11401255, No.11771101) and the China Scholarship Council (No.201606785007). The second author is supported by the NSF of China (No.11771101), the major research program of colleges and universities in Guangdong Province (No.2017KZDXM054), and the Science and Technology Program of Guangzhou, China (201805010001)
In this paper we consider the limit cycles of the planar system
$$ \frac{d}{dt}(x,y) = \boldsymbol X_n+\boldsymbol X_m, $$
where $ \boldsymbol X_n $ and $ \boldsymbol X_m $ are quasi-homogeneous vector fields of degree $ n $ and $ m $, respectively. We prove that, under a new hypothesis, the maximal number of limit cycles of the system is $ 1 $. We also show that our result can be applied to some systems for which the previous results are invalid. The proof is based on investigations of the Abel equation and the generalized-polar equation associated with the system, respectively. Usually these two kinds of equations need to be dealt with separately, and for both equations an efficient approach to estimating the number of periodic solutions is to construct suitable auxiliary functions. In the present paper we introduce a formula for the divergence which allows us to construct an auxiliary function for one equation from the auxiliary function of the other equation, and vice versa.
Keywords: Limit cycles, existence and uniqueness, polynomial differential systems, quasi-homogeneous nonlinearities.
Mathematics Subject Classification: Primary: 34C07; Secondary: 37C05; Tertiary: 35F60.
Citation: Jianfeng Huang, Haihua Liang. Limit cycles of planar system defined by the sum of two quasi-homogeneous vector fields. Discrete & Continuous Dynamical Systems - B, 2021, 26 (2) : 861-873. doi: 10.3934/dcdsb.2020145
Figure 1. The dotted curve represents the curve $ b_n(\theta)+b_m(\theta)r = 0 $ in Cartesian coordinates, which is the inner boundary of the region $ U $
The Annals of Statistics
Ann. Statist.
On the contraction properties of some high-dimensional quasi-posterior distributions
Yves A. Atchadé
We study the contraction properties of a quasi-posterior distribution $\check{\Pi}_{n,d}$ obtained by combining a quasi-likelihood function and a sparsity inducing prior distribution on $\mathbb{R}^{d}$, as both $n$ (the sample size), and $d$ (the dimension of the parameter) increase. We derive some general results that highlight a set of sufficient conditions under which $\check{\Pi}_{n,d}$ puts increasingly high probability on sparse subsets of $\mathbb{R}^{d}$, and contracts toward the true value of the parameter. We apply these results to the analysis of logistic regression models, and binary graphical models, in high-dimensional settings. For the logistic regression model, we show that for well-behaved design matrices, the posterior distribution contracts at the rate $O(\sqrt{s_{\star}\log(d)/n})$, where $s_{\star}$ is the number of nonzero components of the parameter. For the binary graphical model, under some regularity conditions, we show that a quasi-posterior analog of the neighborhood selection of [Ann. Statist. 34 (2006) 1436–1462] contracts in the Frobenius norm at the rate $O(\sqrt{(p+S)\log(p)/n})$, where $p$ is the number of nodes, and $S$ the number of edges of the true graph.
Ann. Statist., Volume 45, Number 5 (2017), 2248-2273.
Received: September 2015
Revised: September 2016
First available in Project Euclid: 31 October 2017
https://projecteuclid.org/euclid.aos/1509436834
doi:10.1214/16-AOS1526
Primary: 62F15 (Bayesian inference); 62Jxx (Linear inference, regression)
Keywords: Quasi-Bayesian inference; high-dimensional inference; Bayesian asymptotics; logistic regression models; discrete graphical models.
Atchadé, Yves A. On the contraction properties of some high-dimensional quasi-posterior distributions. Ann. Statist. 45 (2017), no. 5, 2248--2273. doi:10.1214/16-AOS1526. https://projecteuclid.org/euclid.aos/1509436834
[1] Alquier, P. and Lounici, K. (2011). PAC-Bayesian bounds for sparse regression estimation with exponential weights. Electron. J. Stat. 5 127–145.
[2] Arias-Castro, E. and Lounici, K. (2014). Estimation and variable selection with exponential weights. Electron. J. Stat. 8 328–354.
[3] Atchadé, Y. F. (2014). Estimation of high-dimensional partially-observed discrete Markov random fields. Electron. J. Stat. 8 2242–2263.
[4] Atchadé, Y. F. (2015). A Moreau-Yosida approximation scheme for high-dimensional posterior and quasi-posterior distributions. Available at arXiv:1505.07072.
[5] Atchadé, Y. F. (2015). A scalable quasi-Bayesian framework for Gaussian graphical models. Available at arXiv:1512.07934.
[6] Atchadé, Y. F. (2017). Supplement to "On the contraction properties of some high-dimensional quasi-posterior distributions." DOI:10.1214/16-AOS1526SUPP.
[7] Banerjee, O., El Ghaoui, L. and d'Aspremont, A. (2008). Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. J. Mach. Learn. Res. 9 485–516.
[8] Banerjee, S. and Ghosal, S. (2015). Bayesian structure learning in graphical models. J. Multivariate Anal. 136 147–162.
[9] Barber, R. F. and Drton, M. (2015). High-dimensional Ising model selection with Bayesian information criteria. Electron. J. Stat. 9 567–607.
[10] Baricz, A. (2008). Mills' ratio: Monotonicity patterns and functional inequalities. J. Math. Anal. Appl. 340 1362–1370.
[11] Besag, J. (1974). Spatial interaction and the statistical analysis of lattice systems. J. Roy. Statist. Soc. Ser. B 36 192–236.
[12] Castillo, I., Schmidt-Hieber, J. and van der Vaart, A. (2015). Bayesian linear regression with sparse priors. Ann. Statist. 43 1986–2018.
[13] Castillo, I. and van der Vaart, A. (2012). Needles and straw in a haystack: Posterior concentration for possibly sparse sequences. Ann. Statist. 40 2069–2101.
[14] Catoni, O. (2004). Statistical Learning Theory and Stochastic Optimization. Lecture Notes in Math. 1851. Springer, Berlin.
[15] Chernozhukov, V. and Hong, H. (2003). An MCMC approach to classical estimation. J. Econometrics 115 293–346.
[16] Dalalyan, A. S. and Tsybakov, A. B. (2007). Aggregation by exponential weighting and sharp oracle inequalities. In Learning Theory. Lecture Notes in Computer Science 4539 97–111. Springer, Berlin.
[17] Florens, J.-P. and Simoni, A. (2012). Nonparametric estimation of an instrumental regression: A quasi-Bayesian approach based on regularized posterior. J. Econometrics 170 458–475.
[18] Ghosh, J. K. and Ramamoorthi, R. V. (2003). Bayesian Nonparametrics. Springer Series in Statistics. Springer, New York.
[19] Höfling, H. and Tibshirani, R. (2009). Estimation of sparse binary pairwise Markov networks using pseudo-likelihoods. J. Mach. Learn. Res. 10 883–906.
[20] Kato, K. (2013). Quasi-Bayesian analysis of nonparametric instrumental variables models. Ann. Statist. 41 2359–2390.
[21] Kleijn, B. J. K. and van der Vaart, A. W. (2006). Misspecification in infinite-dimensional Bayesian statistics. Ann. Statist. 34 837–877.
[22] Li, C. and Jiang, W. (2014). Model selection for likelihood-free Bayesian methods based on moment conditions: Theory and numerical examples. Available at arXiv:1405.6693v1.
[23] Li, Y.-H., Scarlett, J., Ravikumar, P. and Cevher, V. (2014). Sparsistency of $\ell_{1}$-regularized M-estimators. Preprint. Available at arXiv:1410.7605v1.
[24] Liao, Y. and Jiang, W. (2011). Posterior consistency of nonparametric conditional moment restricted models. Ann. Statist. 39 3003–3031.
[25] Marin, J.-M., Pudlo, P., Robert, C. P. and Ryder, R. J. (2012). Approximate Bayesian computational methods. Stat. Comput. 22 1167–1180.
[26] McAllester, D. A. (1999). Some pac-Bayesian theorems. Mach. Learn. 37 355–363.
[27] Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the lasso. Ann. Statist. 34 1436–1462.
[28] Mitchell, T. J. and Beauchamp, J. J. (1988). Bayesian variable selection in linear regression. J. Amer. Statist. Assoc. 83 1023–1036.
[29] Negahban, S. N., Ravikumar, P., Wainwright, M. J. and Yu, B. (2012). A unified framework for high-dimensional analysis of $m$-estimators with decomposable regularizers. Statist. Sci. 27 538–557.
[30] Ravikumar, P., Wainwright, M. J. and Lafferty, J. D. (2010). High-dimensional Ising model selection using $\ell_{1}$-regularized logistic regression. Ann. Statist. 38 1287–1319.
[31] Schreck, A., Fort, G., Le Corff, S. and Moulines, E. (2013). A shrinkage-thresholding Metropolis adjusted Langevin algorithm for Bayesian variable selection. Available at arXiv:1312.5658.
[32] Sun, T. and Zhang, C.-H. (2013). Sparse matrix inversion with scaled lasso. J. Mach. Learn. Res. 14 3385–3418.
[33] Yang, W. and He, X. (2012). Bayesian empirical likelihood for quantile regression. Ann. Statist. 40 1102–1131.
[34] Zhang, T. (2006). From $\varepsilon $-entropy to KL-entropy: Analysis of minimum information complexity density estimation. Ann. Statist. 34 2180–2210.
Supplement to "On the contraction properties of some high-dimensional quasi-posterior distributions". The supplementary material contains the proof of Theorems 4, 9 and 10.
Digital Object Identifier: doi:10.1214/16-AOS1526SUPP
Structural equation modeling of immunotoxicity associated with exposure to perfluorinated alkylates
Ulla B. Mogensen1,
Philippe Grandjean2,3,
Carsten Heilmann4,
Flemming Nielsen3,
Pál Weihe5 &
Esben Budtz-Jørgensen1
Exposure to perfluorinated alkylate substances (PFASs) is associated with immune suppression in animal models, and serum concentrations of specific antibodies against certain childhood vaccines tend to decrease at higher exposures. We therefore investigated the immunotoxic impacts of the three major PFASs in a Faroese birth cohort.
A total of 464 children contributed blood samples collected at age 7 years. PFAS concentrations and concentrations of antibodies against diphtheria and tetanus were assessed in serum at age 7 years, and results were available from samples collected at age 5. In addition to standard regressions, structural equation models were generated to determine the association between three major PFASs measured at the two points in time and the two antibody concentrations.
Each of the three PFAS concentrations at age 7 years was individually associated with lower antibody concentrations; however, it was not possible to attribute causality to any single PFAS. Hence, the three 7-year concentrations were combined, and a 2-fold increase in this joint PFAS exposure was associated with a decrease of 54.4 % (95 % CI: 22.0 %, 73.3 %) in the antibody concentration. When both the age-5 and age-7 concentrations of the three major PFASs were considered, the estimated loss was slightly greater.
These analyses strengthen the evidence of human PFAS immunotoxicity at current exposure levels and demonstrate the usefulness of structural equation models in adjusting for imprecision in the exposure variables.
Perfluorinated alkylate substances (PFASs) are applied in water-, soil-, and stain-resistant coatings for clothing and other textiles, oil-resistant coatings for food wrapping materials, and other products. Human PFAS exposures may therefore originate from PFAS-containing products or from environmental dissemination, including house dust, ground water, and seafood [1, 2]. Although systematic toxicity testing has not been carried out, animal models suggest that immunotoxicity may be an important outcome of PFAS exposures at levels commonly encountered [3]. In the mouse, for example, exposure to perfluorooctane sulfonic acid (PFOS) caused a variety of immunotoxic consequences, including a decreased immunoglobulin response to a standard antigen challenge [4, 5]. These associations were reported at serum concentrations similar to, or somewhat higher than, those widely occurring in humans.
In human studies, childhood vaccination responses can be applied as feasible and clinically relevant outcomes, as the children have received the same antigen doses at similar ages [6]. Using this approach, a birth cohort established in the Faroe Islands showed strong negative correlations between serum PFAS concentrations at age 5 years and antibody concentrations before and after booster vaccination at age 5, and 2.5 years later [7]. However, the exposure assessment relied on a single serum sample obtained at age 5. Serial analyses of serum samples from former production workers after retirement suggested elimination half-lives of ~3 years for perfluorooctanoic acid (PFOA) and ~5 years for PFOS [8], and declines in serum-PFOA concentrations in an exposed community after elimination of the water contamination suggested a median elimination half-life of 2.3 years [9]. Although serum-PFAS concentrations in adults may be fairly stable over time, substantial age-dependent changes occur during childhood [10]. In addition, uncertainty prevails about the relevant exposure window in regard to possible adverse effects in children. Further, binding to serum albumin [11] and body mass index [12] may affect serum concentrations of these substances. Accordingly, the imprecision of serum concentrations as exposure indicators must be taken into account in the data analysis.
Serum-PFAS concentrations of the Faroese birth cohort at age 7 have now been determined, and possible confounders have been ascertained. We can therefore link the immunotoxic outcomes to prospective exposure data. As before [7], we focus on the three major PFASs, i.e., PFOA, PFOS, and perfluorohexanesulfonic acid (PFHxS). Because three substances were measured postnatally on two occasions and two different antibody concentrations are available as outcome variables, we complemented standard regression analysis with structural equation models. These models are powerful tools for simultaneously studying the associations of several correlated exposures with several outcomes while taking into account exposure uncertainty, missing data, and covariates [13, 14].
A cohort of 656 children was compiled from births at the National Hospital in Tórshavn in the Faroe Islands during 1997–2000 to explore childhood immune function and the impact on vaccination efficacy [7]. Faroese children receive vaccinations against diphtheria, tetanus, and other major antigens at ages 3 months, 5 months, and 12 months, with a booster at 5 years, as part of the government-supported health care system. All children received the same amount of vaccines and associated alum adjuvant from the same source, although additional vaccines (pertussis and polio) were added to the booster during the project period. Of the 464 children participating in the age-7 examination, 412 had previously undergone the 5-year testing in connection with the booster vaccination. Six children were excluded, as they had more recently received an additional booster vaccination. The study protocol was approved by the Faroese ethical review committee and by the institutional review board at the Harvard School of Public Health; written informed consent was obtained from all mothers.
The PFAS concentrations were measured in serum at age 5 (before the booster vaccination) and age 7. As albumin is likely a major binding protein [11], the serum-albumin concentration was analyzed in all age-7 serum samples to allow adjustment for this possible source of variability. PFAS concentrations were measured by online solid-phase extraction and analysis using high-pressure liquid chromatography with tandem mass spectrometry [7]. Within-batch and between-batch imprecision (assessed by the coefficient of variation) were better than 5.6 % for all analytes. Results with excellent accuracy were obtained in the regular comparisons organized by the German Society of Occupational Medicine.
Serum concentrations of antibodies were measured by the vaccine producer (Statens Serum Institut, Copenhagen, Denmark) using enzyme-linked immunosorbent assay for tetanus and, for diphtheria, a Vero cell-based neutralization assay using 2-fold dilutions of the serum. For both assays, calibration was performed using both international and local standard antitoxins.
Antibody concentrations and PFAS exposures were all log-transformed (base 2) before they entered the models. Initial analyses were based on separate multiple linear regressions with an antibody concentration as the dependent variable and a serum-PFAS concentration as a predictor along with age, sex, and booster type. The assumptions of linear dose–response associations were verified by allowing for a more flexible relation between the PFAS and the antibody concentration in generalized additive models using cubic regression splines with three knots [15].
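As a rough illustration of this analysis step, the following R sketch fits one such regression and its spline counterpart on synthetic data; the data frame `d` and all variable names are hypothetical placeholders, not the cohort data, and mgcv's basis dimension `k = 3` is used here to approximate the three-knot cubic regression spline.

```r
library(mgcv)

# toy data standing in for the cohort (all values synthetic)
set.seed(1)
d <- data.frame(anti_diphtheria = rlnorm(100),
                pfos_age7       = rlnorm(100, meanlog = log(15)),
                age             = rnorm(100, mean = 7.5, sd = 0.3),
                sex             = factor(sample(c("girl", "boy"), 100, replace = TRUE)),
                booster         = factor(sample(1:2, 100, replace = TRUE)))

# base-2 log transforms of outcome and exposure
d$log_anti_dip <- log2(d$anti_diphtheria)
d$log_pfos7    <- log2(d$pfos_age7)

# multiple linear regression adjusted for age, sex, and booster type
fit_lm <- lm(log_anti_dip ~ log_pfos7 + age + sex + booster, data = d)

# spline check of dose-response linearity
fit_gam <- gam(log_anti_dip ~ s(log_pfos7, bs = "cr", k = 3) + age + sex + booster, data = d)
```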
Structural equation models (SEMs) allow for a joint analysis of multiple exposures with several outcome variables [14, 16]. An SEM typically consists of a measurement model, in which the observed variables are linked to a limited number of latent variables, and a structural model describing the relationships between the latent variables, with possible adjustment for the effects of covariates. We considered models of increasing complexity, in which the measurement models included an increasing proportion of the available variables from the longitudinal multivariate exposure profile of the subjects.
Model 1 included both the 5-year and 7-year concentrations of the same PFAS, as shown in Fig. 1 for PFOA exposure and its association with the anti-diphtheria antibody concentration (other PFASs and associations with anti-tetanus were modeled similarly). We considered the two observed PFOA concentrations at ages 5 and 7 years, $(\log\mathrm{PFOA}_5, \log\mathrm{PFOA}_7)$, as proxy variables for the latent true long-term exposure level ($\log\mathrm{PFOA}$):
Structural equation model for the association between the latent PFAS concentration and the antibody concentration, adjusted for covariates (Additional file 1: Model 1). The model is shown for the relation between latent PFOA (circle), measured by the 5- and 7-year observed concentrations (left squares), and anti-diphtheria (right square). "Covariates" (middle square) are age, sex, and booster type, which predict the latent variable and the two antibody concentrations additively and linearly
$$ \log \mathrm{PFOA}_5 = \log\mathrm{PFOA} + \varepsilon_5 \qquad (1) $$
$$ \log \mathrm{PFOA}_7 = \alpha + \log\mathrm{PFOA} + \varepsilon_7 \qquad (2) $$
Thus, after the log-transformation, the observed concentration is the sum of the true value and random measurement error. The parameter $\alpha$ allows for a general change in the concentration level from age 5 to 7. We further assumed that the latent PFOA concentration affects the anti-diphtheria concentration linearly after adjustment for covariates (denoted $C_1,\ldots,C_k$):
$$ \log \mathrm{Anti}\text{-}\mathrm{diphtheria}_7 = \beta_0 + \beta_1 \cdot \log\mathrm{PFOA} + \beta_2 \cdot C_1 + \cdots + \beta_{k+1} \cdot C_k + \varepsilon. \qquad (3) $$
The regression coefficients have the same interpretations as in standard regression analyses, but this model includes adjustment for measurement error, as it is the underlying latent variable, and not the observed exposure, that is assumed to affect the outcome. In addition, we allowed for correlations between covariates and exposure variables by assuming that the covariates affected the latent variable (see Additional file 1, p. 3).
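A minimal sketch of how such a model could be specified in the lava package is shown below; all variable names (and the data frame `d`) are hypothetical placeholders assumed to hold the base-2 log-transformed concentrations and covariates, and lava's default identification (first factor loading fixed at 1) is used rather than the paper's parameterization with both loadings fixed at 1 and a level-shift intercept $\alpha$.

```r
library(lava)

m1 <- lvm()
latent(m1) <- ~ pfoa                                          # latent long-term PFOA exposure
regression(m1) <- c(log_pfoa5, log_pfoa7) ~ pfoa              # measurement part, cf. equations (1)-(2)
regression(m1) <- log_anti_dip ~ pfoa + age + sex + booster   # structural part, cf. equation (3)
regression(m1) <- pfoa ~ age + sex + booster                  # covariates may also predict the exposure

fit1 <- estimate(m1, data = d, missing = TRUE)                # FIML handles incomplete data
summary(fit1)
```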
The model was extended to allow for a possible dependence of the observed PFAS concentrations on the serum-albumin concentration. This was done by modifying equations (1) and (2) such that the observed PFOA concentrations depended on albumin in addition to the latent PFOA exposure level (see Additional file 1, p. 5). Note that because we modeled the influence of albumin on the observed concentrations, the model allows children with the same underlying exposure to have different measured PFOA concentrations. A similar approach was used for body mass index (body weight divided by the squared height).
To compare the effects of the different PFASs, we combined the individual PFAS models into a single SEM (Additional file 1: Model 2) that allowed antibody concentrations to depend on all three latent PFAS concentrations (see Additional file 1, p. 5–6).
We then considered models where the concentrations of the three major PFASs were viewed as reflections of a latent variable representing the overall true PFAS exposure concentration. We first developed a measurement model for the 7-year concentrations $(\log\mathrm{PFOA}_7, \log\mathrm{PFOS}_7, \log\mathrm{PFHxS}_7)$:
$$ \log \mathrm{PFOA}_7 = \alpha_1 + \lambda_1 \cdot \log\mathrm{PFAS} + \varepsilon_1 \qquad (5) $$
$$ \log \mathrm{PFOS}_7 = \alpha_2 + \lambda_2 \cdot \log\mathrm{PFAS} + \varepsilon_2 \qquad (6) $$
$$ \log \mathrm{PFHxS}_7 = \alpha_3 + \lambda_3 \cdot \log\mathrm{PFAS} + \varepsilon_3, \qquad (7) $$
where $\log\mathrm{PFAS}$ represents the joint latent PFAS concentration and $\varepsilon_1, \varepsilon_2, \varepsilon_3$ are measurement errors. The latent PFAS variable was then related to the antibody concentrations as in equation (3) (see Additional file 1: Figure S3). This model is similar to the 5-year PFAS exposure model previously used [7].
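The joint-exposure model can be sketched in lava along the same lines; again the variable names are hypothetical, and because lava fixes the first loading at 1, the remaining loadings $\lambda$ in equations (5)–(7) are estimated relative to PFOA.

```r
m3 <- lvm()
latent(m3) <- ~ pfas7                                           # joint latent PFAS exposure at age 7
regression(m3) <- c(log_pfoa7, log_pfos7, log_pfhxs7) ~ pfas7   # measurement part, cf. equations (5)-(7)
regression(m3) <- log_anti_dip ~ pfas7 + age + sex + booster    # outcome regression as in equation (3)
regression(m3) <- pfas7 ~ age + sex + booster

fit3 <- estimate(m3, data = d, missing = TRUE)
```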
To investigate whether the PFAS exposure at age 5 or at age 7 years had the stronger effect on antibody concentrations, we included a similar model for the latent 5-year PFAS exposure and allowed antibody concentrations to depend on the latent PFAS variables at both age 5 and age 7; Additional file 1: Model 4 (see Additional file 1, p. 9–10).
As a final model (Additional file 1: Model 5), we extended the joint latent PFAS exposure to reflect both sets of serum concentrations at ages 5 and 7 years, the latter entering the measurement model for the latent PFAS exposure through equations similar to equations (5)–(7), thus leading to a total of six equations (see full details in Additional file 1, p. 12). We allowed for the possibility that the three concentrations obtained at the same exposure age were correlated (locally dependent) even after conditioning on the latent variable (Fig. 2). This association was assumed to be equally strong for the two exposure assessments. Furthermore, we modeled a local dependence between the age-5 and age-7 concentrations of each PFAS. As in the previous models, the latent PFAS variable was then related to the antibody concentrations after adjustment for covariates.
Structural equation model for latent PFAS exposure (middle circle) manifested by concentrations of PFOA, PFOS, and PFHxS at years 5 and 7 (left squares); Additional file 1: Model 5. The dashed (double-headed) arrows indicate the local dependencies between the manifest variables. Local dependence between concentrations at the same age is modeled by the latent variables Year 5 and Year 7 (left circles). "Covariates" (middle square) are age, sex, and booster type, which predict the joint latent PFAS variable and the two antibody concentrations additively and linearly
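A sketch of the full model in lava might look as follows; the variable names are hypothetical, the within-PFAS local dependence is expressed here simply as residual covariances, and the same-age dependence (the Year 5 and Year 7 latent variables in Fig. 2) is omitted for brevity.

```r
m5 <- lvm()
latent(m5) <- ~ pfas                                      # joint latent PFAS exposure, ages 5 and 7
regression(m5) <- c(log_pfoa5, log_pfos5, log_pfhxs5,
                    log_pfoa7, log_pfos7, log_pfhxs7) ~ pfas

# local dependence between the age-5 and age-7 measurements of the same PFAS
covariance(m5) <- log_pfoa5  ~ log_pfoa7
covariance(m5) <- log_pfos5  ~ log_pfos7
covariance(m5) <- log_pfhxs5 ~ log_pfhxs7

# both antibody outcomes regressed on the latent exposure and the covariates
regression(m5) <- c(log_anti_dip, log_anti_tet) ~ pfas + age + sex + booster
regression(m5) <- pfas ~ age + sex + booster

fit5 <- estimate(m5, data = d, missing = TRUE)
```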
An important advantage of SEMs is the ability to analyze incomplete data through a full-information maximum likelihood (FIML) procedure [16]. We used this option in all the SEMs under the assumption that the data were missing at random [17]. The goodness-of-fit of the models was assessed with the χ2-test and the root mean squared error of approximation (RMSEA). The χ2-test compares the covariance matrix of the observed variables to the covariance matrix predicted from the model. The RMSEA measures the lack-of-fit per degree of freedom. In addition, we report the Bentler Comparative Fit Index (CFI) and the standardized root mean square residuals (SRMR) [16]. Models were considered to have a good fit if the χ2-test yielded a p-value above 5 %, the RMSEA was below 5 %, the CFI was above 0.95, and the SRMR was below 0.08 [16]. All analyses were performed using the statistical software R [18] with the lava package [19].
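Continuing the hypothetical sketches above, the fit assessment could proceed roughly as follows; which indices `gof()` reports may depend on the lava version, so the standard RMSEA formula is spelled out as a fallback.

```r
gof(fit5)   # overall goodness-of-fit summaries (log-likelihood, information criteria, ...)

# If the chi-square statistic and its degrees of freedom are available, RMSEA can
# be computed by hand (n = number of subjects):
rmsea <- function(chisq, df, n) sqrt(max(chisq - df, 0) / (df * (n - 1)))
```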
The characteristics of the children who provided serum for antibody measurements at age 7 years are shown in Table 1. PFOS was by far the most prevalent PFAS, with a median serum concentration at age 7 of 15.5 ng/mL (interquartile range (IQR): 12.9, 19.2 ng/mL), which represented a decrease from age 5 (Table 2). Notably, the correlation between the age 5 and age 7 concentrations of the same PFAS was stronger than the correlation between the different PFASs at each age.
Table 1 Characteristics of children that contributed with a 7-year antibody determination
Table 2 Pearson correlations for the 5- and 7 year PFAS concentrations. Shown are also the median (IQR) PFAS concentrations
Multiple linear regressions showed that higher 7-year PFAS concentrations were associated with lower antibody concentrations. This tendency was most pronounced for anti-diphtheria, which decreased by 30.3 % (95 % CI: 7.8 %, 47.3 %) and 25.4 % (95 % CI: 5.8 %, 40.9 %) for a doubling in the PFOS and PFOA concentration, respectively (Table 3). For the anti-tetanus antibody, PFHxS showed the strongest association, with a decrease of 22.3 % (95 % CI: 5.2 %, 36.3 %) for a doubling in the exposure. The dose–response relations obtained with generalized additive models verified approximate linearity (Fig. 3).
Table 3 The percentage change (% Change) in antibody concentration at age 7 years when the PFAS concentration is doubled. Results are from multiple linear regressions for PFAS concentrations at age 7 years (6 models shown in rows 1–2), structural equation models of latent individual PFASs measured by 5- and 7-year concentrations (6 models shown in rows 3–4), and one structural equation model with all three latent PFASs (1 model shown in rows 5–6). All models were adjusted for the covariates gender, age, and booster type
Associations between PFAS and antibody concentrations at age 7 years. Dose–response functions are modeled by generalized additive models with a cubic smoothing spline with 3 degrees of freedom, adjusted for age, sex, and booster vaccination type. The dashed lines indicate the 95 % confidence intervals. The spikes on the horizontal line indicate individual observations
In the linear regressions we adjusted for gender, age, and booster type. There were significant differences in antibody concentrations between genders: girls had between 23.2 % (95 % CI: 3.8 %, 43.6 %) and 27.2 % (95 % CI: 4.9 %, 44.2 %) lower concentrations. Children who received booster type 1 showed slightly higher antibody concentrations; however, this tendency was only borderline significant for anti-diphtheria.
In structural equation models of individual PFASs (Additional file 1: Model 1; Fig. 1), we focused on the relation of the latent PFAS childhood exposure with the antibody concentrations. Although the serum albumin concentration was positively associated with the observed PFAS measurements, albumin adjustment affected the association of the estimated exposure only to a negligible extent (Table 3 shows the unadjusted estimates, Additional file 1: Table S1 shows the albumin-adjusted estimates). Similarly, we also considered adjustment for BMI, but again with virtually no impact on the results. Consequently, these adjustments were not further considered.
The three latent PFAS variables showed a trend of inverse relationships with each of the antibody concentrations. The effects of PFOS and PFOA were strengthened in comparison to the regression coefficients based on individual 7-year concentrations only. In particular, the anti-tetanus concentration showed a decrease of 38.2 % (95 % CI: 13.0 %, 56.1 %) for a doubling in the average childhood PFOA concentration (Table 3, Additional file 1: Model 1). This decrease was nearly twice the magnitude of the decrease obtained in the linear regressions that ignore measurement error. When the three latent PFASs were mutually adjusted, the associations became less apparent, and only the anti-tetanus antibody showed a borderline significant decrease, by 29.6 % (95 % CI: −0.4 %, 50.6 %), at a doubling of PFOA exposure after this adjustment (Table 3, Additional file 1: Model 2). As expected, the precision of the coefficients decreased.
As it was not possible to attribute causality to individual PFAS compounds, we combined the data for all three substances at age 7 in a joint latent exposure variable. This total PFAS exposure variable showed the strongest association with anti-diphtheria, which was reduced by more than one half (57.5 %; 95 % CI: 21.2 %, 77.0 %) for a doubling of the PFAS concentration (Table 4, Additional file 1: Model 3). The anti-tetanus concentration similarly showed a decrease by 49.8 % (95 % CI: 2.7 %, 74.1 %), for a doubling in the PFAS concentration.
Table 4 Structural equation models for the association between antibodies and latent joint PFAS concentrations. Shown are models for latent PFAS at age 5 years (row 1), latent PFAS at age 7 years (row 2), and latent childhood PFAS (row 3). % Change indicates the percentage change in antibody concentration for a 2-fold increase in PFAS concentration. All models were adjusted for the covariates gender, age, and booster type
The PFAS associations with the two antibody concentrations were similar and could be assumed to be identical (P = 0.64). The estimated joint change in antibody concentration then showed a decrease by 54.4 % (95 % CI: 22.0 %, 73.3 %), per doubling of the latent age-7 PFAS exposure.
The impacts of the 5-year PFAS exposure were dramatically reduced when adjusted for the 7-year PFAS exposure (Additional file 1: Model 4, Additional file 1: Table S2). Likewise, the estimated decrease in the antibody concentration per doubling in the 7-year concentration was halved after adjustment for the 5-year concentration. At the same time, the precision of the coefficients decreased, with much wider confidence intervals. As the data did not allow separation of influences attributed to 5-year or concomitant exposure, we combined the 5-year and the 7-year PFAS concentrations into one joint childhood PFAS exposure (Additional file 1: Model 5; Fig. 2). In this model, the adverse impact of PFAS increased, and the precision of the estimates improved. For a doubling of the PFAS exposure, the antibody concentrations decreased by 55.5 % (95 % CI: 25.5 %, 73.4 %).
All analyses were repeated with adjustment also for the pre-booster antibody concentration at age 5 (see Additional file 1, p. 13–15). In this analysis, we modeled the PFAS association with the change in antibody level from age 5 (pre-booster) to age 7 years. For the multiple linear regressions, the precision of all estimates increased, and the regression coefficients of PFOA and PFOS on anti-tetanus and of PFOA and PFHxS on anti-diphtheria were strengthened (Additional file 1: Table S3). For the SEM of latent individual PFASs (Additional file 1: Model 1, Additional file 1: Figure S5), the precision decreased, and estimates were therefore less stable. In the 7-year joint PFAS exposure models (Additional file 1: Model 3), all adverse relations increased, while the adverse associations with the joint 5-year PFAS exposure decreased (Additional file 1: Model 3, Additional file 1: Table S4). In the final pre-booster adjusted model (Additional file 1: Model 5), the concentration of antibodies decreased by 51.8 % (95 % CI: 24.6 %, 68.5 %) for a doubling of the joint childhood PFAS exposure, i.e., a slightly weaker association as compared to the unadjusted model.
The SEMs applied generally showed a good fit to the data with χ2-test P-values above 5 % and RMSEA below 5 %. The only exception was the model with mutually adjusted latent PFASs (Additional file 1: Model 4, Additional file 1: Table S4) which did not satisfy the strict P-value criterion, but had acceptable values for the other fit-indices.
The focus of the present study was the association between PFAS exposure and the serum-antibody concentration at age 7. The new measurements of concomitant serum-PFAS concentrations at age 7 allowed us to examine the relation to the current PFAS exposures as well as the joint exposures at ages 5 and 7. These analyses showed a stronger inverse association than those obtained with the 5-year PFAS concentration only. However, when the 5-year and the 7-year joint PFAS exposures were mutually adjusted, their substantial correlation made it impossible to distinguish between the influences of each of the two sets of exposure data in regard to the antibody levels. Still, the overall childhood exposures, as reflected by both the age-5 and age-7 PFAS measurements showed the strongest inverse association between PFAS exposure and antibody concentrations.
The amount of antigen in the 5-year booster vaccination was the same for all children. Thus, the 7-year antibody concentrations could be meaningfully adjusted for the 5-year pre-booster antibody level to reveal effects that could be ascribed to more recent adverse impacts on antibody formation. In this analysis, the joint PFAS association with the anti-tetanus antibody decreased somewhat as compared to the unadjusted results, while the association with the diphtheria antibody remained almost unaffected. This finding suggests that PFASs primarily affect antibody production rather than preceding steps in immune responses to antigen stimulation. In vitro studies suggest that such effects are plausible [20].
Although the impacts of PFAS exposure appear substantial, some residual confounding could still be present. As PFASs in serum are mainly bound to albumin [11], the serum concentration of the former may be affected by the amount present of the latter. However, the association between PFAS exposure and antibody concentrations changed only negligibly after adjustment for the albumin concentration. Similarly, as BMI may also affect the serum concentrations [21], we considered this covariate as a possible confounder. Again, the adjusted results were almost unchanged. All reported results are therefore unadjusted for albumin and BMI.
In addition to the association with the joint PFAS exposure, we explored the individual influences of PFOS, PFOA, and PFHxS. The results did not reveal any clear tendencies, thus suggesting that none of the individual PFASs was the primary explanation of the antibody decrease. Still, PFOA showed a borderline statistically significant association after adjustment for the other PFASs. Most experimental studies have focused on PFOS and PFOA [3], and very little is known about the immunotoxicity of PFHxS. The mechanisms of action are unclear. While PFOS and PFOA in humans may only in part act via the same pathways [20], it is unknown if PFHxS shares mechanisms of action with the other PFASs. Thus, whether a joint analysis of PFAS exposures has a toxicological foundation is also uncertain. Nonetheless, the fact that the combined exposure parameter in the SEMs showed a stronger association with the antibody concentrations would seem to support some degree of additive effects. Further, the standard regression analyses suggest that each individual PFAS has immunotoxic impacts of its own.
Structural equation models proved to be very useful in the analysis of these data with multiple exposures measured at different time points and a multivariate outcome. A main advantage of this framework is that information from many variables can be combined into a joint analysis which potentially becomes more powerful than standard regression modeling. In addition, SEMs allow for measurement error in covariates, which is essential for unbiased effect estimation, especially in models that include correlated independent variables [22]. We considered a number of different models to describe the association between the longitudinal multivariate exposure profile and the antibody response at age 7 years. These analyses indicated that the causative dose is a long-term average where all three PFASs may contribute. Such an analysis would not be meaningful in standard regression models, as it would require that all exposure variables were included simultaneously as independent variables. The present study illustrates how to develop SEMs where observed exposures were assumed to be correlated with underlying causative latent variables. Some models grouped variables according to chemical type (PFOS, PFOA or PFHxS) while others combined information from the same year of exposure (5 or 7 years). We believe that all the models are relevant as they show different aspects of the relationship between PFAS exposure level and antibody concentration. The model selection process resulted in a model where all exposure variables were related to one causative latent variable. However, this does not mean that we consider the final model to be superior in general. Thus, had one of the underlying analyses clearly indicated that a single chemical concentration or only a single exposure age was important, it would not have been appropriate to develop a model combining all variables.
The increased power of SEMs comes from the fact that all collected variables are exploited but also from additional assumptions necessary for inclusion of all variables simultaneously. According to a recent critique, SEMs often make unrealistic assumptions about the relationships between the covariates [23]. This concern is not relevant to our analysis, which allowed a completely flexible model for the covariates. However, multiple exposures were included by postulating the existence of one or more sets of underlying latent variables. Although a reasonable assumption, the existence of such unobserved variables is difficult to prove, but statistical fit indices indicated that these models provided a good approximation to the data distribution. Our findings therefore suggest that carefully conducted SEM analyses with a minimal number of weak assumptions provide an important supplement to standard regression results, especially when involving imprecise assessment of multiple exposures.
In support of our findings of PFAS immunotoxicity, a recent study of 99 Norwegian children at age 3 years found that the maternal serum PFOA concentrations were associated with decreased vaccine responses in the children, especially toward rubella vaccine, as well as increased frequencies of common cold and gastroenteritis [24]. However, PFOS and PFOA concentrations in serum from 1400 pregnant women from the Danish National Birth Cohort were not associated with the total hospitalization rate for a variety of infectious diseases in 363 of the children up to an average age of 8 years [25]. In adults, PFOA exposure was associated with lower serum concentrations of total IgA and IgE (females only), though not total IgG [26]. Also, elevated serum-PFOA concentrations were associated with a reduced antibody titer rise after influenza vaccination [27]. Thus, overall, support is building that PFAS exposure may be associated with deficient immune functions, although the clinical implications need to be defined in detail.
Our study is limited to PFAS exposure assessments at two points during childhood at an interval of about 30 months. Given the elimination half-lives of the PFASs of 2–5 years [8, 9], these two measurements may not fully characterize the childhood exposure profile that is most relevant to humoral immunity functions. The addition of the age-7 serum concentrations clearly improved the association between PFAS exposure and the antibodies, thus suggesting that the imprecision of the exposure estimate had decreased. As exposures during childhood are likely to vary [1, 10], it is possible that serial serum-PFAS analyses would provide even stronger evidence for PFAS immunotoxicity. However, distinguishing between exposures at different ages and between the associations attributed to different PFASs will require greater variability and lower correlation between the individual exposure parameters.
The 5-year booster vaccination is in principle the last booster vaccination that a Faroese child receives, and long-term protection is therefore anticipated. Our previous results [7] showed that many children at age 7 had antibody concentrations below the level assumed to provide the desired protection. Thus, while the exact magnitude of the serum-antibody concentration may not be clinically important, very low levels will mean poor or absent protection. Our findings show that PFAS exposure may inhibit the formation of antibodies and cause more children to be unprotected despite a full regimen of vaccinations. While tetanus and diphtheria may not be a serious hazard in the Faroes and many other countries, the strongly decreased antibody concentrations reflect a severe immunological deficit, one that is much stronger than the one associated with PCB exposure [28]. As optimal immune system function is crucial for health, the associations identified should be regarded as adverse. We recently calculated benchmark dose levels to estimate the magnitude of exposure limits that would protect against the immunotoxicity observed [29]. The results suggested that current exposure limits may be more than 100-fold too high. The improved adjustment for imprecision of the exposure assessment in the present study adds support to the notion that substantially strengthened prevention of PFAS exposures is indicated.
The vulnerable window of time for the immunotoxic effects of PFASs is unknown, and exposure assessment must therefore take into account temporal changes in serum concentrations. Structural equation models provided a useful approach to statistical analysis, as they take into account that each PFAS measurement is likely to be an imprecise indicator of the causative exposure. These models also allow for a multivariate response and incorporate incomplete data. By incorporation of serum concentrations at two points in time, the results confirmed and extended findings in standard regression models where antibody concentrations decreased as a function of PFAS concentrations. This tendency was strengthened in the structural equation models that included both sets of serum concentrations. These analyses add further evidence to the notion that immunotoxicity may occur in humans at the current exposure levels.
Lindstrom AB, Strynar MJ, Libelo EL. Polyfluorinated compounds: past, present, and future. Environ Sci Technol. 2011;45(19):7954–61.
Weihe P, Kato K, Calafat AM, Nielsen F, Wanigatunga AA, Needham LL, et al. Serum concentrations of polyfluoroalkyl compounds in Faroese whale meat consumers. Environ Sci Technol. 2008;42(16):6291–5.
DeWitt JC, Peden-Adams MM, Keller JM, Germolec DR. Immunotoxicity of perfluorinated compounds: recent developments. Toxicol Pathol. 2012;40(2):300–11.
Peden-Adams MM, Keller JM, Eudaly JG, Berger J, Gilkeson GS, Keil DE. Suppression of humoral immunity in mice following exposure to perfluorooctane sulfonate. Toxicol Sci. 2008;104(1):144–54.
Fair PA, Driscoll E, Mollenhauer MA, Bradshaw SG, Yun SH, Kannan K, et al. Effects of environmentally-relevant levels of perfluorooctane sulfonate on clinical parameters and immunological functions in B6C3F1 mice. J Immunotoxicol. 2011;8(1):17–29.
Dietert RR. Developmental immunotoxicology (DIT): windows of vulnerability, immune dysfunction and safety assessment. J Immunotoxicol. 2008;5(4):401–12.
Grandjean P, Andersen EW, Budtz-Jorgensen E, Nielsen F, Molbak K, Weihe P, et al. Serum vaccine antibody concentrations in children exposed to perfluorinated compounds. JAMA. 2012;307(4):391–7.
Olsen GW, Burris JM, Ehresman DJ, Froehlich JW, Seacat AM, Butenhoff JL, et al. Half-life of serum elimination of perfluorooctanesulfonate, perfluorohexanesulfonate, and perfluorooctanoate in retired fluorochemical production workers. Environ Health Perspect. 2007;115(9):1298–305.
Bartell SM, Calafat AM, Lyu C, Kato K, Ryan PB, Steenland K. Rate of decline in serum PFOA concentrations after granular activated carbon filtration at two public water systems in Ohio and West Virginia. Environ Health Perspect. 2010;118(2):222–8.
Kato K, Calafat AM, Wong LY, Wanigatunga AA, Caudill SP, Needham LL. Polyfluoroalkyl compounds in pooled sera from children participating in the National Health and Nutrition Examination Survey 2001–2002. Environ Sci Technol. 2009;43(7):2641–7.
Luo Z, Shi X, Hu Q, Zhao B, Huang M. Structural evidence of perfluorooctane sulfonate transport by human serum albumin. Chem Res Toxicol. 2012;25(5):990–2.
Kim DH, Lee MY, Oh JE. Perfluorinated compounds in serum and urine samples from children aged 5–13 years in South Korea. Environ Pollut. 2014;192:171–8.
Budtz-Jorgensen E, Keiding N, Grandjean P, Weihe P. Estimation of health effects of prenatal methylmercury exposure using structural equation models. Environ Health. 2002;1(1):2.
Sanchez BN, Budtz-Jørgensen E, Ryan L, Hu H. Structural equation models: a review with applications to environmental epidemiology. J Am Stat Assoc. 2005;100(472):1443–55.
Hastie TJ, Tibshirani RJ. Generalized additive models. In monographs on statistics and applied probability 43. Boca Raton, FL: Chapman and Hall/CRC Press; 1990.
Kline RB. Principles and practice of structural equation modeling. 3rd ed. New York: Guilford Press; 2011.
Little RJA, Rubin DB. Statistical analysis with missing data. 2nd ed. Hoboken: Wiley; 2002.
R Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2012.
Holst KK, Budtz-Jorgensen E. Linear latent variable models: the lava-package. Comput Stat. 2013;28:1385–452.
Corsini E, Sangiovanni E, Avogadro A, Galbiati V, Viviani B, Marinovich M, et al. In vitro characterization of the immunotoxic potential of several perfluorinated compounds (PFCs). Toxicol Appl Pharmacol. 2012;258(2):248–55.
Eliakim A, Schwindt C, Zaldivar F, Casali P, Cooper DM. Reduced tetanus antibody titers in overweight children. Autoimmunity. 2006;39(2):137–41.
Carroll R, Ruppert D, Stefanski LA, Crainiceanu CM. Measurement error in nonlinear models: a modern perspective. 2nd ed. Boca Raton, FL: Chapman and Hall/CRC Press; 2006.
VanderWeele TJ. Invited commentary: structural equation models and epidemiologic analysis. Am J Epidemiol. 2012;176(7):608–12.
Granum B, Haug LS, Namork E, Stolevik SB, Thomsen C, Aaberge IS, et al. Pre-natal exposure to perfluoroalkyl substances may be associated with altered vaccine antibody levels and immune-related health outcomes in early childhood. J Immunotoxicol. 2013;10(4):373–9.
Fei C, McLaughlin JK, Lipworth L, Olsen J. Prenatal exposure to PFOA and PFOS and risk of hospitalization for infectious diseases in early childhood. Environ Res. 2010;110(8):773–7.
C8 Science Panel. Status Report: PFOA and immune biomarkers in adults exposed to PFOA in drinking water in the mid Ohio valley. March 16. C8 Science Panel (Tony Fletcher, Kyle Steenland, David Savitz). http://www.c8sciencepanel.org/study_results.html. Accessed 13 Jun 2013.
Looker C, Luster MI, Calafat AM, Johnson VJ, Burleson GR, Burleson FG, et al. Influenza vaccine response in adults exposed to perfluorooctanoate and perfluorooctanesulfonate. Toxicol Sci. 2014;138(1):76–88.
Heilmann C, Budtz-Jorgensen E, Nielsen F, Heinzow B, Weihe P, Grandjean P. Serum concentrations of antibodies against vaccine toxoids in children exposed perinatally to immunotoxicants. Environ Health Perspect. 2010;118(10):1434–8.
Grandjean P, Budtz-Jorgensen E. Immunotoxicity of perfluorinated alkylates: calculation of benchmark doses based on serum concentrations in children. Environ Health. 2013;12:35.
This study was supported by the National Institute of Environmental Health Sciences, NIH (ES012199); the U.S. Environmental Protection Agency (R830758); the Danish Council for Strategic Research (09–063094); and the Danish Environmental Protection Agency as part of the environmental support program DANCEA (Danish Cooperation for Environment in the Arctic). The authors are solely responsible for all results and conclusions, which do not necessarily reflect the position of any of the funding agencies.
Department of Biostatistics, University of Copenhagen, Copenhagen, Denmark
Ulla B. Mogensen & Esben Budtz-Jørgensen
Department of Environmental Health, Harvard School of Public Health, 401 Park Drive, 3E110, Boston, MA, 02215, USA
Philippe Grandjean
Department of Environmental Medicine, University of Southern Denmark, Odense, Denmark
Philippe Grandjean & Flemming Nielsen
Pediatric Clinic, Rigshospitalet - National University Hospital, Copenhagen, Denmark
Carsten Heilmann
Department of Occupational Medicine and Public Health, Faroese Hospital System, Torshavn, Faroe Islands
Pál Weihe
Ulla B. Mogensen
Esben Budtz-Jørgensen
Correspondence to Philippe Grandjean.
PG is editor-in-chief of this journal, but was not involved in the editorial handling of this manuscript. The authors declare that they have no competing interests.
The study was planned by PG, CH, and PW. Statistical analysis was carried out by EBJ and UBM. All authors participated in the interpretation of data. PG, EBJ, and UBM drafted the manuscript, and all authors contributed critical revision and approved the final version.
Additional file
Structural equation modeling of immunotoxicity with exposure to perfluorinated alkylates.
Mogensen, U.B., Grandjean, P., Heilmann, C. et al. Structural equation modeling of immunotoxicity associated with exposure to perfluorinated alkylates. Environ Health 14, 47 (2015). https://doi.org/10.1186/s12940-015-0032-9
Accepted: 14 May 2015
Keywords: Structural equation model; Antibody concentration; Standardized root mean square residual; Booster vaccination; PFOA concentration
Zero-Knowledge Proofs for Set Membership
In The Art of Zero Knowledge
By Dario Fiore
In this post, I will attempt to present the problem of proving set membership in zero-knowledge — proving that an element is part of a large public set without disclosing which element — while discussing the main existing solutions, with their pros and cons, and a recent paper on this topic.
The post is intended for researchers and practitioners interested in the topic of zero-knowledge, and it tries to convey high-level explanations while occasionally diving into the mathematical details of the solutions. For the latter, I assume familiarity with basic computer science concepts like binary trees and computational complexity, as well as with some discrete maths like the notions of finite rings and groups.
Set Membership
Set membership is the problem of checking whether an element $x$ belongs to some, possibly public, set $S$. It arises in a large variety of contexts, mostly in applications that have large lists of data, where checking if an element is in the list can be very costly, or in those applications that require some form of privacy assurance either on the set, $S$, or on the element, $x$. One example is with financial regulation, where a bank must prove to the regulator that a new client is a citizen of the given country. In order to do this, the bank will prove that the new client belongs to the set of all citizens of that country. In this setting, the list itself may be public (at least to the banks and regulators), but the bank may not want to reveal the specific client it is checking membership for. This implies that the bank would like to use some zero-knowledge version of the system, to prove to the regulator the set-membership without revealing the specific query.
Another example is where a company wants to prove to their investors that they belong to the list of certified ecological companies. In this case, however, there is seemingly no need for privacy. Indeed, there are plenty of other examples where set-membership appears in business and governmental-level applications.
More recently, this problem has also emerged in the context of blockchain, mainly in cryptocurrency designs. In Bitcoin for example, the blockchain is supposed to maintain the so-called set of "unspent transaction outputs" (UTXO, in short). In a nutshell, the UTXO set contains all the coins that have not already been spent and therefore are eligible to be spent in a new transaction. In this scenario, validating a transaction that wants to spend (or consume) a coin $x$ involves checking that $x$ is in the UTXO set. The same setting arises in the Zcash cryptocurrency, but in this case, the element $x$ must actually be hidden from the blockchain, making the set-membership proof zero-knowledge.
When looking at applications of set membership there are at least two intriguing questions that arise:
Scalability: Is it possible to check that an element $x$ is part of a set $S$ without having to store $S$, and in time significantly smaller than its size $|S|$?
Privacy: Is it possible to check that $x$ is in $S$ for an unknown element $x$?
If we leave them as they are, the questions above look impossible to solve, but if we make a twist to the problem we can make the impossible possible. The twist is: let us assume an asymmetry of roles. There is one party, the prover, for which we are not concerned about achieving scalability and privacy. Namely, the prover knows $x$ and $S$ and can afford computational and storage costs proportional to $|S|$. Everybody else is instead a verifier, who can invest only small computing and storage resources, and against whom we want to achieve privacy (namely, they should not learn $x$).
Adopting this asymmetric setting, the idea to solve the problem is to let the prover compute and send a short proof that $x \in S$ to the verifiers, who can then check the proof in a short time.
In what follows I will first review some solutions that solve scalability, and then I will discuss techniques that enable extensions that achieve also privacy.
Verifying Set Membership, Efficiently
Solutions to this problem date back to 1980 and 1994, when Merkle trees (by Ralph Merkle) and cryptographic accumulators, respectively, were proposed.
In its basic form, an accumulator can be seen as a triple of algorithms $(\textsf{Acc}, \textsf{Prove}, \textsf{Verify})$ with the following functionality:
$A \leftarrow \textsf{Acc}(S)$ compresses a set of values $S$ into a short value $A$, the accumulator.
$\pi_{x} \leftarrow \textsf{Prove}(S, x)$ generates a membership proof $\pi_x$ about $x \in S$.
$\textsf{Verify}(A, x, \pi_{x})$ accepts or rejects the proof by using only knowledge of the accumulated set $A$.
To assure verifiers against malicious provers, accumulators come with the guarantee that creating false proofs (i.e., a proof $\pi^{*}$ that is accepted by $\textsf{Verify}(\textsf{Acc}(S), x, \pi^{*})$ for an $x \notin S$) is impossible under certain computational assumptions.
If this is the functionality provided by accumulators, we are then left to discuss the main question of this section: why do accumulators enable efficient verification of set membership?
This is achieved thanks to a key property: the size of the accumulator value $A$, the size of the proofs $\pi_{x}$, and the running time of $\textsf{Verify}$ are much smaller than the length of $S$; for example, they can be logarithmic, $O(\log |S|)$, or even constant. (Note: in the literature, the term "accumulator" typically refers to schemes with constant-size proofs and verification.)
One detail that I kept hidden from the description above is that all algorithms actually take an additional input, which is a public set of parameters that are generated in a probabilistic and trusted way.
Nowadays, we know several realizations of accumulators. Nevertheless, here I want to mention two popular realizations: Merkle trees and RSA Accumulators. Interestingly enough, they are not only the oldest proposals but also the only ones that by now achieve all desired properties by using constant-size public parameters.
At this point it is worth mentioning that they offer different efficiency tradeoffs: proof size and verification time are $O(\log |S|)$ in Merkle trees, and $O(1)$ in RSA accumulators.
Let us briefly review how these two accumulators constructions work.
Merkle Trees
With a Merkle tree one can accumulate sets made of arbitrary elements using a collision-resistant hash function. We refer to this resource or this one for a more detailed explanation, but the basic idea is the following.
The public parameters simply consist of a hash function (e.g., SHA-256).
In order to accumulate a set $S = \{x_1, \ldots, x_n\}$ (let us assume for simplicity that $n$ is a power of $2$), one builds a binary tree in which $ \{x_1, \ldots, x_n\}$ are the leaves, and every internal node is the hash of its two children. The accumulator value $A$ is then the value at the root of such tree.
To create a proof $\pi_j$ that a certain $x_j \in S$, one returns all the sibling nodes that are in the path from the leaf $x_j$ until the root.
From the binary tree structure one then gets that these proofs consist of $\log(n)$ strings of $\ell$ bits each (where $\ell$ is the hash's output bit length), and can be verified by using the siblings to recompute the nodes in the path and then checking whether the final result equals the root $A$.
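To make this concrete, here is a minimal Python sketch of a Merkle-tree accumulator built on SHA-256, assuming (as above) that the number of leaves is a power of two; the function names and data layout are illustrative and not taken from any particular library.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def accumulate(leaves):
    """Build the tree bottom-up; returns all levels (level 0 = hashed leaves)."""
    level = [H(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels  # levels[-1][0] is the accumulator (root) A

def prove(levels, j):
    """Membership proof for leaf j: the sibling on each level of the leaf-to-root path."""
    path = []
    for level in levels[:-1]:
        path.append((j & 1, level[j ^ 1]))  # (am I the right child?, sibling hash)
        j //= 2
    return path

def verify(root, leaf, path):
    node = H(leaf)
    for is_right, sibling in path:
        node = H(sibling + node) if is_right else H(node + sibling)
    return node == root

# Accumulate four elements and verify membership of the third one.
S = [b"alice", b"bob", b"carol", b"dave"]
levels = accumulate(S)
A = levels[-1][0]
assert verify(A, b"carol", prove(levels, 2))
```

The proof returned by `prove` has exactly $\log_2(n)$ entries, matching the $O(\log |S|)$ proof size and verification time mentioned above.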
Merkle trees are a very powerful construct, with countless applications and a lot of nice properties. They are actually more powerful than accumulators; they realize a so-called vector commitment, since the position of each leaf in the tree is also encoded in a binding way, in the sense that it is computationally impossible to claim two distinct values at the same position.
RSA Accumulators
With RSA Accumulators one can accumulate sets made of prime numbers. The main ingredient of these constructions is a group of unknown order, of which RSA groups (hence the name) and class groups are candidate realizations. Let us show how this works in RSA groups.
The public parameters consist of an RSA modulus $N=p \cdot q$ product of two large prime numbers $p$ and $q$, and of a random generator $G \in \mathbb{Z}_{N}^{*}$. Here, $\mathbb{Z}_{N}$ denotes the ring of non-negative integers modulo $N$—the set $\{0, 1, \ldots, N-1\}$—while $\mathbb{Z}^{*}_{N}$ is the subset of elements in $\mathbb{Z}_{N}$ that are also coprime with $N$, which form a group under multiplication.
In order to accumulate a set $S$ of prime numbers $\{e_1,\ldots e_{n}\}$, one computes
$$A \leftarrow G^{\prod_{i=1}^{n} e_i} \bmod N.$$
To create a proof that a certain $e_j \in S$, one computes
$$\pi_j \leftarrow G^{\prod_{i=1,i\neq j}^{n} e_i} \bmod N = A^{1/e_j} \bmod N$$
which can be verified by checking if
$$\pi_j^{e_j} = A \bmod N$$
Overall, RSA accumulators are a quite simple and elegant construction. While the basic construction above has the limitation that elements must be prime numbers, this can be solved by using appropriately constructed hash functions that map arbitrary strings into primes. Compared to Merkle trees, they have the appealing property that everything is constant-size: both $A$ and the proof $\pi_j$ consist of one group element each, and verification requires one modular exponentiation only.
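Since the construction is so compact, it is easy to sketch in code. The toy Python snippet below uses a tiny, insecure modulus purely for illustration; a real deployment would use a large RSA modulus whose factorization is unknown (or a class group), and a hash-to-prime mapping to handle arbitrary elements.

```python
from math import prod

# Toy public parameters (insecure, for illustration only): in practice N is a
# 2048-bit-or-larger modulus generated so that nobody knows its factorization.
p, q = 1009, 1013
N = p * q
G = 2  # assumed to act as a random generator in Z_N^*

def acc(S):
    """A <- G^(product of the primes in S) mod N."""
    return pow(G, prod(S), N)

def prove(S, e_j):
    """Membership witness: the accumulator of S without e_j, i.e. A^(1/e_j)."""
    return pow(G, prod(e for e in S if e != e_j), N)

def verify(A, e_j, pi):
    """Accept iff pi^(e_j) == A (mod N)."""
    return pow(pi, e_j, N) == A

# Accumulate a small set of primes and verify membership of 13.
S = {3, 5, 7, 11, 13}
A = acc(S)
pi = prove(S, 13)
assert verify(A, 13, pi)
```

Note that both the accumulator `A` and the witness `pi` are single group elements, and verification is one modular exponentiation, in line with the constant-size properties discussed above.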
On top of this, they enjoy further properties. For example, it is possible to efficiently add elements to the accumulator value (i.e. it is dynamic), to create proofs of non-membership (it is universal), and to create short proofs for membership of many elements. See for example this recent work by Boneh, Bunz and Fisch to read about these and more properties.
Verifying Set Membership, Efficiently and Privately
We have seen how accumulators can provide a solution to the problem of proving and verifying set membership in an efficient manner.
Let us now move to the question of how to make these proofs also privacy-preserving, namely how to prove that $x \in S$ without revealing $x$.
Actually, in most applications, the privacy requirement about $x$ typically goes together with the need of proving more than just membership. One may wish to prove that a property $P(x)$ holds for some element $x \in S$, without revealing exactly for which element.
In other (more cryptographic) words, one is interested in making a zero-knowledge proof (ZKP) for the NP relation $R(S, x) := x\in S \wedge P(x)$, where $S$ is public and $x$ is secret.
A ZKP for $R$, however, is not necessarily "efficient" in the sense discussed earlier, since the verifier would have to read $S$, and the proof itself may not be succinct.
To make the proof also short and efficient to verify, a solution is to mix ZKPs with accumulators, that is, to build a ZKP for the following relation
$$R(A, (x, \pi_x)) := \textsf{Verify}(A, x, \pi_{x}) \wedge P(x)$$
that, in words, proves existence of an element $x$ for which $P$ holds and for which there is a valid accumulator proof relative to $A$, which in turn implies membership in the set accumulated in $A$.
This blueprint can be applied to both Merkle trees and RSA accumulators.
In the rest of this post, we review the main existing solutions that follow these two approaches, and then we conclude by mentioning a recent work.
ZKPs for Set Membership via Merkle Trees
ZKPs for set membership via Merkle trees are a straightforward application of the ZKP & Accumulators mixing approach mentioned above.
Notably, this idea has been implemented in the Zcash protocol, which used the general-purpose power of zkSNARKs in order to prove in zero-knowledge existence of a valid Merkle path (in addition to other properties modeling the validity of a transaction). This in turn translates into proving correctness of about $\log |S|$ hash computations on secret inputs. Encoding hash computations in the zkSNARK is what makes this approach particularly expensive for the prover. Zcash's Sapling provided significant speedups of this approach via an ingenious choice of a pairing-friendly elliptic curve and the Pedersen hash function. Nevertheless, proving set membership is by now the most expensive part of proof generation in Zcash.
ZKPs for Set Membership via RSA Accumulators
For the case of RSA Accumulators, a notable work is that of Camenisch and Lysyanskaya who designed a ZKP protocol for the knowledge of an integer $e$ that is in the accumulator and that is committed in a group $\mathbb{G}$ of prime order $q$. More technically, given an accumulator $A$ and a Pedersen commitment $C_{e}\in \mathbb{G}$ as public values, they show how to prove knowledge of $(e, W, r)$ such that
$$A = W^{e} \bmod N \wedge C_e = g^{e} h^{r} \in \mathbb{G}$$
Having the commitment $C_{e}$ comes in handy if the final goal is to prove some property $P$ about the set element $e$: one simply creates another commit-and-prove ZKP for proving knowledge of $(e, r)$ such that $C_e = g^{e} h^{r} \in \mathbb{G}$ and $P(e)$ holds.
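As a quick illustration of the Pedersen commitment that glues the two proofs together, here is a toy Python sketch over a small prime-order subgroup; the parameters are purely illustrative, and real instantiations use a group of roughly 256-bit order (typically an elliptic curve) in which the discrete logarithm of $h$ with respect to $g$ is unknown.

```python
import secrets

# Toy group: the order-q subgroup of Z_p^* with p = 2q + 1 (a safe prime).
q = 11
p = 23
g = 4  # generator of the order-q subgroup
h = 9  # second generator; its discrete log base g is assumed unknown

def commit(e, r):
    """Pedersen commitment C = g^e * h^r mod p: hiding thanks to r, binding under DLOG."""
    return (pow(g, e, p) * pow(h, r, p)) % p

def open_check(C, e, r):
    return C == commit(e, r)

# Commit to the set element e without revealing it; open (or prove statements about it) later.
e = 7
r = secrets.randbelow(q)
C = commit(e, r)
assert open_check(C, e, r)
```

Two different commit-and-prove systems can then each take $C$ as public input and prove their own statement about the committed $e$, which is exactly the kind of composition used above.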
The Camenisch-Lysyanskaya protocol is potentially more efficient than the one based on Merkle trees as it does not require expensive general-purpose ZKPs that encode hash computations. Nevertheless, its use can be cumbersome due to some technical details. Most notably, the accumulated set elements must be prime numbers of a specific size (which also imposes a minimum size for the prime-order group), and the hash-to-prime trick cannot be used straightforwardly to mitigate this issue.
ZKPs for Set Membership: Efficient, Succinct, Modular
In the last part of this post, I would like to mention a recent work [BCFK19] that investigates modular and efficient constructions of ZKPs for set membership.
Modularity in ZKP Design
One is often confronted with creating ZKPs for "composite statements", e.g., statements of the form "I know a value $x$ that belongs to a set $S$ and for which properties $P_1(x)$ and $P_2(x)$ hold". In such a case, it can be convenient to create a proof by using three different proof systems, one for each subtask, e.g., $\Pi_{mem}$ for set membership, $\Pi_1$ for $P_1$, $\Pi_2$ for $P_2$.
This modular approach can be beneficial in multiple ways: one can focus on designing efficiency-optimized ZKPs for specific tasks, the same scheme can be re-used and replaced in a plug-and-play fashion, and the overall design remains simple.
Technically, this approach can be realized by using commit-and-prove ZKP systems.
In a nutshell, a proof system $\Pi$ for a property $P$ is commit-and-prove if it can prove statements of the form "I know $x$ such that $P(x)$ holds and $C_{x}$ is a commitment to $x$". Since commitments are binding, generating two such proofs with respect to the same commitment immediately implies an AND composition of the two proven statements. Essentially, commitments act as a "secure glue" between different proof systems.
While commit-and-prove has been known and used extensively in cryptographic constructions, the recent LegoSNARK paper studied this paradigm in the context of succinct ZKPs, aka zkSNARKs.
Modularity in ZKPs for Set Membership
The [BCFK19] paper starts from the observation that an accumulator value can be seen as a commitment to a set, and that the whole accumulator primitive can be seen as a succinct commit-and-prove system for set membership relations. In this sense, accumulators with ZK proofs of knowledge (i.e., that can prove "I know a valid accumulator proof for an $x$ in the commitment") can be seen as commit-and-prove zkSNARK for set membership relations involving two types of commitments, a commitment to a set—the accumulator—and a commitment to an element.
So, one theoretical contribution of the paper is to extend the model of commit-and-prove zkSNARKs (and their composability properties) to the setting of typed-commitments, namely commitments to messages of different types (e.g., strings and sets over strings).
More Flexible and Modular ZKPs for RSA Accumulators
On the more practical side, [BCFK19] proposes new commit-and-prove zkSNARKs for set-membership, notably two solutions based on RSA accumulators that can be combined modularly and efficiently with other popular ZKP systems, such as Bulletproofs or Groth16. The latter feature is useful since it allows one to prove statements of the form "I know a value $x$ that belongs to a set $S$ and for which property $P(x)$ holds", by using this specialized scheme for the set-membership part and a general-purpose one for any other property about $x$.
Going more into the detail, [BCFK19] proposes two solutions based on RSA.
Both of them provide a functionality similar to the ZKP of Camenisch and Lysyanskaya mentioned earlier, in the sense that the element $x$, the object of the membership proof, is committed in a Pedersen commitment $C_x$ over a group $\mathbb{G}$ of prime order $q$.
One key difference that helps efficient modularity however is that in [BCFK19] $\mathbb{G}$ can be of standard size (e.g., 256 bits for 128 bits of security). This, we recall, means that other commit-and-prove systems that want to use $C_x$ do not need to suffer efficiency slowdowns due to inflated sizes of $\mathbb{G}$.
In terms of supported sets, both solutions in [BCFK19] allow more flexible choices of sets than the Camenisch-Lysyanskaya protocol: the first scheme supports the accumulation of sets whose elements are arbitrary binary strings, while in the second scheme elements are prime numbers of exactly $\mu$ bits (for various flexible choices of $\mu$).
Deep Diving into Linking Pedersen Commitments in Different Groups
Let us go even deeper and see how [BCFK19] achieves these improvements. In a nutshell, this is due to a new way to link a proof of membership for RSA accumulators to a Pedersen commitment in a prime order group, together with a careful analysis showing how this can be secure under parameters not requiring a larger prime order group.
For this post we only summarize the main idea for the scheme supporting sets of primes; we refer the interested readers to the paper for further details.
Let us recall the setting. The statement known to the verifier consists of an accumulator $A = G^{\prod_{i=1}^{n} e_i} \bmod N$, which is a commitment to a set $S = \{e_1, \ldots, e_n\}$, and of a Pedersen commitment $C_{e}$ in a group $\mathbb{G}$ of prime order $q$.
The goal of the prover is to argue knowledge of $(e, r)$ that open $C_e$, i.e., $C_{e} = g^{e} h^{r}$, and such that $e \in S$ where $A = \textsf{Acc}(S)$. This is achieved through a combination of the following:
$C^{*}_e$, a commitment to $e$ created by the prover in the RSA group: $C^{*}_{e} = G^{e} H^{s} \bmod N$.
$\Pi_{root}$, a ZKP of a committed root for $A$, i.e., a proof of knowledge of $e, s$ and $W$ such that
$$W^{e} = A \bmod N \quad \textrm{and} \quad C^{*}_{e} = G^{e} H^{s} \bmod N.$$
$\Pi_{modeq}$, a ZKP that $C^{*}_{e}$ and $C_{e}$ commit to the same value modulo $q$.
$\Pi_{range}$, a ZKP that $C_{e}$ commits to an integer in the range $(2^{\mu-1}, 2^{\mu})$.
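Putting these building blocks together, the intended statement is knowledge of a single integer $e$, together with openings $r$, $s$ and a witness $W$, such that

$$ C_{e} = g^{e} h^{r} \in \mathbb{G} \quad\wedge\quad C^{*}_{e} = G^{e} H^{s} \bmod N \quad\wedge\quad W^{e} = A \bmod N \quad\wedge\quad e \in (2^{\mu-1}, 2^{\mu}). $$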
Intuitively, $\Pi_{root}$ shows that $C^*_{e}$ commits to an integer that is accumulated in $A$ (at this point, however, such integer may be a trivial root, i.e., $1$). The goal of $\Pi_{modeq}$ and $\Pi_{range}$ is to rule out this corner case $e=1$ and to "securely link" the commitment $C^*_e$ in the RSA group created by the prover with the prime-order group commitment $C_e$ known to the verifier, namely to ensure that these two commitments open to the same integer $e$.
This is less straightforward than expected because $\Pi_{modeq}$ is only able to prove that the equality of the values committed in $C^*_e$ and $C_e$ holds modulo $q$ and not necessarily over the integers. Also, from $\Pi_{root}$ alone, one can only infer that $C^*_{e}$ commits to some integer $e^*$ that divides $\prod_{i=1}^{n} e_i$.
So, the bad case that malicious provers may trigger and that the protocol must exclude is that $e^*$ and $e^* \bmod q$ differ over the integers. This can be excluded with a quite detailed analysis whose basic idea is to use the fact that $\Pi_{root}$ nevertheless guarantees that $e^*$ can be the product of only a few primes in $S$ (say, one or two, depending on the difference between $\mu$ and $|q|$), and that $\Pi_{range}$ ensures that $e^* \bmod q$ lies in the range $(2^{\mu-1}, 2^{\mu})$.
To summarize, this technique enables linking the commitments in the RSA and prime-order groups in an efficient way, and this makes it possible to use this ZKP for set membership in combination with other SNARKs that are instantiated over the same prime-order group.
To conclude, in this post we went on a journey through the set membership problem. We started from seminal works (Merkle trees and RSA accumulators) that provide a solution to the scalability problem of set membership. Next, we went on to discuss solutions that allow one to maintain the privacy of the element involved in the set membership statement, a problem that is key in a number of applications from different domains, including finance and business- or governmental-level interaction. Given the emergence of ZKP applications in the real world, we expect that new solutions to the problem of privacy-preserving set membership will come out.
Dario Fiore
THE "HOLY EXPERIMENT": AN EXAMINATION OF THE INFLUENCE OF THE SOCIETY OF FRIENDS UPON THE DEVELOPMENT AND EVOLUTION OF AMERICAN CORRECTIONAL PHILOSOPHY (QUAKERS, RENAL, PRISON REFORM).
CROMWELL, PAUL FRANK., Florida State University
The Quaker era in American corrections is traditionally characterized in criminological literature as the brief experiment with substitution of imprisonment for the sanguinary corporal and capital punishments of England and the other colonies by William Penn in 1682, and as the subsequent rebirth of the philosophy by Philadelphia Quakers between 1790-1840. The premise underlying this research is that the origin and evolution of American correctional philosophy cannot be fully and accurately understood from any perspective that limits the Quaker influence to early periods of American history. The study elaborates the direct and indirect influence of a Quaker social reform movement which began in Europe in 1670 and continues today as a vital and viable force behind correctional public policy in the United States. Although the strength and impact of the Quaker social reform movement, the "holy experiment," as William Penn termed it, has waxed and waned over the past three centuries, the efforts of the Society of Friends to attain social justice in correctional reform have been a continuous social reform movement. The present research interprets the Quaker correctional reforms in America as a single social movement which evolved in distinct stages over a period of three hundred years. The theoretical frame of reference is a social contextual perspective, which considers the events in the social, political and economic context of the time. The evolution of the American correctional philosophy can be seen as a single, extended social movement which began with the Quaker persecution in Europe and the subsequent migration to America; evolved into a utopian effort to establish a new and better means of dealing with the criminal; and further developed into a reform effort, diffusing the gospel of the "penitentiary" and the new "prison discipline." Its basic philosophy remained for the next one hundred years the foundation of American correctional policy, only to be reexamined in the mid-twentieth century and found wanting by the same reformers who established it, and the struggle for reform began again.
THE "INNER GAME" APPROACH TO MOTOR SKILL LEARNING AND PERFORMANCE: AN INVESTIGATION INTO A SUGGESTED SUBCONSCIOUS MOTOR MECHANISM.
AUSTIN, JEFFREY STEWART., Florida State University
The "inner game" approach to skill acquisition and performance as presented by Gallwey was investigated in this study. His ideas were transposed into a working model which, in turn, formed the basis for all hypotheses in this study. Performance on an electronic video game was measured across two levels of "inner game" cueing, three levels of conscious attention blocking, and control, for both novice and advanced skill levels. A total of 120 subjects was utilized (72 male; 48 female). A...
Show moreThe "inner game" approach to skill acquisition and performance as presented by Gallwey was investigated in this study. His ideas were transposed into a working model which, in turn, formed the basis for all hypotheses in this study. Performance on an electronic video game was measured across two levels of "inner game" cueing, three levels of conscious attention blocking, and control, for both novice and advanced skill levels. A total of 120 subjects was utilized (72 male; 48 female). A preliminary test on the experimental apparatus (electronic video game) was used to determine skill level. Subjects were then assigned to groups (N = 10) by random stratification based on sex., Data in this study suggest that under certain dual processing conditions, learning and performance are facilitated. The cueing method advocated by Gallwey was effective in both the novice (learning) and advanced (performing) groups. However, all aspects of the working model are not supported in this study. Nevertheless, those groups that functioned with a secondary task designd to block conscious attention performed as well as control subjects., The approach presented by Gallwey, while in need of further exploration, may be considered a viable instructional strategy. The results are discussed in relation to previous findings reported in the motor learning literature.
The "noble experiment" in Tampa: A study of prohibition in urban America.
Alduino, Frank William., Florida State University
Prohibition sprang forth from the Progressive Era--the widespread reform movement that swept across the United States at the turn of the century. Responding to the dramatic changes in American society since the end of the Civil War, the Progressive movement encompassed a wide array of individuals and groups advocating a far-reaching program of economic, political, and social reform. For over forty years temperance zealots strived to impose their values on the whole of American society, particularly on the rapidly expanding immigrant population. These alien newcomers epitomized the transformation of the country from rural to urban, from agricultural to industrial. Rapidly-expanding urban centers were often the battleground between prohibitionists and supporters of the whiskey traffic. European immigrants, retaining their traditional values, gravitated to metropolitan areas such as Boston, New York, and Chicago. With the opening of the cigar industry in the mid-1880s, Tampa, Florida also began attracting large numbers of immigrants. Because of its pluralistic composition, the city might serve as a microcosm of the national struggle between the "wet" and "dry" forces. Using newspapers, oral interviews, and other primary materials, this study traces the various aspects of the prohibition movement in the city of Tampa. In addition, it details other peripheral areas associated with the advent of the Eighteenth Amendment including the drug and alien trades. Finally, this study examines the lengthy efforts to repeal the "Noble Experiment" and return legalized drinking back to Tampa.
THE "OLD SUMPTER HERO": A BIOGRAPHY OF MAJOR-GENERAL ABNER DOUBLEDAY.
RAMSEY, DAVID MORGAN., The Florida State University
Abner Doubleday was an unusual and often a controversial person. Born into a family staunchly supporting Andrew Jackson, Doubleday reflected the determined Unionist position of the strong-willed president. Abner's attitude towards the Union was later vividly demonstrated at Fort Sumter. A mediocre career at West Point illustrated Doubleday's lack of desire to excel although he possessed the ability to do so. The controversy over the origin of baseball, although Doubleday was never directly involved in the question, was the first of several controversies with which Abner Doubleday's name is associated. Doubleday never seemed satisfied with his early life. In his papers he continually referred to people, prominent in later years, whom he knew. While serving in the Mexican War, Doubleday continually felt the need to relate the dangerous situations in which he was placed. He seemed to want to demonstrate his personal responsibilities, which while actually meager, he viewed as of supreme importance. Doubleday apparently wanted to be a famous, bold cavalier, but realized he failed to accomplish his objective and stressed his "noble" deeds. Doubleday loved large cities and the benefits they offered a person. He liked being in the right social circles and enjoyed the "good life." By 1852, while serving as a commissioner for the Senate, Doubleday had come to despise Mexico and the Mexicans. By 1858, while serving in Florida, he disliked the inconveniences of chasing "savages." With secession in 1860 Doubleday no longer liked Charlestonians, later extending his revulsion to all Confederates. With the crisis at Sumter in 1861 Doubleday was greatly troubled. The affront to the United States government was almost more than he could bear. With the outbreak of the war, Doubleday was more than willing to fight the rebels. A dependable, if unspectacular soldier, Doubleday served well during the Civil War. While no one accused him of original thinking militarily, his men always fought well. Gettysburg was Doubleday's finest hour but became his final hour in the Civil War when he could not countenance serving under a junior officer. It seems strange that Doubleday served in the Freedmen's Bureau since his superior was none other than his old enemy from Gettysburg, O.O. Howard. Doubleday's service in California brought the controversy over the origin of the cable car. Retirement from the army in 1873 brought out several new qualities in Abner Doubleday. He wrote books, read French and Spanish literature, and became interested in the occult and became a believer in theosophy. Doubleday was a colorful figure in nineteenth century America. He was associated with several significant events in the growth of the nation. Doubleday represented, possibly to an extreme, the attitude of many American Unionists and supporters of Manifest Destiny. His commitment to a united nation is similar to Lincoln's attitude. Doubleday not only vocalized this sentiment, but, like Lincoln, was prepared to fight for his belief. Abner Doubleday was an intense American. He desired a strong, powerful United States and opposed those not supporting such a course.
THE "SACRED HARP" SINGING GROUP AS AN INSTANCE OF NON-FORMAL EDUCATION.
MITCHELL, HENRY CHESTERFIELD., The Florida State University
The "talk" of returning women graduate students: An ethnographic study of reality construction.
McKenna, Alexis Yvonne., Florida State University
This study looked at women's internal experience of graduate school. In particular, it focused on the experience of women returning full-time to graduate school after an extended time-out for careers and/or family. The questions examined were: (1) how do returning women "name and frame" their experience? (2) what, if any, is the relationship between the way the women "name and frame" their experience and their response to it? and (3) what role does the researcher-as-interviewer play in the construction of the data? Data were collected through a series of three ethnographic interviews with 12 returning women, ranging in age from 28 to 50. Two of the twelve women were single, two were widowed, seven were divorced and one was divorced and remarried. Eight of the women had children. Analysis of the data showed that returning women, as a group, "named and framed" their experience in terms of change. Some women wanted to change self-image or self-concept while others wanted to acquire a new set of skills or credentials. Individually, the women "named and framed" their experiences in terms of an internalized "meaning-making map" acquired in the family of origin but modified through adult experiences. This "map" told them who they were and what kind of a life they could have. It gave their "talk" and behavior a consistency that could be recognized; it could make life easier or harder. A woman who felt she must "prove" herself, for example, found graduate school more difficult than a woman who wanted to "work smart." The researcher-as-interviewer influenced the construction of data through her presence as well as through the kinds of questions she asked. The women understood and gave meaning to their experiences through the process of explaining them to the interviewer. The insights gained through this process of "shared talk" influenced future action and decisions.
THE 'PRESENT ETERNITE' OF 'TROILUS AND CRISEYDE.'.
LORRAH, JEAN., The Florida State University
THE (CARBON-12,BERYLLIUM-8) AND (CARBON-12,CARBON-12) REACTIONS ON EVEN CALCIUM ISOTOPES.
MORGAN, GORDON REESE., The Florida State University
THE (CARBON-12,BERYLLIUM-8) AND (OXYGEN-16,BERYLLIUM-8) REACTIONS ON CARBON-12, OXYGEN-16, AND SILICON-28 NUCLEI.
ARTZ, JERRY LEE., The Florida State University
(oxygen-16 + thorium-232) incomplete fusion followed by fission at 140 MeV.
Gavathas, Evangelos P., Florida State University
Cross sections for incomplete fusion followed by fission have been measured for the reaction ($^{16}$O + $^{232}$Th) at 140 MeV. In-plane and out-of-plane measurements were made of cross sections for beamlike fragments in coincidence with fission fragments. The beamlike fragments were detected with the Florida State large-acceptance Bragg curve spectrometer. The detector was position sensitive in the polar direction. The beamlike particles observed in coincidence with fission fragments were He, Li, Be, B, C, N and O. Fission fragments were detected by three surface barrier detectors using time of flight for particle identification. The reaction cross section due to incomplete fusion is 747 $\pm$ 112 mb, or 42% of the total fission cross section. The strongest incomplete fusion channels were the helium and carbon channels. The average transferred angular momentum for each incomplete fusion channel was calculated using the $Q_{\rm opt}$ model of Wilczynski, and the angular correlation was calculated using the saddle-point transition-state model. The K distribution was determined from the Rotating Liquid Drop model. The theoretical angular distributions were fitted to the experimental angular distributions with the angular momentum J and the dealignment factor $\alpha_{o}$ as free parameters. The fitted parameter J was in excellent agreement with the $Q_{\rm opt}$ model predictions. The conclusions of this study are that the incomplete fusion cross section is a large part of the total cross section, and that the saddle-point transition-state model adequately describes the observed angular correlations for fission following incomplete fusion.
125-Iodine: a probe in radiobiology.
Warters, Raymond Leon
THE 1928 PRESIDENTIAL ELECTION IN FLORIDA.
HUGHES, MELVIN EDWARD, JR., The Florida State University
THE 1964 WISCONSIN PRESIDENTIAL PRIMARY: GEORGE C. WALLACE.
WINDLER, CHARLES WILLIAM, JR., Florida State University
In 1963, Alabama Governor George C. Wallace defied a court order by Attorney General Nicholas Katzenbach to integrate the University of Alabama. This incident turned the governor into a national celebrity and led to a number of speaking engagements across the country. During one of these engagements, Wallace indicated an interest in entering certain presidential primaries in the North in order to campaign against the pending national civil rights legislation. The Wisconsin Democratic presidential primary was the first of these races. Since President Lyndon Johnson had the Democratic presidential nomination for the asking, little attention was given to the Wallace candidacy. Governor John Reynolds was selected to run against Wallace as the Democratic favorite-son candidate, and the Republicans chose Representative John Byrnes as their favorite-son candidate. When the votes were cast on April 7, the entire nation was surprised at the large number of votes obtained by Wallace. Upon examination of the conditions and events prior to and during the presidential primary campaign, the following factors apparently contributed to the surprising showing of Governor Wallace: (1) An open primary system existed in Wisconsin that allowed a large Republican cross-over vote for Wallace; (2) The Republican favorite-son candidate had no opponent; (3) The Democratic party was divided over their favorite-son candidate, one of the most unpopular governors in the political history of Wisconsin; (4) Wallace's opponents waged a personal defamation campaign based on Wallace's reputation as a racist, to which Wallace did not respond; and (5) Some white residents of Wisconsin were afraid of the increasing civil rights demands of the black population. These factors served to gain support and sympathy for the Wallace candidacy and to focus national attention on the Alabama governor as he conducted subsequent campaigns in Maryland and Indiana.
The 1988 World Bank policy study on education in sub-Saharan Africa revisited: A value-critical policy inquiry.
Ota, Cleaver Chakawuya., Florida State University
The spirit and logic of the 1988 World Bank report reside in the trilogy that is its subtitle: adjustment, revitalization and expansion. In the context of ongoing austerity in Africa, it is strongly asserted that a fundamental restructuring of education is necessary to improve efficiency, effectiveness and equity in education. Controversial adjustment reforms proposed include measures that will substantially shift the burden of educational finance from government to students, parents, and other parties. Such measures include cost recovery and the reduction of teachers' salaries, among other things. If, and only if, adjustment measures have been implemented and begun to take hold, then revitalization and selective expansion may be undertaken. Revitalization and selective expansion will reportedly improve quality and access in education. They include the provision of a minimum package of textbooks and other instructional materials and expansion of primary education to provide universal access. The purpose of this study was to investigate and critically evaluate the knowledge base that undergirds the World Bank study and the technical and political feasibility of the proposed reforms. A multi-methodological research strategy including critical public policy analysis and value-critical policy inquiry was employed. The main findings of this study are that the data used in the Bank study are unreliable, the knowledge base narrow, the arguments underlying the policy framework of the report unpersuasive and controversial, and the agenda for action internally inconsistent. These criticisms should not detract from the immense value and importance of the document in that it is the first document that critically looks at education in the crisis-beleaguered continent.
50 MeV lithium-6 scattering from carbon-12, oxygen-16, and beryllium-9 and the calibration of the tensor-polarized lithium-6 beam.
Trcka, Darryl Eugene., Florida State University
The experimental work reported consists of (1) the measurements of the angular distributions for the scattering of $^{6}$Li from the targets $^{9}$Be, $^{12}$C, and $^{16}$O at a lithium bombarding energy of 50 MeV, and (2) the measurement of the tensor polarization of the FSU polarized $^{6}$Li source. 50 MeV data were taken for elastic and inelastic scattering to the $2^{+}$ (4.44 MeV), $0^{+}$ (7.65 MeV), and $3^{-}$ (9.64 MeV) states in $^{12}$C, the $5/2^{-}$ (2.43 MeV) state in $^{9}$Be, and the unresolved $0^{+}/3^{-}$ (6.05/6.13 MeV) and $2^{+}/1^{-}$ (6.92/7.12 MeV) states in $^{16}$O. The measurement of the tensor polarization of the FSU $^{6}$Li source allowed the absolute polarization efficiency of the source-accelerator system to be determined. The analytical work reported consists of a determination of the energy dependence of the optical potential parameters for $^{6}$Li + $^{12}$C scattering over the energy range from 11 MeV to 210 MeV. This has been attempted previously and the results have not been successful. A large body of data for $^{6}$Li + $^{12}$C allows more severe constraints than in previous studies. The inclusion of an angular momentum-dependent imaginary potential provides a good description of the elastic scattering data, and the parameters determined in this study vary smoothly with energy using Woods-Saxon form factors for the real and imaginary potentials. Inelastic scattering to the $2^{+}$ (4.44 MeV), $0^{+}$ (7.65 MeV), and $3^{-}$ (9.64 MeV) states in $^{12}$C is described well using the constructed energy-dependent potentials in DWBA calculations. Analyses using the double-folded real potential and a Woods-Saxon imaginary potential were performed on the same $^{6}$Li + $^{12}$C scattering data from 11 MeV to 210 MeV. The scattering data for 50 MeV $^{6}$Li scattering from the targets $^{16}$O and $^{9}$Be are described using optical potentials and DWBA calculations. Less information is obtained from these analyses because data do not exist at this time over a wide enough energy range to provide a constraint on the interaction potentials.
A COMPARATIVE EVALUATION OF FACULTY AND STUDENT PARAPROFESSIONAL ACADEMIC ADVISEMENT PROGRAMS AT THE FLORIDA STATE UNIVERSITY.
MAC ALEESE, ROBERT WILLIAM., Florida State University
A comparison of two distinctive preparations for quantitative items in the Scholastic Aptitude Test.
Kelly, Frances Smith., Florida State University
The SAT is a major milestone for many high school juniors and seniors. Scoring as high as possible is of utmost concern for college-bound students because SAT scores often determine the college or university they may attend and the scholarships they may receive. As a result, those who can financially afford to take prep courses for the SAT do. Over the past forty years research studies have found that SAT preparation increases test scores. These previous studies have been concerned only with increasing test scores. To date, no study has investigated whether one method of preparation produces higher gains than another, nor has any study identified those students for whom preparation is most beneficial. A comparison of methods among existing studies is impossible because most reports do not include the methods or materials used. The contents of most SAT preparatory books deal primarily with a review of the mathematical concepts involved. However, an inspection of several SAT items reveals that the SAT tests more than mere rote calculations and algebraic manipulations--it tests "understanding," "application," and "nonroutine" methods of problem solving. Therefore, the present study was proposed to examine and assess the effectiveness of two methods of student preparation for the SAT-M: the first method of preparation explored content review, solving each item in a rigid traditional manner, and the second method examined the use of flexible problem-solving strategies to answer the items rather than routine mathematical manipulations. Sixty-two juniors and seniors participated in the study. The results of the study showed that the students taught test-taking strategies scored significantly better than the control group. However, this strategies group did not score significantly better than the group that was taught content. The content group did not score significantly better than the control group. This indicates that students could benefit from instruction in flexible, nonroutine methods of solving SAT-M items efficiently.
A CRITICAL EDITION OF THE FIRST TWO MONTHS OF W. B. YEATS'S AUTOMATIC SCRIPT (IRELAND).
ADAMS, STEVE LAMAR., Florida State University
William Butler Yeats's involvement in the esoteric and the occult has attracted considerable interest in the past decade, but much remains unknown about his philosophical development during the period of his life when he was engaged in the most profound spiritual or psychical investigation or experiment of his brilliant career, an experiment which gave birth to A Vision. Often described as the most important work in the canon to the understanding of his art and thought if not his life, this ambitious work represents Yeats's attempt to explain the basic psychological polarities of the human personality, the course of Western civilization, and the evolution and movement of the soul after death. The cogency and gravity of the experiment of investigation which produced a book of these epic proportions cannot be overestimated; indeed, the contents of this well-recorded experiment may well be the most significant body of unexplored Yeats material. The fundamental aim of this study, which includes only the first crucial months of the Automatic Script, is to present to the scholarly world for the first time a transcript of the often obscure, often complex body of materials that led directly to Yeats's most profound work of art. In order to place this manuscript in its proper biographical and critical context, explanatory notes have been included, explicating the essential features of the experiment (i.e., the recording of dates, the authors of questions and responses, the placement of diagrams and notes by George and Yeats, the physical state of the manuscript, etc.) and unraveling or spelling out the numerous references to Yeats's primary works, those appearing prior to as well as those growing directly out of the Automatic Script; special attention has been focused on those materials which were eventually embodied in the 1925 version of A Vision. An editorial introduction preceding the transcript demonstrates how this momentous experiment was the logical extension of a series of psychical investigations and, in much broader terms, the culmination of a spiritual odyssey that Yeats had begun almost as early as the days of his youth.
A critical edition of W. B. Yeats's automatic script, 11 March-30 December 1918.
Frieling, Barbara Johnston., Florida State University
Professor George Mills Harper writes in his recent book The Making of Yeats's 'A Vision': A Study of the Automatic Script that, despite his copious quotations from these unpublished manuscripts, "nothing but the whole will satisfy the truly involved reader." Perhaps the most comprehensive occult papers that have been preserved in the history of psychical research, the 3627 existing pages of the Automatic Script are of extreme interest to Yeats scholars, not only as the source for A Vision but also as documentation of the creative collaboration between Yeats and his new wife George during the 450 sittings held between 5 Nov 1917 and 28 Mar 1920. This critical edition provides the complete text for that portion of the Automatic Script written during the Yeatses' first visit to Ireland following their marriage. (Under the direction of Professor Harper, Steve L. Adams edited the first two months of the Script as a doctoral dissertation in 1982, and Sandra Sprayberry is preparing that portion of the Script written between 2 Jan 1919 and 28 Mar 1920.) Included in this dissertation is an editorial introduction describing the methods used by the Yeatses in the automatic writing and its subsequent "codification"; the relationship of the Script to Yeats's 1918 poetry and plays; and the synthesis of his life-long involvement in the occult that Yeats achieves in the two versions of A Vision. Extensive endnotes relate the Automatic Script to Yeats's Card File and Vision notebooks as well as to his poetry, plays, and the two versions. Of special note is the emergence of the tower as a major symbol as the Yeatses first occupied Thoor Ballylee, and their growing conviction that their expected child would be the Irish Avatar. The 1918 Script demonstrates clearly that George Yeats was an equal partner in the amazing collaboration that produced A Vision and that provided her husband with metaphors for his later poetry.
A model for reading comprehension.
Salazar Melendez, Clara Enriqueta., Florida State University
This study intended to provide the information needed when deciding which reading processes to develop in order to improve the reading comprehension of seventh and eighth graders. Of 49 possible variables, Inferences, Text Structure, Decoding, Prior Topical Knowledge, Vocabulary, and NewVocabulary were chosen to create a model for Main Idea performance which was embedded into a model for overall comprehension as measured by a Cloze exercise. The variables having the greatest total effects on comprehension were defined as the most indicated to be included in treatment studies. Subjects were 102 seventh and eighth grade average readers from the Florida Developmental Research School. The materials used were a standardized vocabulary test, a reading passage, a list of decoding words, and a set of 56 questions which measured the included processes. The hypothesized model, which was tested using LISREL 7, was not supported by the data. Improvement in model fit resulted from fixing effects falling on Main Idea and estimating effects falling on the Cloze. In the new model, the included variables explained more of the variance in the Cloze (62%) than in Main Idea (37%). Moreover, Main Idea performance was unrelated to overall comprehension as measured by the Cloze. Inferences, Prior Topical Knowledge, and Text Structure had large and statistically significant direct, indirect, and total effects on comprehension; Decoding affected comprehension only indirectly. Vocabulary and NewVocabulary were unrelated to comprehension. Three conclusions stemmed from these findings. First, since Inferences affected both vocabulary measures and comprehension and since the vocabulary measures did not affect comprehension, it was suggested that the positive correlation between vocabulary and comprehension is due to an intervening variable, Inferences. Second, the evidence for defining Inferences and Text Structure as having the potential for being causally related to comprehension became stronger. Lastly, if performance on a cloze exercise is the outcome variable from a study with average seventh and eighth grade readers possessing adequate prior knowledge, then it is hypothesized that Decoding, Prior Topical Knowledge, Inferencing ability, and Text Structure are the variables most indicated to be included in treatment studies since they had the largest total effects on comprehension.
ABILITIES OF COLLEGE STUDENTS TO INVOLVE SYMMETRY OF EQUALITY WITH APPLICATIONS OF MATHEMATICAL GENERALIZATIONS.
FRAZER, COLLEEN DOANE., The Florida State University
THE ABORTIVE ENTENTE: THE AMERICAN POPULAR MIND AND THE IDEA OF ANGLO-AMERICAN COOPERATION TO KEEP THE PEACE, 1921-1931.
RICHARDS, DAVID ALLEN., The Florida State University
Abscisic acid: Molecular requirements for activity on stomata and effects on guard-cell protein synthesis.
Hite, Daniel Russell., Florida State University
A rapid, quantitative stomatal bioassay was developed to test abscisic acid (ABA)-like inhibition of stomatal opening by ABA-conjugates in epidermal peels of Commelina communis L. The one-hour bioassay was sensitive to 0.02 $\mu$M (+)-S-ABA and was insensitive to 20 $\mu$M ($\pm$)-S-ABA-1-methyl ester, which is consistent with previous work. Replacement of the C-4$^\prime$ carbonyl on ABA with hydrazone conjugates rendered these ABA-conjugates ineffective in inhibiting stomatal opening. Competition assays between ABA and excess ABA-conjugates demonstrated that ABA-conjugates did not interfere with ABA inhibition of stomatal opening. Together, these findings demonstrate the unlikelihood of producing anti-idiotype antibodies against C-4$^\prime$-substituted ABA for identification of receptor(s) involved in stomatal closure. In a separate study, the interactions of ABA, Ca$^{2+}$, and osmoticum on ABA accumulation, $^{35}$S-amino-acid accumulation and incorporation into protein and protein synthesis were investigated in "isolated" guard cells of Vicia faba L. The effects of eight permutations, $\pm$ ABA, $\pm$ Ca$^{2+}$ (EGTA) and $\pm$ osmoticum (mannitol), on $^{35}$S-protein synthesis were investigated during two one-hour radiolabelling periods: the first and the fifth hours of incubation. Guard cells were "isolated" by sonication of epidermal peels. Ca$^{2+}$ depletion (EGTA) and osmoticum inhibited ABA accumulation in guard cells. $^{35}$S-amino-acid accumulation was inhibited to $<$50% of control values by ABA during the first hour of incubation and to varying extents by ABA, Ca$^{2+}$ depletion and osmoticum during the fifth hour of incubation with combinations of effectors causing greater inhibition. Incorporation percentages were not significantly different between incubation conditions of the same time interval, indicating a correlation between $^{35}$S-amino-acid accumulation and incorporation into protein. Computer-assisted analysis of autoradiographs of $^{35}$S-proteins following separation by two-dimensional micro-polyacrylamide-gel electrophoresis determined that changes in guard-cell $^{35}$S-protein profile were elicited by all three effectors and incubation duration. Although Ca$^{2+}$-dependent synthesis of proteins was discerned consistently during the fifth hour of incubations, Ca$^{2+}$-dependent synthesis of ABA-induced proteins was not discerned. ABA- and osmotic-induced synthesis of similar protein(s) was not discerned consistently during either radiolabelling period. These results are based on a conservative interpretation of changes in $^{35}$S-protein profiles that allowed for comparisons among all incubation conditions.
ABSENCES. (ORIGINAL COMPOSITION).
BROTONS, SALVADOR., Florida State University
Absences is a composition for large orchestra and narrator on thirteen poems from the "Book of Absences" by the Catalan poet Miguel Marti i Pol. Elaine Lilly translated the poems into English, and the score includes both versions (Catalan and English). The piece has a duration of approximately twenty-two minutes and is conceived as a single movement. The poems are about the poet's feelings after his wife's death. Although the work is a meditation on death, the poet's viewpoint is not always dark or pessimistic. The poems offer the contrasting themes needed to make the piece interesting and varied.
ABSOLUTE LINE STRENGTHS OF THE 2-0 BAND OF CARBON-MONOXIDE AT LOW PRESSURES.
KORB, C. LAURENCE., The Florida State University
ABSOLUTE SCOTOPIC THRESHOLDS IN THE PIGEON DETERMINED BY CLASSICAL CONDITIONING OF DIRECTED MOTOR ACTION.
PASSE, DENNIS HILARY., The Florida State University
ABSOLUTELY SUMMING OPERATORS IN BANACH SPACES.
MORRELL, JOSEPH SALVADOR., The Florida State University
ABSTRACT-EXPRESSIONISM: AN ANALYSIS OF THE MOVEMENT BASED PRIMARILY UPON INTERVIEWS WITH SEVEN PARTICIPATING ARTISTS.
KASHDIN, GLADYS SHAFRAN., The Florida State University
Abused youths' attitudes toward physical punishment: A test of the intergenerational transmission of physical child abuse.
Clausen, Margaret Lynne., Florida State University
The intergenerational transmission of physical child abuse was addressed by examining the relationship between 121 male adolescent delinquents' self-reported childhood experiences with physical discipline and the intensity of the discipline they endorse for children. Childhood experiences with physical punishment were assessed through the frequency with which adolescents were punished by their parents and the magnitude of resulting injuries they had received. Endorsement of discipline was defined both by intensity of physical punishment and by intensity of any punishment, irrespective of form. The influences of sex and perceived rewardingness of the administrator of the harshest physical discipline were also examined, along with subjects' attributions for the punishment they had received. Adolescents were asked to choose the discipline they (a) would use and (b) would feel like using in response to a series of parent-child scenarios in which the child was misbehaving. A statistically significant, but small, relationship was found between the magnitude of the injuries subjects reported having received as a result of punishment and the intensity of punishment they endorsed: subjects who had received physical injuries were more likely to indicate that they would administer intense discipline to their children. Similarly, a small, but statistically significant, interaction of frequency of punishment and sex of the disciplining parent was found: adolescents who reported having been physically punished frequently by their fathers were more likely than those punished by their mothers or those not frequently punished to indicate that they would feel like using intense physical punishment with their own children. None of the attributions had any utility for predicting adolescents' endorsements of punishment, but they did suggest that adolescents generally perceive their parents' punishment as justified and well-intentioned. Overall, the results of this study do not provide strong support for postulations based upon social learning theory or theories of moral development regarding the role of early disciplinary experiences in predicting adolescents' current attitudes toward punishment.
ACADEMIC ACHIEVEMENT OF STUDENTS WHO HAVE BEEN AWARDED CREDIT FOR EXTRAINSTITUTIONAL LEARNING.
ROSE, RUFUS EDWARDS, JR., Florida State University
Most postsecondary institutions use techniques for assessing or validating extrainstitutional learning. The three major types of extrainstitutional learning are learning that is assessed by credit-by-examination programs, training for which credit is recommended by the American Council on Education, and experiential learning that is assessed individually. These techniques apply most to adult students, who will make up 47% of college students by 1990. This study compared academic achievement of nontraditional students who had significant amounts of extrainstitutional learning with achievement of traditional students. The subjects were graduates of a university college program over an 8-year period. Achievement was measured by quality point average and in other ways. Achievement of nontraditional students did not differ significantly from that of traditional students. There was negligible correlation of either age or number of extrainstitutional credits with quality point average. These findings empirically supported current national policies and institutional practices regarding recognition of extrainstitutional learning.
The academic and social integration of Black students in selected predominantly White institutions in Florida.
Thompson, Anthony Charles., Florida State University
According to the literature, academic and social integration, in some formal, informal or structural format, are related elements of student persistence in higher education (Metzner & Bean, 1987; Pascarella & Terenzini, 1983; Tinto, 1987; Voorhees, 1987). Despite the enrollment gains made by Black students in higher education, these students continue to experience low retention and graduation rates. In addition, most Black students currently attend predominantly White institutions (PWI); however, historically Black colleges and universities (HBCU) award a majority of the degrees granted to Black students (Allen, 1985). More specifically, from 1984 through 1989, Black student enrollment in the state of Florida increased while degrees awarded decreased. Conversely, as White student enrollment increased, so did degree attainment (Florida Board of Regents, 1990). What happens to Black students inside as well as outside of the classroom, after admission to and upon entering the college or university environment? The purpose of this study was to examine the academic and social integration of full-time, undergraduate, Black students enrolled in selected PWIs in Florida. Pascarella and Terenzini's Academic and Social Integration Inventory (ASII) was used as the measurement tool. Tinto's (1975, 1987) theory of academic and social integration as the basis for student persistence is the conceptual framework which guides the research for this study. Tinto's model is supported in both the attrition and retention literature.
ACADEMIC CAREER PATTERNS OF FACULTY RESEARCH PUBLICATIONS.
LEWIS, STEPHAN ALEXANDER., The Florida State University
ACADEMIC EFFECTIVENESS OF ABILITY GROUPING AND A STUDENT TUTORIAL - COUNSELING PROGRAM AT MADISON COLLEGE.
SHAFER, ELIZABETH GLOVER., The Florida State University
Academic information needs and information-seeking behavior of blind or low-vision and sighted college students.
Brockmeier, Kristina Crittenberger., Florida State University
Twenty-eight blind or low-vision and fourteen matched-sample sighted students attending public post-secondary institutions in the Atlanta metropolitan area were interviewed in this descriptive research to determine their academic information needs and their information-seeking behaviors. Thirty-six of the forty-two students discussed an academic information need related to a writing assignment, five students discussed an academic information need that was based on something other than a writing assignment, and one student did not have any academic information need. The academic information needs were analyzed in terms of variables such as type of vision, conditions of visual impairment, secondary school attended, gender, year in college, full or part-time status, major or program of study, and familiarity with the library. The students' information-seeking behaviors were analyzed based on which of ten potential sources of information they used to satisfy their academic information need. For all students, the most frequently used information source was the library. Few students sought information from social services or governmental agencies. The blind or low-vision students discussed their dependency on and the qualifications they sought in readers. Additionally, they identified areas in which librarians could improve service or assistance for blind or low-vision students. The study concludes with some of the researcher's observations related to working with blind or low-vision college students.
The academic preparation and performance of student-athletes participating in football and men's basketball at Florida State University from 1986-1990.
Mand, Brian Sheldon., Florida State University
Although research on the academic preparation and performance of college athletes is plentiful, the studies that have been conducted often do not distinguish the sport, race and sex of their samples. However, the academic performance of revenue-producing sports athletes has come under severe criticism, especially through the media. Some research findings do support the contention that the academic performance of football and men's basketball players, especially those of black ethnic origin, does pale when compared to that of non-athletes and non-revenue sports athletes (Renwick, 1982; Mayo, 1982, 1986; Bartell et al., 1984; Sandon, 1984; Ervin et al., 1984; American Institutes for Research (AIR), 1988, 1989). The National Collegiate Athletic Association (NCAA, 1993) has responded to calls for reform by implementing legislation designed to return college athletic programs as "an integral part of the educational program and the athlete as an integral part of the student body" (p. 1). The purpose of this study was to examine the relationships between academic preparation in high school and scores on the SAT or ACT, and between academic preparation, scores on the SAT or ACT, and the academic performance in college of Florida State football and men's basketball players who initially enrolled from 1986-1990 on an athletic scholarship. These relationships were determined first for all subjects and then for subjects split by ethnic origin. Multiple regression analysis showed that high school academic grade point average was the most important independent variable for predicting how subjects of both white and black ethnic origin will do on the SAT and ACT, as well as how they will perform academically during the first two years in college. Also, t-tests determined that subjects of white ethnic origin demonstrated a significantly higher level of academic preparation in high school than did subjects of black ethnic origin, but there was no such significant difference in academic performance in college. These findings are intended to assist Florida State University in making admissions decisions and in implementing retention strategies that may include noncognitive measures and academic intervention and support programs. The results and implications of this study may also be of value to the NCAA in establishing initial eligibility academic standards.
Academic success and failure: A test of its effect on the disruptive behavior of three male adolescents.
Grande, Carolyn Gerlock., Florida State University
Consecutive multielement designs were conducted to examine the effect of academic success and failure on the classroom disruptiveness of three low-achieving eighth graders: Larry, Jimmy, and Jeff. During 10 days, five conditions of success and of failure were randomly alternated and induced by means of written assignments. At the end of class the teacher told the student his grade without social reinforcement. Following this, the first occurrence of talking, being physical, and being out-of-seat was recorded in his next class during eighty 10-second observation intervals. Interobserver reliability averaged above 80% across these measures. Daily grades, known as background variables, received by each student in classes prior to the experimental sessions were also analyzed. Larry's teachers recorded grades on days he was notified. Jimmy's and Jeff's teachers arranged for grade notifications, if any, according to the experimental sequence. A clear relationship between background variables and experimental effect was not discernible. A functional relationship between success and failure and disruptive behavior was not demonstrated. Differences between median percentages during success and failure revealed that the notifications only slightly affected subsequent student behavior. Larry's talking behavior was unaffected. For Jimmy, a median percentage of 60% during failure indicated his talking behavior almost doubled that recorded for success (32%). Jeff's talking behavior escalated during both conditions. Larry's and Jeff's physical behavior, unlike Jimmy's, appeared to increase slightly following success notifications, as indicated by differences between the medians of 5% and 9%, respectively. Jimmy's median percentages for success and failure (12% and 19%) showed a slight difference of 7% in his physical behavior during failure. Out-of-seat behavior was minimal for all students. Median percentages for Larry's and Jimmy's out-of-seat behavior following success were zero. Following failure, median percentages were 10% and 4%, respectively. A difference of 1% between Jeff's median percentages was recorded. Debriefing sessions held for each student indicated they were pleased to have been involved in the study.
ACADEMIC WOMEN IN HOME ECONOMICS AT LAND-GRANT INSTITUTIONS.
GREENWOOD, BONNIE FAY BROOKS., The Florida State University
Academy education in antebellum Florida, 1821-1860.
Crandall, Robert Charles., Florida State University
Antebellum Florida developed an informal system of academies that served as dominant educational institutions until after the Civil War. Academies followed people; they grew in size and number as cities and towns grew. Four basic types of academies appeared in urban areas, ranging from small, simple one-subject, one-room academies to institutions numbering over one hundred students. Rural academies ("old-field" schools) ranged from one tutor teaching the children of one plantation owner to neighborhood schools for children of surrounding plantations. These academies were smaller in size and offered fewer subjects than their urban counterparts. Academies were started to combat ignorance, educate useful citizens, and teach proper moral values to the young. Patrons were willing to educate their own, but tax-supported institutions were unacceptable to them; tuition-based academies for those who could afford them were a result of this conviction. Larger, urban academies offered Classical and English studies six hours a day, five days a week, ten months a year with two months vacation. Public examinations generally followed each quarter. Teachers in academies varied in origin, longevity and quality. Most came from outside Florida; many were transient, and some were career educators who remained in Florida. College degrees were more common in the latter decades of the era. Men outnumbered women, though women increased in numbers as academies educating young ladies increased in size and number. Boys and girls were educated separately; recitation was the most common method of instruction. Textbooks were scarce early in the era; later decades had a large variety available in urban Florida bookstores. The withdrawal of the South from all Northern influences from 1850 onward resulted in the exclusion of most Northern teachers from Florida academies. Despite much clamor for Southern textbooks, Northern textbooks continued to be used. The Civil War destroyed the academy system; changed conditions after the war no longer required its function.
THE ACCEPTABILITY AND EFFECTIVENESS OF MATERIALS REVISED USING INSTRUCTIONAL DESIGN CRITERIA (GAGNE).
MENGEL, NANCY S., Florida State University
The purpose of this study was to determine if postsecondary vocational teachers who reviewed a chapter taken from a traditional, commercial textbook and revised using instructional design criteria had significantly different attitudes toward adopting the chapter from teachers who reviewed the original, unrevised version. This study also assessed whether the revisions had a positive effect on student performance. Nine instructional designers followed Gagne's events of instruction to prescribe revisions of the chapter to make it more effective in teaching specified objectives. The Instructional Materials Acceptance Questionnaire was developed to measure teachers' expression of acceptance/rejection behaviors toward using the material. A criterion-referenced achievement test was developed to measure student performance on the chapter's objectives. Information was collected on the effects of reading ability on student performance on both versions of the instructional material, on the time spent by learners to complete the chapter and the test, and on learners' attitudes. There was no evidence to show that teachers who reviewed the modified chapter were more or less willing to use it than teachers who reviewed the original version. Teachers expressed slightly favorable attitudes toward using both versions of the instructional material. However, the instructional design revisions did significantly improve student performance on a criterion-referenced achievement test. Students who read the modified chapter took 28% more time to complete it than students who read the original chapter. There was no difference in the amount of time students in the two groups took to complete the test. Teachers and learners paid more attention to content than to instructional features when forming attitudes toward using either version of the instructional material.
THE ACCEPTANCE, KNOWLEDGE, AND USE OF FAMILY-PLANNING TECHNIQUES AS RELATED TO SOCIAL-CLASS MEMBERSHIP IN THE WHITE POPULATION OF A SOUTHERN COMMUNITY.
GILBERT, ROBERT I., The Florida State University
ACCOUNTING BY FRANCHISING CORPORATIONS FOR FRANCHISE SALES: REVENUE RECOGNITION AND RECEIVABLE VALUATION.
CALHOUN, CHARLES HYMAN, III., The Florida State University
Accounting changes and earnings management: Evidence from the early adoption of SFAS No. 96 'Accounting for Income Taxes'.
Eakin, Cynthia Firey., Florida State University
An empirical analysis of the characteristics of firms choosing early adoption of SFAS No. 96 was conducted. Positive accounting theory forms the basis for the analysis. Two analyses are presented. The first analysis compares characteristics of a sample of firms that adopted SFAS 96 with characteristics of a control sample of firms that did not adopt SFAS 96. The second analysis compares characteristics of firms adopting SFAS 96 in the first year possible with characteristics of firms adopting SFAS 96 in later years. The characteristics examined in the first analysis are firm size, leverage, and dividend payout. In addition to the characteristics examined in the first analysis, the second analysis examined two variables representing each firm's return on assets. The results of the first analysis indicate that, when adoption of SFAS 96 results in an increase in net income, the firms adopting SFAS 96 are smaller and more highly leveraged than the non-adopting firms. There is no significant difference between the two groups for the dividend payout variable. The results of the second analysis indicate that, when adoption of SFAS 96 results in an increase in net income, firm size, leverage, and dividend payout are not determinants in the timing of adoption of SFAS 96. However, the level of return on assets is a determinant in the timing decision. In particular, firms having lower return on assets compared to prior years' return on assets are more likely to adopt SFAS 96 in the first year possible. Firms with higher return on assets are more likely to postpone adoption. This finding is consistent with the income smoothing hypothesis, and is not consistent with the bonus maximization hypothesis proposed by Healy (1985).
ACCOUNTS RECEIVABLE MANAGEMENT: THE DEVELOPMENT OF A GENERAL TRADE-CREDIT-LIMIT ALGORITHM.
BESLEY, SCOTT., Florida State University
The purpose of this study is to construct a general credit-limit algorithm that is consistent with the firm's goal of wealth maximization under funds constraints. Specifically, the net present value (NPV) technique is employed to build a foundation for the model because its acceptability is well established in capital budgeting theory. While it is not a novel approach in receivables management, it is rarely used to specify credit limits. Yet the application of NPV in the derivation of a credit-limit algorithm is conducive to satisfying the requirement that credit-limit decisions and accept/reject decisions are concurrent credit-granting considerations. Moreover, by incorporating mathematical programming procedures, funds limitations can be considered to ensure the resources of the firm are not incorrectly invested in receivables "loans". Therefore, it is a fundamental contention that the credit-granting decision must be approached not only on the basis of individual accounts, but also from the standpoint of receivables in aggregate. To operationalize the credit-limit model, a default-probability model is developed. The "minimum chi-square rule" is employed because it has the desirable property of minimizing misclassifications. Further, this procedure is consistent with the three characteristics which are important to the derivation of a practicable credit-limit algorithm: namely, (1) theoretical consistency, for interpretive rationale, (2) parsimony, for ease of understanding, and (3) practicability, for the possibility of future application. An integral part of the dissertation is a survey of current credit-limit practices, which provides an update to existing literature. The general findings suggest that credit limits represent a device utilized by lending firms to control exposure to the risks associated with extending credit, but the actual techniques used to establish the limits are quite subjective. This implies that the more theoretically sound and sophisticated methods proposed in the academic literature are not employed in the real world.
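For readers who want a concrete sense of the NPV framing described in the abstract above, the following Python sketch illustrates one minimal way to cast a single account's credit-granting decision as an expected-NPV test. It is not the dissertation's actual algorithm; the function name credit_npv and parameters such as p_default and credit_period_days are hypothetical, chosen only to make the idea runnable.

```python
# Hedged illustration: an expected-NPV accept/reject check for extending trade
# credit on one order. All names and parameter values here are hypothetical.

def credit_npv(sale_amount, variable_cost, p_default, credit_period_days,
               annual_discount_rate):
    """Expected NPV of granting credit on a single order.

    The firm incurs variable_cost today and collects sale_amount after
    credit_period_days with probability (1 - p_default); nothing is
    collected if the customer defaults.
    """
    t = credit_period_days / 365.0
    expected_collection = (1.0 - p_default) * sale_amount
    return expected_collection / (1.0 + annual_discount_rate) ** t - variable_cost


if __name__ == "__main__":
    npv = credit_npv(sale_amount=10_000, variable_cost=8_500,
                     p_default=0.05, credit_period_days=60,
                     annual_discount_rate=0.12)
    # Grant credit only when the expected NPV is positive; a firm-wide funds
    # constraint could then be imposed by ranking candidate accounts, in the
    # spirit of the mathematical programming step the abstract mentions.
    print(f"Expected NPV: {npv:.2f} -> {'grant' if npv > 0 else 'reject'}")
```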
The Accrediting Council on Education in Journalism and Mass Communications: A history, 1970-1985.
Workman, Gale A., Florida State University
The purpose of this study was to present an evolutionary history of the Accrediting Council on Education in Journalism and Mass Communications (ACEJMC) for the years from 1970 to 1985. This study, together with a 1970 dissertation chronicling the history of the Council from its conception through 1969 provides a comprehensive history of accreditation for journalism education., Between 1970 and 1985 ACEJMC underwent major changes in personnel, policy and procedure, with most of the changes...
The purpose of this study was to present an evolutionary history of the Accrediting Council on Education in Journalism and Mass Communications (ACEJMC) for the years from 1970 to 1985. This study, together with a 1970 dissertation chronicling the history of the Council from its conception through 1969, provides a comprehensive history of accreditation for journalism education. Between 1970 and 1985 ACEJMC underwent major changes in personnel, policy and procedure, with most of the changes occurring in 1983 and 1984. Therefore, the primary focus of this study was the years 1983 and 1984. This study identified the changes that occurred, the catalysts for the changes, the key figures who effected the changes and how the changes affected ACEJMC. Changes included modifications in the agency's name, voting policy, appeals procedure, accrediting standards, organizational structure and financial management. Catalysts were external pressure from the U.S. Department of Education, changing trends in journalism education and in the marketplace, a demand for more openness in Council matters and the get-things-done leadership style of ACEJMC President Joseph Shoquist. Among the many key figures who effected the changes were Council presidents Don Carter and Joseph Shoquist, as well as Executive Director Roger Gafke. Transcripts of oral interviews with each of these men are included as appendices to this study. The changes that occurred in ACEJMC's personnel, policy and procedures from 1970-1985 put the Council in a position to administer journalism accreditation throughout the end of the twentieth century.
ACCUMULATION OF NUCLEAR FISSION PRODUCTS BY VEGETABLE CROPS AND THEIR REMOVAL DURING PROCESSING.
WEAVER, CONNIE MARIE., The Florida State University
ACHIEVEMENT AND ATTITUDES OF INTERMEDIATE AGE CHILDREN IN GRADES FOUR, FIVE, AND SIX RELATIVE TO THE READING COMPREHENSION OF POETRY.
HAYFORD, JANE MORRIS., Florida State University
Whether intermediate-age children like poetry and whether boys and girls express themselves in a similar manner regarding their likes and dislikes of poetry were two of the research questions addressed in this research. Two other research questions were concerned with the possible existence of a difference in reading achievement among intermediate grades and between sexes in ability to comprehend general reading material and in ability to comprehend poetry as well as in attitude toward reading prose and in attitude toward reading poetry. The last two primary research questions were concerned with whether teachers could predict how students would perform on a poetry test and whether students' expressed preferences for poems would correlate with their performance when reading and answering questions on those poems. Upon analyzing the obtained data, it was found that children expressed favorable attitudes toward poetry. With the exception of boys in the sixth grade, boys and girls both expressed positive attitudes more frequently than was expected, making categorized data statistically significant. Reading comprehension achievement showed statistically significant differences by grades for prose and for poetry, but there were differences by grade by sex only on achievement when reading poetry. Regarding attitudes toward reading, there were no differences by grades or by grades by sex toward reading prose; there were differences expressed by students by grades and the main effect of grades only on the two-way ANOVA by grade by sex toward reading poetry. Teachers were able to predict student performance in comprehension on a poetry test but student preferences were not correlated with student performance on their choices for best-liked and least-liked poems. One hundred and ninety-four students participated in the research with a randomly selected group of 62 students from the three grades who participated in the Q-sort for poem preferences.
ACHIEVEMENT AND EQUITY IN PUBLIC AND PRIVATE SECONDARY SCHOOLS: AN ANALYTICAL AND EMPIRICAL RESPONSE TO THE CONTINUING DEBATE.
BICKEL, ROBERT NORMAN., Florida State University
In 1966, James Coleman and his associates published a controversial monograph entitled Equality of Educational Opportunity. The two most durable conclusions reported in this still-influential application of the input-output model of school effectiveness were as follows: schooling is ineffective as an agency of social mobility, and one school is about as effective as another in promoting academic achievement. In 1982, however, Coleman and a new set of colleagues published a comparison of public and private high schools, entitled High School Achievement. In contrast with Coleman's earlier work, Coleman, Hoffer, and Kilgore concluded that some high schools are able to promote social mobility, and some high schools are superior to others in promoting academic achievement. Generally, Coleman, Hoffer, and Kilgore concluded, private high schools are superior to public high schools on both counts. My review of the input-output literature provides the perspective needed for an improved empirical response to both issues, and for reconciling the differences between Equality of Educational Opportunity and High School Achievement. I use Scholastic Aptitude Test (SAT) and College Board Achievement Test (CBAT) data to compare all public and private high schools in Florida in 1982-83 and 1983-84, and in the U.S. in 1983-84. Using multiple regression analysis, I find that public and private high schools are equally effective in promoting achievement in English and American history. Public schools, however, enjoy a small but consistent advantage in promoting mathematics achievement. With regard to English, mathematics, and American history achievement, I find no differences between public and private high schools in facilitating social mobility by severing ties between achievement and socially ascribed traits, such as family income and race. My analyses are superior to previous work by Coleman and others in that I more adequately deal with selectivity bias, regression model specification, curriculum sensitivity of outcome measures, and stability of results from one data set to another.
ACHIEVEMENT MOTIVATION THROUGH GROUP HYPNOTHERAPY.
HUNT, WILLIAM M., The Florida State University
ACHIEVEMENT OF TRAINABLE MENTALLY RETARDED STUDENTS AS RELATED TO STUDENT/STAFF RATIOS.
ZAMMIT, STEPHEN JAMES., The Florida State University
Helminth lifespan interacts with non-compliance in reducing the effectiveness of anthelmintic treatment
Sam H. Farrell & Roy M. Anderson
Parasites & Vectors volume 11, Article number: 66 (2018)
The success of mass drug administration programmes targeting the soil-transmitted helminths and schistosome parasites is in part dependent on compliance to treatment at sequential rounds of mass drug administration (MDA). The impact of MDA is vulnerable to systematic non-compliance, defined as a portion of the eligible population remaining untreated over successive treatment rounds. The impact of systematic non-compliance on helminth transmission dynamics - and thereby on the number of treatment rounds required to interrupt transmission - is dependent on the parasitic helminth being targeted by MDA.
Here, we investigate the impact of adult parasite lifespan in the human host and other factors that determine the magnitude of the basic reproductive number R0, on the number of additional treatment rounds required in a target population, using mathematical models of Ascaris lumbricoides and Schistosoma mansoni transmission incorporating systematic non-compliance. Our analysis indicates a strong interaction between helminth lifespan and the impact of systematic non-compliance on parasite elimination, and confirms differences in its impact between Ascaris and the schistosome parasites in a streamlined model structure.
Our analysis suggests that achieving reductions in the level of systematic non-compliance may be of particular benefit in mass drug administration programmes treating the longer-lived helminth parasites, and highlights the need for improved data collection in understanding the impact of compliance.
The neglected tropical diseases (NTDs) have become an increasing focus of research over the past decade [1]. The soil-transmitted helminths (STH), a group of nematode parasites, and schistosomiasis, caused by the Schistosoma blood flukes, both impose a considerable health burden on poor populations in the developing world [2]. STH infections are acquired through ingestion of eggs or larvae, via unwashed hands, contaminated food or eating utensils, or through penetration of the skin by hookworm larvae. Mild infections may be symptomless but heavier parasite loads are thought to contribute to nutritional deficiencies (which may become severe) as well as serious gastro-intestinal issues, and may cause complications requiring surgical intervention. Schistosomiasis is acquired through contact with contaminated water sources, where infective cercarial larvae penetrate the skin. Infection may induce various symptoms depending on the Schistosoma species including anaemia and nutritional deficiencies, intestinal problems and liver damage leading to portal hypertension and a range of symptoms in genital and reproductive systems in both women and men. Low-cost and safe drug interventions are available to treat STH and schistosome infections which enables broad-scale mass drug administration (MDA) programmes to effectively reduce the population burden of infection and concomitant infection. Current WHO treatment guidelines for STH and schistosome infections aim primarily for effective control of morbidity in school-aged children (STH and schistosomes) and women of reproductive age (STH) [2, 3]. Studies to determine the feasibility of interrupting STH transmission by MDA alone in endemic areas are underway [4, 5]. Regardless of the goal of mass treatment, sufficient coverage of the at-risk population is critical to success; WHO targets are to achieve coverage of 75% or more.
Much thought is now being given to the potential role of treatment patterns amongst individuals, as well as population MDA coverage. The focus has begun to shift to the question of individual compliance to treatment over successive MDA rounds, as well as how many are treated. If the same eligible individuals are left untreated over successive rounds, a reservoir of infection may be created hampering the effectiveness of MDA in the population as a whole [6, 7].
We have previously demonstrated that in a stochastic individual based model of helminth transmission and control by MDA, the degree of impact of this systematic non-compliance with treatment on the dynamics of transmission varies between Ascaris lumbricoides (roundworm, an STH) and Schistosoma mansoni species [8]. We hypothesised that a critical factor in the impact of systematic non-compliance is parasite lifespan, as intuitively a longer adult parasite lifespan in the human host implies a greater impact of untreated individuals. However, a comparative study between the two distinct diseases is not well suited to isolation of the impact of this single factor. In addition, the difficulties of pinpointing time of acquisition and death of individual worms, and the necessity of treating known infection wherever possible, make parasite lifespan in the human host difficult to measure in practice [9]. Careful consideration of its impact is therefore warranted.
In this paper we address these issues directly and investigate dynamic interactions between treatment compliance and estimated parasite lifespan, varying the interdependent variables controlling transmission intensity (the basic reproductive number R0) to gain a clearer picture of the factors affecting the impact of systematic non-compliance. In order to ensure cross-disease applicability of any findings we again investigate transmission of both A. lumbricoides and S. mansoni parasites.
The stochastic individual-based model implementation used in the work motivating this study permits probabilistic forecasts of parasite transmission elimination, as well as allowing for heterogeneity in individual human host compliance with treatment at each round of MDA [8]. This comparative analysis showed different responses to systematic non-compliance between diseases. Here, using a more straightforward deterministic implementation [10, 11], we focus more closely on the critical parameters that interact with compliance in two helminth diseases separately; analysing differences in model outcomes if parasite lifespan were to be longer or shorter than our estimates. The mean prediction of the stochastic model converges to the deterministic model predictions as population sample size increases [8].
Stated in terms of the mathematics of dynamical systems, the models' long-term behaviour in the absence of disturbance (such as through treatment) is determined by the existence of two attractors. One is the endemic state. The other is a state of zero infection. The dioecious nature of the parasites means that for reproduction a male and female must both be present in a human host, producing a breakpoint in transmission dynamics. Once parasite abundance in the community is reduced below a certain level transmission intensity will be inadequate to sustain the parasite population and will decay to the extinction state without further intervention. The position of the breakpoint is heavily dependent on the degree of parasite aggregation within the human host population, as measured inversely by the negative binomial parameter k [12]. As a metric to assess how parasite lifespan in the human hosts impacts the importance of compliance in MDA rounds we choose the minimum number of treatment rounds required to break parasite transmission, i.e. to cross this transmission breakpoint.
A key parameter determining observed epidemiological patterns for all helminth parasites is the basic reproductive number R0. For macroparasites that are dioecious it is defined as the average number of female offspring produced by an adult female worm in the human host that themselves survive to reproductive maturity [12]. By definition, transmission cannot continue if R0 falls below unity in value. However, the magnitude of R0 must be a little above unity in value given the impact of the sexual mating function for dioecious parasites (which are either polygamous or monogamous) on the breakpoint in transmission, the details of which are described in previous publications [10,11,12].
The magnitude of R0 is determined by many population parameters in the parasite's life cycle. As shown previously [11], one definition for STH parasites is given in Eq. 1 below:
$$ R_0=\frac{z\lambda\psi}{\mu_2\,\overline{a}}\int_{a=0}^{\infty}\rho(a)\,S(a)\int_{x=0}^{a}\beta(x)\,e^{-\sigma(a-x)}\,dx\,da $$
The key parameters contributing to R0 include σ, which defines the adult parasite death rate in the human host (1/σ defines adult parasite life expectancy), the per adult parasite egg production rate in the human host λ, the severity of density dependence in parasite fecundity in the human hosts z, the instantaneous rate ψ at which infectious stages (eggs or larvae) enter the environmental reservoir, the mean human host age \( \overline{a} \), and the death rate of infectious material in the environmental reservoir µ2 (where 1/µ2 is infectious stage life expectancy) [10, 11]. The term S(a) represents the probability that a human host survives to age a (S(a) is the survival function), ρ(a) defines the relative contribution of host age group a to the infectious pool of parasite eggs or larvae and β(a) is the host age-dependent infection rate from this pool of infectious material [11]. The age structure of the host population requires us to consider how human hosts of different ages contribute to and have contact with infectious eggs or larvae in the environment. The age-dependent functions can be estimated from epidemiological data recording age specific parasite intensity and prevalence patterns [11]. Note that while the R0 equation is critical in transmission intensity, these same parameters are found elsewhere in our model and so model behaviour cannot be derived from Eq. 1 alone.
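For intuition, Eq. 1 can be evaluated numerically once the age-dependent functions are specified. The short Python sketch below does this under deliberately simplified assumptions that are not taken from the paper: illustrative placeholder values for z, λ, ψ and µ2, an exponential host survival function, and ρ(a) = β(x) = 1. It only illustrates how R0 responds when σ is varied with everything else held fixed.

```python
import numpy as np
from scipy.integrate import dblquad

# Illustrative placeholder parameters (not the paper's estimates)
z, lam, psi, mu2, a_bar = 0.96, 0.1, 1.0, 26.0, 18.0
mu_host = 1.0 / a_bar  # death rate for an exponential host survival function S(a)

def R0(sigma, rho=lambda a: 1.0, beta=lambda x: 1.0):
    """Numerically evaluate Eq. 1 under the simplifying assumptions above."""
    # dblquad integrates f(inner, outer): inner variable x in [0, a], outer a in [0, inf)
    f = lambda x, a: rho(a) * np.exp(-mu_host * a) * beta(x) * np.exp(-sigma * (a - x))
    integral, _ = dblquad(f, 0.0, np.inf, lambda a: 0.0, lambda a: a)
    return (z * lam * psi) / (mu2 * a_bar) * integral

# A longer adult worm lifespan (smaller sigma) raises R0 if nothing else changes
for sigma in (2.0, 1.0, 0.5):  # mean adult lifespans of 0.5, 1 and 2 years
    print(f"sigma = {sigma:4.1f} per year -> R0 = {R0(sigma):.3f}")
```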
Two compliance settings are examined: random compliance (people are treated at random at each round of MDA) and systematic non-compliance (people are either always treated at every round, or never). In order to minimise any interactions of compliance-related dynamics with host age, we treat all age groups as having the same coverage (though in reality typically not all age groups are eligible). Treatment is assumed to be annual. Treatment coverage is fixed at 75% of the total population at each round, in accordance with WHO targets [2, 3]. This means that in the systematic non-compliance settings, 25% of the population never receives treatment. The Ascaris parasite death rate σ is set at 1/year (i.e. mean lifespan 1 year) [12] and the S. mansoni death rate σ is set at 0.1754/year (i.e. mean lifespan 5.7 years) [13]. Parameters are otherwise as described by Farrell et al. [8].
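To make the two compliance settings concrete, the following sketch (Python again, with a purely hypothetical population of 1,000 hosts at 75% coverage) contrasts how treated individuals are selected at each round, and shows why only the systematic setting leaves a fixed never-treated group.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hosts, coverage, n_rounds = 1000, 0.75, 5

def treated_random():
    """Random compliance: a fresh 75% of hosts is drawn at every round."""
    return rng.random(n_hosts) < coverage

never_treated = rng.random(n_hosts) >= coverage  # fixed 25% of systematic non-compliers

def treated_systematic():
    """Systematic non-compliance: the same 25% is missed at every round."""
    return ~never_treated

untreated_rand = np.ones(n_hosts, dtype=bool)  # hosts never treated so far
untreated_syst = np.ones(n_hosts, dtype=bool)
for _ in range(n_rounds):
    untreated_rand &= ~treated_random()
    untreated_syst &= ~treated_systematic()

print("never treated after 5 rounds, random:    ", untreated_rand.mean())  # about 0.25**5
print("never treated after 5 rounds, systematic:", untreated_syst.mean())  # about 0.25
```

Under random compliance the never-treated fraction shrinks geometrically (0.25^5, roughly 0.1%), whereas under systematic non-compliance it stays near 25%; it is this persistent untreated group that acts as the reservoir of infection discussed in the Results and Discussion.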
Note from the definition of R0 in Eq. 1 that model parameters are interdependent in determining its overall value as a measure of transmission success or intensity. As such, any changes to parasite lifespan must include concomitant adjustment to at least one other parameter to keep R0 constant. Numerical analyses are performed to assess how adjusting σ (the parasite death rate, i.e. the inverse of parasite lifespan) - while also adjusting in turn the parameters λ, µ2 or ψ in order to hold R0 constant - influences how many treatment rounds are required for elimination in the two compliance settings. Additionally we adjust R0 itself, while holding all parameters other than σ constant. This analysis is performed separately for a short-lived (Ascaris lumbricoides) and a longer-lived (Schistosoma mansoni) parasite.
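Because λ, ψ and 1/µ2 enter Eq. 1 as simple multiplicative factors, the compensating adjustment is just a rescaling. The sketch below makes this explicit for the special case ρ(a) = β(x) = 1 with exponential host survival (the same simplification as above, and again not the paper's parameterisation), where the double integral collapses to 1/(µ(µ + σ)) and the rescaling factor can be written in closed form.

```python
# Under rho(a) = beta(x) = 1 and exponential host survival with death rate mu_host,
# Eq. 1 reduces to R0 = (z * lam * psi) / (mu2 * a_bar) * 1 / (mu_host * (mu_host + sigma)),
# so R0 is proportional to lam / (mu_host + sigma) when everything else is fixed.
a_bar = 18.0               # illustrative mean host age (years)
mu_host = 1.0 / a_bar

def lambda_rescale(sigma_old, sigma_new):
    """Factor by which egg output lam must be multiplied to keep R0 unchanged
    when the adult parasite death rate moves from sigma_old to sigma_new."""
    return (mu_host + sigma_new) / (mu_host + sigma_old)

baseline_sigma = 1.0       # Ascaris-like mean adult lifespan of 1 year
for factor in (0.5, 1.5):  # death rate decreased or increased by 50%, as in Fig. 1
    scale = lambda_rescale(baseline_sigma, baseline_sigma * factor)
    print(f"sigma x {factor}: rescale lambda by {scale:.3f} (psi or 1/mu2 could be used instead)")
```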
A systematic non-compliance setting, in which a portion of the population is never treated, increases the number of treatment rounds required before parasite transmission is interrupted. Our predictions here (Fig. 1) and elsewhere [8] are consistent with this conclusion. The number of additional treatment rounds required in simulation is a measure of the impact of systematic non-compliance under varying conditions.
The minimum number of annual treatment rounds required for interruption of transmission. The number of annual MDA treatments required before the transmission breakpoint is crossed, in random compliance and systematic non-compliance settings. Treatment set at 75% coverage in all age groups. The parasite death rate σ has been decreased or increased by 50%, corresponding to a lengthened or shortened parasite lifespan, respectively. In turn each of the other key terms in the transmission intensity (R 0 ) equation have been adjusted accordingly in order to assess how each influences the impact of the two different compliance patterns. ∞ elimination is impossible regardless of the number of annual treatment rounds. *after crossing the breakpoint infection remains for a long period (several decades) before eventual elimination
We find a strong and consistent interaction between parasite lifespan and compliance setting, while controlling R0 by adjusting other parameters or allowing R0 itself to vary. This applies to both parasites studied. In all simulations a longer estimated parasite lifespan is associated with a much greater number of additional rounds required to overcome the effects of systematic non-compliance. A shorter lifespan is associated with fewer additional treatment rounds in the schistosome, and proportionally fewer in Ascaris (with respect to a slightly higher number of treatment rounds required for elimination in a random setting). R0 is the key factor in the model's transmission intensity, and inspection of Eq. 1 indicates that adjustments to λ, ψ and µ2 should have comparable effects. This is the case; the slight discrepancy in outcome when adjusting µ2 is due to this parameter's additional appearance elsewhere in the model [11].
Consistent with the predictions of a more complex stochastic individual-based implementation of this model we have already reported [8], we see a difference in behaviour between the two types of parasitic helminth infections. The impact of systematic non-compliance is much larger for the longer lived Schistosoma mansoni infection. In the presence of 25% systematic non-compliance, a higher estimate of this parasite's lifespan would imply decades of annual treatment before the transmission breakpoint is reached (this study). In contrast, the difference between random and systematic non-compliance settings is generally rather lower for Ascaris, the shorter lived parasite. Though always a hindrance to elimination, the impact of systematic non-compliance is highly dependent on parasite species.
This strongly supports the conclusion that the difference in the impact of non-compliance between the two parasitic infections is due to the substantially longer schistosome parasite lifespan in the human host.
In general terms, compliance with treatment in patients suffering from chronic diseases averages 50% in developed countries, and is likely considerably lower in some developing countries with very poor healthcare systems and fewer resources [14]. A recent review of treatment uptake among the neglected tropical diseases covering STH, lymphatic filariasis, schistosomiasis, onchocerciasis, and trachoma found widely varying definitions of "compliance" and treatment coverage, and reported rates of compliance ranging from 19.5% to 99% [7].
Shuford et al. [7] note the much greater availability of individual compliance data with respect to MDA treatment for lymphatic filariasis and onchocerciasis, with a more limited number of studies of compliance with STH and schistosome treatment. Both Shuford et al. [7] and a recent review of community engagement in anti-malarial MDA [15] report concerns regarding inconsistent and unclear definitions of coverage and adherence. Estimates of systematic non-compliance require longitudinal surveys to ascertain treatment compliance at the individual level over successive rounds. In general, cost is a major disincentive within monitoring and evaluation programmes connected to MDA, and no such studies have yet been published with respect to STH or schistosomiasis. However, the need for these data has motivated collection of relevant individual-level longitudinal compliance information in the STH-focused DeWorm3 study currently being conducted [7].
Helminth infections that go untreated during an MDA programme may effectively create an inaccessible reservoir of infection in the human population that releases infective stages to sustain parasite transmission in the entire population. In the longer-lived helminth parasites such as schistosome and the filarial worms [12, 16] such an effect would be more pronounced given the longer parasite lifespan in the untreated people creating a sustained output of infective stages.
Predictions of an individual stochastic model indicate a potentially critical role for lifespan in interrupting parasite transmission when in combination with systematic non-compliance [8]. A variable impact of systematic non-compliance has also been identified in a stochastic model of lymphatic filariasis, in which impact is correlated with other factors including exposure to infective mosquito bites and use of insecticide-treated bednets [17]. In this paper we have extended these analyses to assess in detail how altering lifespan (and other parameters that contribute to transmission intensity) influences the effect of persistent non-compliance.
The average helminth parasite lifespan in the human host is difficult to measure accurately and published data compilations depend on fragmentary reports of egg excretion or microfilaremia post departure from endemic regions in the absence of reinfection [12]. Furthermore, an average lifespan in the human host does not provide information regarding a parasites' individual developmental details such as time to maturation and hence egg production post-entry to the definitive host, or variation in egg production as worms age in the human host. With respect to modelling population-level transmission between hosts (who will generally be infected with a number of parasites), these details are crudely described in our models in the absence of more precise population biological data. Nevertheless, uncertainty over the length of these parasites' lifespans in the human host adds to the need to take a close look at the sensitivity of model outcomes with respect to parameter assignments for this attribute.
We have shown that increases in parasite lifespan in the human host in a model of helminth transmission, do exacerbate the effect of systematic non-compliance with treatment in the population. Decreases to lifespan have the opposite effect, facilitating interruption of transmission. These effects persist despite accounting for concomitant changes elsewhere in the models' parameters. Bearing in mind the difficulty of accurately estimating parasite lifespan in the human host, if these were somewhat longer than our default "standard" estimate it would be likely that interruption of S. mansoni transmission - when in the presence of even a modest level of systematic non-compliance - is substantially more difficult in a practicable time-frame than existing models would otherwise suggest.
Although a simplified, purely systematic compliance setting has been employed here in focussing on the transmission dynamics, more complex treatments of individual compliance somewhere between the two limits we examined are possible which give finer control over the precise degree of non-compliance over time [18, 19]. When data becomes available from studies such as DeWorm3 and TUMIKIA [4], measured compliance probability distributions derived from multiple rounds of MDA can be examined with respect to a variety of confounding effects such as age and gender.
The sensitivity of the interaction between parasite lifespan and compliance with treatment requires that future modelling studies of anthelmintic MDA treatment impact take into account uncertainties around estimates of lifespan. Quantitative analyses here confirm a hypothesised strong interaction between helminth parasite lifespan in the human host and the impact of systematic non-compliance on parasite transmission. This suggests that the effectiveness of mass drug administration programmes targeting the longer-lived helminths (such as the filarial worms and the schistosome parasites) may be particularly improved through efforts to reduce systematic non-compliance. Further data, particularly longitudinal studies measuring individual compliance over multiple rounds of MDA stratified by age and gender, are critical as efforts are made towards elimination of STH and schistosomiasis. With this, and standardisation of terminology in reporting of MDA coverage and compliance, we can improve our understanding of the impact of current MDA based control programmes for the helminth NTDs and ultimately improve the design, monitoring and evaluation of public health measures.
World Health Organisation. Accelerating work to overcome the global impact of neglected tropical diseases : a roadmap for implementation : executive summary. 2012. http://apps.who.int/iris/handle/10665/70809. Accessed 17 Nov 2017.
World Health Organisation. Helminth control in school age children: a guide for managers of control programmes - 2nd ed. 2011. http://apps.who.int/iris/bitstream/10665/44671/1/9789241548267_eng.pdf. Accessed 17 Nov 2017.
World Health Organisation. Guideline: preventive chemotherapy to control soil-transmitted helminth infections in at-risk population groups. 2017. http://apps.who.int/iris/handle/10665/258983. Accessed 17 Nov 2017.
Brooker SJ, Mwandawiro CS, Halliday KE, Njenga SM, Mcharo C, Gichuki PM, et al. Interrupting transmission of soil-transmitted helminths: a study protocol for cluster randomised trials evaluating alternative treatment strategies and delivery systems in Kenya. BMJ Open. 2015;5(10):e008950.
Ásbjörnsdóttir KH, Means AR, Werkman M, Walson JL. Prospects for elimination of soil-transmitted helminths. Curr Opin Infect Dis. 2017;30(5):482–8.
Esterre P, Plichart C, Sechan Y. The impact of 34 years of massive DEC chemotherapy on Wuchereria bancrofti infection and transmission: the Maupiti cohort. Tropical Med Int Health. 2001;6(3):190–5.
Shuford KV, Turner HC, Anderson RM. Compliance with anthelmintic treatment in the neglected tropical diseases control programmes: a systematic review. Parasit Vectors. 2016;9:29.
Farrell SH, Truscott J, Anderson R. The importance of patient compliance in repeated rounds of drug treatment for elimination of intestinal helminths. Parasit Vectors. 2017;10:291.
Anderson RM, Turner HC, Farrell SH, Truscott JE. Studies of the transmission dynamics, mathematical model development and the control of schistosome parasites by mass drug administration in human communities. Adv Parasitol. 2016;94:199–246.
Anderson R, Medley G. Community control of helminth infections of man by mass and selective chemotherapy. Parasitology. 1985;90:629–60.
Truscott JE, Turner HC, Farrell SH, Anderson RM. Soil-transmitted helminths: mathematical models of transmission, the impact of mass drug administration and transmission elimination criteria. Adv Parasitol. 2016;94:133–98.
Anderson R, May R. Infectious diseases of humans: dynamics and control. Oxford: Oxford University Press; 1992.
Fulford AJ, Butterworth AE, Ouma JH, Sturrock RF. A statistical approach to schistosome population dynamics and estimation of the life-span of Schistosoma mansoni in man. Parasitology. 1995;110(3):307–16.
World Health Organisation. Adherence to long-term therapies : evidence for action. 2003. http://apps.who.int/iris/handle/10665/42682. Accessed 17 Nov 2017.
Adhikari B, James N, Newby G, von Seidlein L, White NJ, NPJ D, et al. Community engagement and population coverage in mass anti-malarial administrations: a systematic literature review. Malar J. 2016;15:523.
Brooker S, Bethony J, Hotez PJ. Human hookworm infection in the 21st century. Adv Parasitol. 2004;58:197–288.
Irvine MA, Reimer LJ, Njenga SM, Gunawardena S, Kelly-Hope L, Bockarie M, et al. Modelling strategies to break transmission of lymphatic filariasis - aggregation, adherence and vector competence greatly alter elimination. Parasit Vectors. 2015;8:547.
Dyson L, Stolk WA, Farrell SH, Hollingsworth DT. Measuring and modelling the effects of systematic non-adherence to mass drug administration. Epidemics. 2017;18:56–66.
Plaisier AP, Stolk WA, van Oortmarssen GJ, Habbema JDF. Effectiveness of annual ivermectin treatment for Wuchereria bancrofti infection. Parasitol Today. 2000;16(7):298–302.
The authors would like to thank James Truscott, Julia Dunn and Alison Ower for conversations on the study design and comments on the manuscript.
SF and RA gratefully acknowledge funding of the NTD Modelling Consortium by the Bill and Melinda Gates Foundation in partnership with the Task Force for Global Health. The views, opinions, assumptions or any other information set out in this article are solely those of the authors.
The datasets used and/or analysed during the current study available from the corresponding author on reasonable request.
London Centre for Neglected Tropical Disease Research, Department of Infectious Disease Epidemiology, St Mary's Campus, Imperial College London, London, W2 1PG, UK
Sam H. Farrell & Roy M. Anderson
SF conducted simulations and analysis. Both authors contributed to the manuscript. Both authors read and approved the final version of the manuscript.
Correspondence to Sam H. Farrell.
Roy Anderson is a Non-Executive Director of GlaxoSmithKline (GSK). GSK had no role in the funding of this research or this publication. Sam Farrell declares that he has no competing interests.
Farrell, S.H., Anderson, R.M. Helminth lifespan interacts with non-compliance in reducing the effectiveness of anthelmintic treatment. Parasites Vectors 11, 66 (2018). https://doi.org/10.1186/s13071-018-2670-6
Accepted: 23 January 2018
Soil-transmitted helminths
Mass drug administration
Systematic non-compliance
Transmission interruption
The LCNTDR Collection: Advances in scientific research for NTD control
Mathematical modelling and a systems science approach to describe the role of cytokines in the evolution of severe dengue
S. D. Pavithra Jayasundara, S. S. N. Perera, Gathsaurie Neelika Malavige & Saroj Jayasinghe
Dengue causes considerable morbidity and mortality in Sri Lanka. Inflammatory mediators such as cytokines contribute to its evolution from an asymptomatic infection to severe forms of dengue. The majority of previous studies have analysed the association of individual cytokines with clinical disease severity. In contrast, we view evolution to Dengue Haemorrhagic Fever as the behaviour of a complex dynamic system. We therefore analyse the combined effect of multiple cytokines that interact dynamically with each other in order to generate a mathematical model to predict occurrence of Dengue Haemorrhagic Fever. We expect this to have predictive value in detecting severe cases and improving outcomes. Platelet activating factor (PAF), sphingosine 1-phosphate (S1P), IL-1β, TNF-α and IL-10 are used as the parameters for the model. Hierarchical clustering is used to detect factors that correlate with each other. Their interactions are mapped using Fuzzy Logic mechanisms with the combination of modified Hamacher and OWA operators. Trapezoidal membership functions are developed for each of the cytokine parameters and the degree of unfavourability to attain Dengue Haemorrhagic Fever is measured.
The accuracy of this model in predicting severity level of dengue is 71.43% at 96 h from the onset of illness, 85.00% at 108 h and 76.92% at 120 h. A region of ambiguity is detected in the model for the value range 0.36 to 0.51. Sensitivity analysis indicates that this is a robust mathematical model.
The results show a robust mathematical model that explains the evolution from dengue to its serious forms in individual patients with high accuracy. However, this model would have to be further improved by including additional parameters and should be validated on other data sets.
Dengue is a mosquito-borne viral disease transmitted by female mosquitoes of the species Aedes aegypti and Aedes albopictus. In recent decades there has been a dramatic increase in dengue incidence around the world [1]. Each year around 500,000 people with severe dengue are hospitalized, with a large proportion being children. Of those affected, around 2.5% die [1]. Dengue has been a national concern in Sri Lanka, with several outbreaks occurring and the incidence and severity of these epidemics increasing [2]. Most infected persons are asymptomatic or develop dengue fever (DF), while a minority proceed to the serious forms of dengue, dengue haemorrhagic fever (DHF) or dengue shock syndrome (DSS), which can be fatal [3]. A key mechanism of severity is leakage of fluid from blood vessels to surrounding tissues and the resultant drop in volume within the vascular compartment, leading to hypotension. This occurs for about 48 h and is referred to as the critical phase [4]. At present there are no specific drugs against the illness. Therefore, early clinical diagnosis and careful body fluid management are critical to the care of the severely ill [5]. In relation to early diagnosis, attempts have been made to identify early markers of dengue and cytokines that predict severity [6–8].
Increased vascular permeability is a main cause of DHF, and cytokines, inflammatory lipid mediators and dengue NS1 antigen are thought to contribute significantly to this increase in vascular permeability [9–11]. Hence, several studies have attempted to identify the relationships between cytokines and dengue. In this study, we have attempted to use several cytokines and other inflammatory mediators to develop a mathematical model to predict the likelihood of developing DHF. For this model we have chosen three cytokines and two inflammatory lipid mediators due to their association with vascular leak in dengue and also with severe clinical disease. Sphingosine 1-phosphate (S1P) is a signalling lipid mediator and is considered to be important in maintaining endothelial barrier integrity [12]. Levels of S1P were found to be low in DHF patients, especially during the critical phase of acute dengue [13]. IL-1β has also been found to be associated with increased vascular permeability and is thought to be predominantly released from platelets in patients with acute dengue [14]. IL-1β has been shown to be released from dengue virus-infected monocytes, which is thought to be due to activation of the inflammasome [15, 16]. IL-10 levels have also been shown to be higher in patients with DHF, especially during secondary infections [17, 18]. In addition, it was recently shown that higher concentrations of NS1 antigen and serum IL-10 levels are associated with severe clinical disease in acute dengue infection [6, 19]. However, although IL-10 levels were found to be significantly higher in patients with DHF, IL-10 was not a good predictive marker when used alone due to its high variability [6]. Although TNF-α was initially found to be associated with DHF [20], more recent studies have again shown variable results [21, 22]. A main drawback of these studies is that they focus on the association of individual cytokines with clinical disease severity. However, when identifying markers of DHF it is important to take into consideration the dependencies and interactions of inflammatory mediators [23].
In recent times there has been an interest in the utility of a systems science approach that captures the combined and inter-related effects of multiple parameters in determining severity of illnesses [24]. Our study is an attempt to take a systems science view of severity and develop a mathematical model to capture the combined effect of multiple inflammatory mediators that are elevated in dengue. Therefore, in this study our objective is to develop a mathematical model that can detect patients proceeding to DHF level at an early stage by analysing the combined effect from the parameters sphingosine 1-phosphate (S1P), Interleukin- 1β (IL-1β), Tumor Necrosis Factor (TNF-α), Platelet Activating Factor (PAF) and Interleukin -10 (IL-10). It was recently shown that higher concentrations of NS1 antigen and serum IL-10 levels are associated with severe clinical disease in acute dengue infection [19]. The current study uses some of the published data and other sources to model the impact of multiple immune and other variables in predicting severity of dengue.
In our study a fuzzy logic based model is proposed to analyse the combined effect of inflammatory mediators to determine severity level of dengue. Fuzzy logic is now commonly used to model biological problems as it has the strength to handle imprecise information and uncertainties associated with decision making [25].
Preliminary analysis
The sample used for preliminary analysis and model validation consists of 11 adult patients with DF and 25 adult patients with DHF, recruited from the Colombo South Teaching Hospital, Sri Lanka. Model validation was supported through pre-existing data in [13] and [19]. The classification as to DF or DHF is performed according to 2011 WHO guidelines [1]. The patients in the sample are admitted at varying time points from onset of fever ranging from 72 to 144 h from onset of fever. Data are collected at several time points for a particular individual patient, each time point being 12 h apart. The number of times a patient is measured differs from individual to individual and hence there are missing values as not all time points are measured for all of the patients. Missing values are handled using multiple imputation.
Hierarchical clustering is performed on the parameter variables and the resulting dendrogram at 96 h from onset of fever is shown in Fig. 1. The clusters are formed at increasing levels of dissimilarity and squared Euclidean distance is used. SPSS statistical software is used to cluster the variables. It can be seen that S1P and IL-1β merge first, being the closest pair of clusters, and that TNF-α, IL-10 and PAF show similar behaviour, resulting in two main clusters; the same two clusters could be seen at 96 and 120 h from onset of illness for DHF patients. This clustering output is used in deciding how to aggregate parameters with the Hamacher operator.
Dendrogram resulting in hierarchical clustering performed on the five cytokine and inflammatory mediators S1P, IL-1β, TNF-α, PAF and IL-10 on DHF patients at 96 h from onset of illness. SPSS statistical software is used to cluster the variables and squared Euclidian Distance is used as the distance measure. In the model development these parameters are combined with the Hamacher product and OWA operator according to this clustering output
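The same clustering step can be reproduced outside SPSS. The sketch below uses SciPy on a hypothetical data matrix standing in for the DHF measurements at 96 h (one row per patient, one column per mediator); the squared Euclidean distance follows the paper, while the average-linkage choice is an assumption, since the linkage method is not stated in this excerpt.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

labels = ["S1P", "IL-1b", "TNF-a", "PAF", "IL-10"]
# Hypothetical stand-in data: replace with the measured mediator values
X = np.random.default_rng(1).normal(size=(25, 5))

# Cluster the variables (columns), not the patients, so work on the transpose
dist = pdist(X.T, metric="sqeuclidean")   # squared Euclidean distance, as in the paper
Z = linkage(dist, method="average")       # linkage method assumed, not stated in the text
tree = dendrogram(Z, labels=labels, no_plot=True)  # set no_plot=False to draw a Fig. 1-style tree
print(Z)  # merge order and the dissimilarity level at each merge
```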
Our model to determine dengue severity by analysing the combined effect of cytokines and inflammatory mediators is modelled using fuzzy logic concepts. With the ambiguity and vagueness associated with decision making in dengue and medicine in general, fuzzy logic is commonly used in modelling these phenomena as it has the ability to explain the uncertainties associated with these complex systems. Fuzzy models have the ability to handle these imprecise components and perform with high accuracy as fuzzy models are robust to variation in symptom parameters [26]. Dengue pathogenesis is complex and still not fully understood [27]. Disease severity can depend on dengue serotype [28] and it is believed that antibody dependent enhancement can also play a role in determining severity [29]. Disease diagnosis itself include uncertainties as symptoms could vary from patient to patient, similar symptoms can be common to various diseases and human reasoning itself is imprecise [25]. Therefore, strict rules as in classical logic is not suitable to handle biological problems that involve inherent uncertainty. Also, fully stochastic models cannot be adopted as the underlying probability distributions are unknown [30]. Fuzzy logic provides a platform to interpret vague human descriptions in natural linguistic terms and can successfully handle imprecision and uncertainty and is a useful modelling tool especially under limited data [31].
The study in [32] used a fuzzy expert system to detect asthma and chronic obstructive pulmonary disease. Using parameters such as fever, nocturnal symptoms and oral steroids, the model produced a scale of 1-10 to measure the severity level of asthma, tuberculosis and chronic obstructive pulmonary disease. A fuzzy expert system for diabetes has also been developed [33]. In this system triangular membership functions with Mamdani inference were used, and it achieved an accuracy of 85.03%, which was higher than previously developed methods to detect diabetes. Two approaches based on Artificial Neural Networks (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS) were used for identifying heart disease from a large data set of patients in [34]. In another study, the fuzzy system for detecting heart disease had a 94% accuracy relative to a medical expert's decision [35]. Similar fuzzy expert systems to diagnose liver disorders are also available [36]; one such expert system used triangular and trapezoidal membership functions and achieved an accuracy of 91%. A fuzzy IF-THEN rule-based study was carried out for the diagnosis of haemorrhage and brain tumour to determine the probability of disease [37]. All these fuzzy expert systems are rule based and use the MATLAB fuzzy logic toolbox; in contrast, our model does not use a rule-based approach but instead uses the fuzzy intersection (Hamacher product) and the Ordered Weighted Aggregation (OWA) operator.
Approach to model development
In classical logic every statement is either true or false. In medical diagnosis, however, it is not possible to make decisions based on these crisp distinctions. In fuzzy logic this strict convention is relaxed to allow partial membership in a set. The degree to which a particular member belongs to a set is denoted by the degree of fuzziness, and this is mapped through a fuzzy membership function [38]. Each element of a fuzzy set is mapped to a real number in the interval [0, 1].
The raw cytokine and inflammatory mediator values are 'fuzzified' through their respective membership functions. As the objective of this study is to consider the combined effect of the parameters, the Hamacher and OWA operators are selected as suitable fuzzy operators to combine them. The impact of the most important variables on the model is intensified through fuzzy 'concentration'. The model outputs a final value which measures the unfavourability to attain severe dengue. Based on this index value it can be decided whether the patient is a potential DF or DHF patient.
For technical details of fuzzy set, membership functions, Hamacher operator and concentration see Additional file 1 (A1).
Membership function development
The five inflammatory mediators S1P, IL-1β, TNF-α, PAF and IL-10 are analysed in combination. When developing membership functions, knowledge acquisition from interviews with medical experts is a common practice [31, 39, 40]. Furthermore, previous studies carried out to determine the influence of these individual cytokines on dengue disease severity are also used to determine membership values. This enabled us to develop our model independently of the sample data. Since the rate of change of cytokines is not significant over time, trapezoidal membership functions are used to 'fuzzify' the input parameter values. Trapezoidal membership functions are commonly used to model problems in biology because of their easy construction and interpretation [35, 36]. In our model the membership functions measure how unlikely it is to develop DHF.
Above 50% of patients with DHF have shown S1P levels below 0.5 μM at some time point in their illness, and only 10% of DF patients show S1P levels below 0.5 μM. For the membership function for S1P, the cut-off value for DHF patients is therefore chosen as 0.5 μM. It has also been shown that, compared with DF patients, DHF patients have significantly lower S1P levels throughout the course of illness [13]. The membership function for S1P is μ_S(x),
$$ \mu_S(x)=\begin{cases}0 & x\le 0.5\\ x-0.5 & 0.5<x<1.5\\ 1 & x\ge 1.5\end{cases} $$
DF patients have been shown to have IL-1β levels ranging from 0 to 33.7 pg/ml with a median of 30.5 pg/ml, and DHF patients an IL-1β range of 0–62.3 pg/ml with a median of 33.5 pg/ml [41]. Therefore, the membership function for IL-1β is μ_β(x),
$$ \mu_\beta(x)=\begin{cases}1 & x\le 30.5\\ \frac{33.5-x}{3} & 30.5<x<33.5\\ 0 & x\ge 33.5\end{cases} $$
The IL-10 concentration of DHF patients has shown a median of 110.8, SD ± 27.1 pg/ml, and that of DF patients a median of 15.5, SD ± 5.3 pg/ml; IL-10 levels are significantly higher in DHF patients than in DF patients [42]. The membership function for IL-10 is μ_10(x),
$$ \mu_{10}(x)=\begin{cases}1 & x\le 20\\ \frac{110-x}{90} & 20<x<110\\ 0 & x\ge 110\end{cases} $$
The mean value of TNF-α for DF patients is reported as 14.10, SD ± 24.0 pg/ml, and for DHF patients as 29.95, SD ± 39.5 pg/ml; TNF-α is higher in DHF and shock patients than in DF patients [43]. In the model the membership function for TNF-α is μ_α(x),
$$ \mu_\alpha(x)=\begin{cases}1 & x\le 15\\ \frac{30-x}{15} & 15<x<30\\ 0 & x\ge 30\end{cases} $$
PAF levels are found to be significantly higher in DHF patients [10]. In that study, 72% of DF patients never showed a rise in PAF level above 100 ng/ml; the median PAF level for DHF patients is 335.2 ng/ml while DF patients have a median value of 47.63 ng/ml. Therefore, the membership function for PAF in the model is μ_P(x),
$$ \mu_P(x)=\begin{cases}1 & x\le 10\\ \frac{100-x}{90} & 10<x<100\\ 0 & x\ge 100\end{cases} $$
The trapezoidal-shaped membership functions are illustrated in Fig. 2.
Model membership functions for S1P, IL-1β, IL-10 TNF-α and PAF
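Each of Eqs. (1)–(5) is a one-sided trapezoidal (piecewise-linear) map, so they are straightforward to code. The Python sketch below expresses them with a single ramp helper, using the breakpoints and units given above; it is reused in the algorithm sketch at the end of the Methods.

```python
def ramp(x, lo, hi, rising=True):
    """Piecewise-linear ramp between lo and hi, clipped to [0, 1]."""
    t = min(max((x - lo) / (hi - lo), 0.0), 1.0)
    return t if rising else 1.0 - t

# Degree of unfavourability to attain DHF, per mediator (Eqs. 1-5)
mu_S     = lambda x: ramp(x, 0.5, 1.5, rising=True)     # S1P (uM): low S1P favours DHF
mu_beta  = lambda x: ramp(x, 30.5, 33.5, rising=False)  # IL-1beta (pg/ml)
mu_10    = lambda x: ramp(x, 20.0, 110.0, rising=False) # IL-10 (pg/ml)
mu_alpha = lambda x: ramp(x, 15.0, 30.0, rising=False)  # TNF-alpha (pg/ml)
mu_P     = lambda x: ramp(x, 10.0, 100.0, rising=False) # PAF (ng/ml)

print(mu_S(1.0), mu_10(65.0))  # both 0.5, i.e. on the edge of the ambiguous zone
```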
Choice of fuzzy operator
Since the overall combined effect from S1P, IL-1β, TNF-α, IL-10 and PAF is considered to determine the severity level, the proposed operator must satisfy certain properties [44]. Let Y, Z be two cytokine parameters and U_Y(x), U_Z(x) measure their respective degrees of unfavourability to attain DHF. Then,
If Y is favourable to DHF and Z is favourable to DHF, then U_{Y∩Z}(x) < min(U_Y(x), U_Z(x)).
If U_Y(x) < U_Z(x) < 1, then the effect that a decrease of U_Y(x) has on U_{Y∩Z}(x) may depend on U_Z(x).
If U_Y(x) and U_Z(x) < 1, then the effect that an increase of the favourability level of Y has on U_{Y∩Z}(x) can be erased by a decrease of the favourability of Z.
Since the Hamacher product as defined in A1-(1) possesses these three properties, it is used in our model to combine the effect from the cytokine parameters. The Hamacher operator has previously been used successfully to combine the effect that rainfall and temperature can have on dengue disease transmission [30].
Development of the model
With clustering results as shown in Fig. 1 we are able to divide the five parameters mainly into two groups; one with S1P and IL-1β and the other with TNF-α, IL-10 and PAF. Therefore, the Hamacher product is separately used on the two main cytokine groups as shown in Eqs. (6), (7) and (8).
In the model the three parameters TNF-α, PAF and IL-10 are subjected to 'concentration', as previous studies clearly indicate that these parameters are significantly elevated in DHF patients compared with DF patients [10, 17, 43]. Therefore, in order to amplify the effect from these cytokines and to allocate them a higher weight in the model, the membership values of TNF-α, PAF and IL-10 are concentrated.
The Hamacher product between S1P and IL-1β is
$$ \mathrm{H}1=\frac{\mu_S(x)\ast {\mu}_{\beta}(x)}{\mu_S(x)+{\mu}_{\beta}(x)-{\mu}_S(x)\ast {\mu}_{\beta}(x)} $$
where μ S (x), μ β (x) are the membership values of S1P and IL-1β obtained from (1) and (2) respectively.
The Hamacher product between TNF-α and IL-10 is
$$ \mathrm{H}=\frac{\mu_{\alpha}(x)^{\gamma}\,\mu_{10}(x)^{\varphi}}{\mu_{\alpha}(x)^{\gamma}+\mu_{10}(x)^{\varphi}-\mu_{\alpha}(x)^{\gamma}\,\mu_{10}(x)^{\varphi}} \qquad \text{where } \gamma,\varphi>1 $$
where μ_α(x) and μ_10(x) are the membership values of TNF-α and IL-10 obtained from (4) and (3) respectively. Here TNF-α and IL-10 are concentrated by γ and φ respectively.
The Hamacher product between TNF-α, IL-10 and PAF is
$$ \mathrm{H}2=\frac{\mathrm{H}\,\mu_P(x)^{\delta}}{\mathrm{H}+\mu_P(x)^{\delta}-\mathrm{H}\,\mu_P(x)^{\delta}} \qquad \text{where } \delta>1 $$
where μ_P(x) is the membership value of PAF obtained from (5) and H is obtained in (7). Here the PAF values are concentrated by δ. The Hamacher operator value resulting from S1P and IL-1β (6) and the Hamacher operator value resulting from TNF-α, IL-10 and PAF (8) are combined through the OWA operator defined in A1-(2).
So the OWA operator used in the model is
$$ \mathrm{OWA}=\lambda \cdot \mathrm{MAXIMUM}\left(\mathrm{H}1,\mathrm{H}2\right)+\left(1-\lambda\right)\cdot \mathrm{MINIMUM}\left(\mathrm{H}1,\mathrm{H}2\right) $$
where λ is the OWA weight defined in A1-(3).
In the model the parameters TNF-α, PAF and IL-10 are concentrated by 1.1, 1.2 and 1.1 respectively. Accordingly, the model parameter values of γ, δ and φ are 1.1, 1.2 and 1.1 respectively. PAF is concentrated more than the other two as it plays a highly significant role in determining severity. Concentration is limited to a small amount as otherwise it would affect the operator values of DF patients.
The optimal λ value is determined by analysing the accuracy of the model for various values of λ. Table 1 summarizes these results. λ = 0.2 and λ = 0.3 make the DF operator values too biased towards DHF patients; in fact, when λ = 0.2, at 120 h from onset of illness 7 out of 8 DF patients are misclassified. When λ = 0.4, the model performs well when all three time points are considered and it is better at classifying DF patients. Therefore, the optimal λ is chosen as 0.4. Thus, from A1-(3) and A1-(4), by letting l = 0.3 and m = 0.8, the OWA weights chosen for the model are 0.4 and 0.6.
Table 1 λ values and model performance
The 'orness measure' of the model, calculated according to A1-(5), is 0.4. This means that the OWA operator does not work entirely as an AND operator; to some degree (orness measure of 0.4) it works as an OR operator. The Hamacher product acts as an AND operator and thus further reduces the operator values, making the model too biased towards DHF patients. By using the OWA operator to aggregate the Hamacher product of S1P and IL-1β with the Hamacher product of TNF-α, IL-10 and PAF, this over-intensification caused by the Hamacher product is compensated to a certain extent, providing a better way to distinguish DF and DHF patients.
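A1-(5) is not reproduced in this excerpt; assuming it is the standard Yager orness measure for an OWA weight vector, the quoted value of 0.4 can be checked directly for the weights (0.4, 0.6):
$$ \mathrm{orness}(W)=\frac{1}{n-1}\sum_{i=1}^{n}(n-i)\,w_i, \qquad \mathrm{orness}(0.4,\,0.6)=\frac{(2-1)(0.4)+(2-2)(0.6)}{2-1}=0.4 $$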
Construction of ambiguous region
The overall ambiguous region is determined using the ambiguous regions of the individual cytokines. In this region it cannot be determined specifically whether the patient is a DHF or a DF patient. The ambiguous levels of individual cytokines are determined using the cytokine values which result in membership values of around 0.5. As the membership functions are developed independently of sample data, the resulting overall ambiguous region also becomes independent of sample data. The individual ambiguous levels used for each parameter are 0.9–1.2 μM for S1P, 30.7–31 pg/ml for IL-1β, 17.5–19 pg/ml for TNF-α, 38–40.5 pg/ml for IL-10 and 48–50 ng/ml for PAF. Separate ambiguous levels of the cluster S1P, IL-1β and the cluster IL-10, PAF, TNF-α are given in Fig. 3(a) and (b) respectively, and the final ambiguous region of the model is displayed in Fig. 3(c).
Ambiguous region for the first cluster of cytokines (S1P, IL-1β) (a), the second cluster of cytokines (IL-10, TNF-α, PAF) (b), and the final ambiguous region in the model (c). The ambiguous region is indicated by the white colour region. This is the region in which it is not possible to make a precise decision as to whether the patient is severe or non-severe based on the model value
Algorithm of the fuzzy decision support system
INPUT - Input the fuzzy set for S1P, IL-1β, TNF-α, PAF and IL-10
OUTPUT - Operator value which measures unfavourability to attain DHF.
1. Input the crisp values (raw patient data) on cytokines S1P, IL-1β, TNF-α, PAF and IL-10.
2. Generate the fuzzy membership values for each cytokine using the respective membership functions.
3. Concentrate the membership values of TNF-α, PAF and IL-10 by 1.1, 1.2 and 1.1 respectively.
4. Obtain the Hamacher product (H1) of the variables S1P and IL-1β,
$$ \mathrm{H}1=\frac{\mu_{S}{(x)}\ast {\mu}_{\beta}{(x)}}{\mu_{S}{(x)}+{\mu}_{\beta}{(x)}-{\mu}_{S}{(x)}\ast {\mu}_{\beta}{(x)}} $$
where μ_S(x), μ_β(x) are the membership values of S1P and IL-1β respectively.
5. Obtain the Hamacher product (H) of the variables TNF-α and IL-10
$$ \mathrm{H}=\frac{\mu_{\alpha}{(x)}^{1.1}\ast {\mu}_{10}{(x)}^{1.1}}{\mu_{\alpha}{(x)}^{1.1}+{\mu}_{10}{(x)}^{1.1}-{\mu}_{\alpha}{(x)}^{1.1}\ast {\mu}_{10}{(x)}^{1.1}} $$
where μ_α(x), μ_10(x) are the membership values of TNF-α and IL-10 respectively.
6. Obtain the Hamacher product (H2) of the variable PAF and H
$$ \mathrm{H}2=\frac{\mathrm{H}\ast {\mu}_p{(x)}^{1.2}}{\mathrm{H}+{\mu}_p{(x)}^{1.2}-\mathrm{H}\ast {\mu}_p{(x)}^{1.2}} $$
where μ_p(x) is the membership value of PAF and H is obtained in step 5.
7. Obtain the OWA operator value of H1 and H2 with weights 0.4 and 0.6.
8. Output the final operator value measuring unfavourability to attain DHF.
MATLAB codes are provided in Additional File 2.
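For readers without MATLAB, the following is a minimal Python sketch of steps 1–8 above (illustrative only; it is not the authors' code from Additional file 2). The membership values are assumed to have already been obtained from the respective membership functions, so they enter as plain numbers in [0, 1].

```python
def hamacher(a, b):
    """Hamacher product T(a, b) = ab / (a + b - ab), with T(0, 0) = 0."""
    return 0.0 if a == b == 0 else (a * b) / (a + b - a * b)

def owa(h1, h2, lam=0.4):
    """OWA aggregation with lambda = 0.4: 0.4 * MAX(H1, H2) + 0.6 * MIN(H1, H2)."""
    return lam * max(h1, h2) + (1 - lam) * min(h1, h2)

def operator_value(mu_s1p, mu_il1b, mu_tnf, mu_il10, mu_paf):
    # Step 3: concentrate TNF-alpha, PAF and IL-10 by 1.1, 1.2 and 1.1.
    mu_tnf, mu_paf, mu_il10 = mu_tnf ** 1.1, mu_paf ** 1.2, mu_il10 ** 1.1
    h1 = hamacher(mu_s1p, mu_il1b)   # Step 4: H1 from S1P and IL-1beta
    h = hamacher(mu_tnf, mu_il10)    # Step 5: H  from TNF-alpha and IL-10
    h2 = hamacher(h, mu_paf)         # Step 6: H2 from H and PAF
    return owa(h1, h2)               # Steps 7-8: aggregate and output

# Hypothetical membership values, for illustration only:
print(round(operator_value(0.8, 0.7, 0.6, 0.9, 0.5), 3))
```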
Model operator regions
Three main regions are identified in the model: non-severe (DF), severe (DHF) and ambiguous. If the model output value is below 0.36 the patient is considered DHF, and if the model output value is above 0.51 the patient is considered DF. A region of ambiguity is detected in the model for the value range 0.36 to 0.51; in this region it cannot be determined whether the patient is a DHF or a DF patient.
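A trivial helper reflecting these regions (the 0.36 and 0.51 cut-offs are the values stated above; the function itself is an illustrative sketch):

```python
def classify(operator_value):
    """Map a model operator value to the region defined above."""
    if operator_value < 0.36:
        return "severe (DHF)"
    if operator_value > 0.51:
        return "non-severe (DF)"
    return "ambiguous"

print(classify(0.30), classify(0.45), classify(0.60))
```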
Model validation
The model is validated at 96, 108 and 120 h from onset of fever using the DF and DHF patients in the sample, as shown in Figs. 4, 5 and 6 respectively. Validation at these time points is justified because the critical phase often occurs after the third day of fever, usually around the fifth or sixth day of illness with defervescence [45]. The model is validated using sample data collected from Colombo South Teaching Hospital. The sample included data for S1P, IL-1β, TNF-α, IL-10 and PAF collected at various time points from onset of illness. The data collected from patients were given as input to the model, which then outputs a value that measures the disease severity level. This was done separately for 96, 108 and 120 h from onset of fever. Depending on this model output value it can be determined whether the patient is DF, DHF or in the ambiguous region. In the sample data set the medical experts had made a final diagnosis of each patient and had classified the patient as DF or DHF based on the 2011 WHO guidelines [1]. To assess the validity of the developed model, we compared the model output with the medical experts' result. If the model decision and the medical expert's decision are the same, it was considered a correct output from the model; if the two decisions differ, it was considered an incorrect decision from the model (a misclassification).
Model validation results for DHF (left) and DF (right) patients at 96 h from onset of fever. H1 refers to the Hamacher result of S1P and IL-1β and H2 refers to the Hamacher result of TNF-α, PAF and IL-10. In the scale, values closer to 1 (blue shaded area) represent the non-severe (DF) region and values closer to 0 (red shaded area) represent the severe (DHF) region. The ambiguous region, in which it is difficult to make a precise decision as to whether the patient is heading towards the severe or non-severe region, is indicated by the white region. The black dots represent the model output of each patient, and the severity level can be determined from the region into which the patient falls. For the figure on the left (DHF) we expect more patients to fall in the red shaded area, while for the figure on the right (DF) we expect more patients to fall in the blue shaded area
Model validation results for DHF (left) and DF (right) patients at 108 h from onset of fever. H1 refers to the Hamacher result of S1P and IL-1β and H2 refers to the Hamacher result of TNF-α, PAF and IL-10. The interpretation of the colours, regions and dots of this figure is the same as in Fig. 4
Using these validation results the accuracy of the model is determined. The accuracy of the model is calculated as
$$ \mathrm{Accuracy}=\frac{\mathrm{correctly}\ \mathrm{classified}\ \mathrm{DHF} + \mathrm{correctly}\ \mathrm{classified}\ \mathrm{DF}\ }{\mathrm{Total}\ \mathrm{DHF} + \mathrm{Total}\ \mathrm{DF}}. $$
When determining the accuracy, patients that fall in the ambiguous region are not counted as misclassified, as ambiguity does not imply an incorrect classification. The model's accuracy at 96, 108 and 120 h from onset of fever is displayed in Table 2.
Table 2 Model accuracy for 96, 108 and 120 h from onset of fever
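The accuracy computation can be sketched as follows (an illustrative Python snippet with hypothetical labels): model regions that match the expert diagnosis count as correct, ambiguous cases are neither correct nor misclassified, and the denominator is the total number of DHF and DF patients, as in the formula above.

```python
def accuracy(model_regions, expert_labels):
    """model_regions: 'DHF' / 'DF' / 'ambiguous'; expert_labels: 'DHF' / 'DF'."""
    correct = sum(1 for m, e in zip(model_regions, expert_labels) if m == e)
    return correct / len(expert_labels)

# Hypothetical example with five patients (one ambiguous, one misclassified):
print(accuracy(["DHF", "DHF", "ambiguous", "DF", "DHF"],
               ["DHF", "DHF", "DF", "DF", "DF"]))   # -> 0.6
```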
Both DHF and DF patients in the test sample are validated using the model. Table 3 illustrates the distribution of DF and DHF patients according to the region in which they are categorized by the model. At 120 h from onset of fever there are four DHF patients (21.05%) with model operator values above 0.6. However, when the model operator values over time for these patients are observed, it can be seen that at time points before 120 h the model had indeed identified them as DHF patients, revealing the model's capability to perform as an early predictive marker.
Table 3 Distribution of DF and DHF patients in severe, ambiguous and non-severe regions at 96, 108 and 120 h from onset of fever
Also, as seen from Table 3, a high percentage of DF patients falls into the severe and ambiguous regions. Furthermore, in four instances, patients with DF have shown operator values below 0.36. This indicates that the model is slightly biased towards moving patients to the severe region.
The model's behaviour over time is analysed for individual patients. Fig. 7 shows this behaviour for three DHF and three DF patients. On admission, most DHF patients, as seen in Fig. 7 (b) and (c), show an operator value in the ambiguous region or in the non-severe region, but as time progresses they move into the severe region and remain there for some time. However, as shown in Fig. 7 (a) and (c), as they reach their final time point they move back into the non-severe region. Therefore, from Fig. 7 it can be seen that the model indeed follows the expected clinical behaviour of DHF patients.
Change of model values over time for 3 DHF patients (a, b, c) and 3 DF (d, e, f) patients. Ambiguous region is shaded. Region above the shaded area is non-severe region and the area below is severe region
Sensitivity analysis is performed in order to determine how the model operator values and the categorization of patients would change when the degrees of fuzziness are changed. In Fig. 8 the boundary values of each of the membership functions are changed by a small amount and the behaviour of the lower and upper limits of the ambiguous region is analysed. The existing ambiguous region has a lower limit of 0.36 and an upper limit of 0.51. As can be seen from Fig. 8, when the boundary values of each membership function are changed by a small amount, the lower and upper limits of the ambiguous region do not change rapidly, indicating a robust model.
Change of the lower and upper boundary values of the ambiguous region when the cut-off (boundary) values of the membership functions are changed within a small range for the cytokines IL-1β (a), IL-10 (b), PAF (c), S1P (d), TNF-α (e). The blue line represents the lower level of the ambiguous region and the red line represents the upper level of the ambiguous region. The behaviour of the ambiguous region for a change in the lower cut-off value of the membership function is displayed on the left, and the behaviour for a change in the upper cut-off value is displayed on the right, of each panel (a), (b), (c), (d) and (e)
In the model the parameters TNF-α, PAF and IL-10 are concentrated by 1.1, 1.2 and 1.1 respectively. A sensitivity analysis is performed on the weights by which the parameters are concentrated, in order to determine how the ambiguous region changes accordingly. As can be seen from Fig. 9, when the concentration weights of the three parameters are changed by a small amount, the lower and upper limits of the ambiguous region do not change rapidly, indicating a robust model.
Behaviour of ambiguous region for a change in concentration weights for IL-10 (a), PAF (b), TNF-α (c). The blue line represents the lower level of the ambiguous region and the red line represents the upper level of the ambiguous region
From Figs. 8 and 9 it can be concluded that the model is robust and that the classification of patients as DF or DHF would not change when the model parameters are subjected to small changes from their existing values.
The model developed to predict the dengue severity performs well with considerable accuracy at all time points with the highest accuracy of 85.00% being achieved at 108 h from onset of fever. At 108 h from onset of fever, none of the DHF patients are misclassified or have fallen into the ambiguous region. This is important as at this time point the model does not succumb to the more serious error of misclassifying DHF patients. However, model performance at 96 h from onset of fever needs to be further improved as early detection would help clinicians to institute appropriate treatment before the patient enters the critical phase of infection [45]. At 96 h from onset of illness, 43.5% of DHF patients are classified as non-severe, though they are correctly classified at the next time point of 108 h. This discrepancy is likely to be due to the cytokine changes not being maximal at 96 h.
Also, in the model DF patients tend to be classified as either ambiguous or severe. Although our approach eliminates the possibility of classifying severe patients as non-severe, this is not ideal, because classifying non-severe patients as severe prevents optimal resource allocation. The model is biased towards DHF detection because of the use of the Hamacher product. The Hamacher product, being an intersection operation, intensifies the risk level when the combined effect of cytokines is considered. To reduce this over-intensification to a certain extent, and to provide a better way to distinguish between DF and DHF patients, the OWA operator is used, as it works with an 'orness measure' and compensates for the over-intensification caused by the Hamacher product.
The majority of previous studies that have analysed the association of cytokines and inflammatory mediators with dengue severity have focused on the effect of individual cytokines [6, 10, 13, 17, 18, 43]. However, as discussed in the introduction section, it is important to consider the combined effect of cytokines, as the interactions, interdependencies and compensations between parameters can have a greater impact on disease severity than any single parameter analysed individually [23]. As the Hamacher product possesses these properties, it was used in our study to capture the cumulative effect of these parameters [44].
Several effective models have been developed based on fuzzy rules for the detection of dengue severity level. The Mamdani fuzzy inference system in [46] uses physical symptoms and laboratory reports as inputs. The clinical symptoms include fever, gastrointestinal symptoms, headache, body aches, skin rash, and retro-orbital pain. The system gives the output as "no dengue", "probable dengue" and "confirmed dengue". In the mobile application for dengue detection using fuzzy logic, the inputs are fever, skin rash, spontaneous haemorrhaging and the tourniquet test [39]. These symptom-based models are useful and work accurately, especially in a field setting. However, these symptoms are not specific to dengue (other diseases such as chikungunya have very similar symptoms), making diagnosis difficult in the presence of co-existing epidemics [5]. In contrast, our model is based on cytokines and is more applicable in a health care setting where blood sampling is available. It is based on our understanding of the pathogenesis: severe dengue affects the function of endothelial cells, and inflammatory mediators are known to play a role in dengue disease severity [6, 7, 9, 10, 13, 18]. Therefore, our model is more objective as it relies on blood measurements rather than on symptoms. To our knowledge this is the first attempt at developing a fuzzy logic based decision system for dengue severity prediction based on the combined interaction of cytokines and inflammatory mediators.
An ANFIS approach is used in [47] to construct diagnostic models using symptoms of dengue patients. In that study, an ANFIS model is first developed and then further improved by using a clustering algorithm; the model achieved an accuracy of 86.13%. ANFIS uses properties of ANNs in developing fuzzy membership functions. An ANN approach in [48] classified the risk of dengue patients with an accuracy of 96.27%. ANN and ANFIS techniques require a larger data set, as the data have to be divided into training and testing sets and the model is trained on the training set. The models based on these methods have achieved higher accuracy than our model. However, with the limited data set that we have (at 96 h from onset of fever, 17 DHF patients and 4 DF patients; at 108 h, 16 DHF patients and 4 DF patients; at 120 h, 19 DHF patients and 8 DF patients), an ANN or ANFIS method is not feasible. As we could not afford to use the sample data to develop the model, the model membership functions were determined from previous studies [10, 13, 41–43], making the model independent of the sample data. This gave us the opportunity to fully utilize the limited data set for model validation. Also, with our approach no overfitting occurs, as the model is independent of the data.
The Boruta algorithm, which works well on significantly larger data sets, was used in [23] to incorporate the effect of interdependency between cytokines. A classification and regression tree (CART) analysis performed on a cohort of Thai children analysed at 72 h from onset of illness achieved a 97% sensitivity in detecting patients who proceeded to DSS [49]. This decision tree algorithm used white blood cell count, percent monocytes, platelet count and haematocrit to make decisions. A CART decision tree based on clinical and laboratory parameters including platelets, IL-10 and lymphocytes resulted in a model with an accuracy of 84.6% for DHF and 84.0% for DF, and identified IL-10 and platelet counts as the most informative parameters [50]. Even with limited data, the fuzzy approach that we took achieved an overall accuracy of 85.00% at 108 h from onset of fever. If a much larger data set were available, a decision tree or ANN approach could have been adopted to better select how the variables are combined with the Hamacher and OWA operators, and the accuracy of our model could be further improved. However, the small data set that we have restricted us from using these machine learning techniques.
As we are working with a small sample size, in order to generalize our model performance we compared our model with previously developed models based on different techniques and also performed a sensitivity analysis. Sensitivity analysis is highly important when working with a small sample size, as in such limited samples a small change in a patient's decision can greatly affect the overall model performance. However, from Figs. 8 and 9 it can be seen that for a small change in the cut-off values of the membership functions and the concentration levels, the patient categorization remains unchanged. Therefore, even though we are working with a small sample size, the sensitivity results indicate that the model is robust to change.
Although this mathematical model performs with high accuracy and is robust, there are certain limitations and further improvements that can be incorporated into the model. As previous studies have shown that S1P levels are significantly correlated with platelet counts in DHF patients [10] and that IL-10 levels are significantly and inversely correlated with lymphocyte counts [6], the performance of the model when cytokines are modelled together with other clinical parameters such as lymphocyte and platelet counts needs to be further analysed. Also, for better generalization, the model needs to be further validated on other, larger data sets and on samples which include children, as the tested sample consisted only of adult patients and had only 11 DF patients.
This study is an attempt to build a mathematical model, to address the combined effect of cytokines and immune mediators S1P, IL-1β, TNF-α, PAF and IL-10, and determine the severity of dengue at an early stage. We developed a mathematical model using fuzzy logic operators, Hamacher and OWA operators. Our model is different from a majority of previous studies as, rather than considering the individual effect of cytokines, the combined effect from several cytokines is considered.
The model performs well in 96, 108 and 120 h from onset of fever and performs best with an accuracy of 85% at 108 h from onset of fever. With the high accuracy level of the model it could be used as a useful asset to determine patients proceeding to DHF level at an early stage, and thereby to reduce the mortality rate and make optimal use of available resources. However, the model's tendency to overestimate the risk of DF patients is a concern. Sensitivity analysis indicates that the model is robust.
DF:
Dengue fever
DHF:
Dengue haemorrhagic fever
IL-10:
Interleukin-10
IL-1β:
Interleukin-1β
PAF:
Platelet activating factor
S1P:
Sphingosine 1-phosphate
TNF-α:
Tumour necrosis factor-α
World Health Organization Regional Office for South-East Asia. Comprehensive guidelines for prevention and control of dengue and dengue haemorrhagic fever. Revised and expanded ed. New Delhi: WHO Regional Office for South- East Asia; 2011.
Sirisena PDNN, Noordeen F. Dengue control in Sri Lanka – challenges and prospects for improving current strategies. Sri Lankan J Infect Dis. 2016;6:2–16.
Bäck AT, Lundkvist Å. Dengue viruses – an overview. J Infect Ecol Epidemiol. 2013;03(1). http://dx.doi.org/10.3402/iee.v3i0.19839.
Malavige GN, Ogg GS. T cell responses in dengue viral infections. J Clin Virol. 2013;58:605–11.
World Health Organization and Special Programme for Research and Training in Tropical Diseases (TDR). Dengue guidelines for diagnosis, treatment, prevention and control: new edition. 2009.
Malavige GN, Gomes L, Alles L, Chang T, Salimi M, Fernando S, et al. Serum IL-10 as a marker of severe dengue infection. BMC Infect Dis. 2013;13:341.
Bozza FA, Cruz OG, Zagne SM, Azeredo EL, Nogueira RM, Assis EF, et al. Multiplex cytokine profile from dengue patients: MIP-1beta and IFN-gamma as predictive factors for severity. BMC Infect Dis. 2008;8:86.
Mangione JN, Huy NT, Lan NT, Mbanefo EC, Ha TT, Bao LQ, et al. The association of cytokines with severe dengue in children. Trop Med Health. 2014;42:137–44.
Appanna R, Wang SM, Ponnampalavanar SA, Lum LC, Sekaran SD. Cytokine factors present in dengue patient sera induces alterations of junctional proteins in human endothelial cells. Am J Trop Med Hyg. 2012;87:936–42.
Jeewandara C, Gomes L, Wickramasinghe N, Gutowska-Owsiak D, Waithe D, Paranavitane S, et al. Platelet activating factor contributes to vascular leak in acute dengue infection. PLoS Negl Trop Dis. 2015;9:2.
Beatty PR, Puerta-Guardo H, Killingbeck SS, Glasner DR, Hopkins K, Harris E. Dengue virus NS1 triggers endothelial permeability and vascular leak that is prevented by NS1 vaccination. Sci Transl Med. 2015;07:304.
Darwish I, Liles CW. Emerging therapeutic strategies to prevent infection-related microvascular endothelial activation and dysfunction. Virulence. 2013;04:572–82.
Gomes L, Fernando S, Fernando RH, Wickramasinghe N, Shyamali NLA, Ogg GS, et al. Sphingosine 1-phosphate in acute dengue infection. PLoS One. 2014;9:11.
Hottz ED, Medeiros-de-Moraes IM, Vieira-de-Abreu A, de Assis EF, Vals-de-Souza R, Castro-Faria-Neto HC, et al. Platelet activation and apoptosis modulate monocyte inflammatory responses in dengue. J Immunol. 2014;193:1864–72.
Callaway JB, Smith SA, McKinnon KP, de Silva AM, Crowe Jr JE, Ting JP. Spleen tyrosine kinase (Syk) mediates IL-1β induction by primary human monocytes during antibody-enhanced dengue virus infection. J Biol Chem. 2015;290:28.
Wu MF, Chen ST, Yang A, Lin WW, Lin Y, Chen N, et al. CLEC5A is critical for dengue virus–induced inflammasome activation in human macrophages. Blood. 2013;121:95–106.
Perez A, García G, Sierra B, Alvarez M, Vázquez S, Cabrera M, et al. IL-10 levels in dengue patients: some findings from the exceptional epidemiological conditions in Cuba. J Med Virol. 2004;73:230–4.
Green S, Vaughan D, Kalayanarooj S, Nimmannitya S, Suntayakorn S, Nisalak A, Ennis F. Elevated plasma interleukin-10 levels in acute dengue correlate with disease severity. J Med Virol. 1999;59:329–34.
Adikari TN, Gomes L, Wickramasinghe N, Salimi M, Wijesiriwardena N, Kamaladasa NL, et al. Dengue NS1 antigen contributes to disease severity by inducing interleukin (IL)-10 by monocytes. Clin Exp Immunol. 2016;184:90–100.
Hober D, Poli L, Roblin B, Gestas P, Chungue E, Granic G, et al. Serum levels of tumor necrosis factor-alpha (TNF-alpha), interleukin-6 (IL-6), and interleukin-1beta (IL-1beta) in dengue-infected patients. Am J Trop Med Hyg. 1993;48:324–31.
Ferreira AX, de Oliverira SA, Gandini M, Ferreira LC, Correa D, Abiraude FM, et al. Circulating cytokines and chemokines associated with plasma leakage and hepatic dysfunction in Brazilian children with dengue fever. Acta Trop. 2015;149:138–47.
Malavige GN, Huang L, Salimi M, Gomes L, Jayaratne SD, Ogg GS. Cellular and cytokine correlates of severe dengue infection. PLoS One. 2012;7:11.
Singla M, Kar M, Sethi T, Kabra S, Lodha R, Chandele A, et al. Immune response to dengue virus infection in pediatric patients in New Delhi, India—association of viremia, inflammatory mediators and monocytes with disease severity. PLOS Negl Trop Dis. 2016;10:03.
Jayasinghe S. Complexity science to conceptualize health and disease: is It relevant to clinical medicine? Mayo Clin Proc. 2012;87:314–19.
Massad E, Burattini MN, Ortega NRS. Fuzzy logic and measles vaccination: designing a control strategy. Int J Epid. 1999;28:550–57.
Bosl WJ. Systems biology by the rules: hybrid intelligent systems for pathway modeling and discovery. BMC Syst Biol. 2007;1:13.
de la Cruz Hernández SI, Puerta-Guardo HN, Flores Aguilar H, González Mateos S, López Martinez I, Ortiz-Navarrete V, et al. Primary dengue virus infections induce differential cytokine production in Mexican patients. Mem Inst Oswaldo Cruz. 2016;111:161–67.
Vicente C, Herbinger K, Fröschl G, Malta Romano C, de Souza Areias Cabidelle A, Cerutti Junior C. Serotype influences on dengue severity: a cross-sectional study on 485 confirmed dengue cases in Vitória, Brazil. BMC Infect Dis. 2016;16:320.
Guzman MG, Vazquez S. The complexity of antibody-dependent enhancement of dengue virus infection. Viruses. 2010;2:2649–62.
Wickramaarachchi WPTM, Perera SSN, Jayasinghe S. Investigating the impact of climate on dengue disease transmission in urban Colombo: A Fuzzy logic model. In: 4th annual international conference on computational mathematics, computational geometry & statistics (CMCGS), Proceedings: 20-24. Singapore: Global Science and Technology Forum; 2015. 10.5176/2251-1911_CMCGS15.10.
Chomej P, Bauer K, Bitterlich N, Hui DS, Chan KS, Gosse H, et al. Differential diagnosis of pleural effusions by fuzzy-logic-based analysis of cytokines. Respir Med. 2004;98:308–17.
Anand SK, Kalpana R, Vijayalakshmi S. Design and implementation of a fuzzy expert system for detecting and estimating the level of asthma and chronic obstructive pulmonary disease. Middle East J Sci Res. 2013;14:1435–44.
Kalpana M, Kumar AS. Fuzzy expert system for diabetes using fuzzy verdict mechanism. Int J Adv Networking Appl. 2011;03:1128–34.
Abushariah M, Alqudah A, Adwan O, Yousef R. Automatic heart disease diagnosis system based on artificial neural network (ANN) and adaptive neuro-fuzzy inference systems (ANFIS) approaches. J Softw Eng Appl. 2014;07:1055–64.
Adeli A, Neshat M. A fuzzy expert system for heart disease diagnosis. In Proceedings of International Multi Conference of Engineers and Computer Scientists. Hong Kong: IMECS Conference Proceedings; 2010.
Neshat M, Yaghobi M, Naghibi MB, Esmaelzadeh A. Fuzzy expert system design for diagnosis of liver disorders. In Knowledge Acquisition and Modeling, 2008. KAM'08. International Symposium on 2008 Dec 21 (pp. 252-256). Wuhan: IEEE.
Baig F, Khan MS, Noor Y, Imran M, Baig F. Design model of fuzzy logic medical diagnosis control system. Int J Comput Sci Eng. 2011;03:2093–108.
Zadeh LA. Fuzzy Sets. Inf Control. 1965;08:338–53.
Salman A, Lina Y, Simon C. Computational Intelligence Method for Early Diagnosis Dengue Haemorrhagic Fever Using Fuzzy on Mobile Device. In EPJ Web of Conferences 2014 (Vol. 68, p. 00003). Jakarta: EDP Sciences.
Legowo N, Kanigoro B, Salman AG, Syafii M. Adaptive Neuro Fuzzy Inference System for Diagnosing Dengue Hemorrhagic Fever. In Asian Conference on Intelligent Information and Database Systems 2015 Mar 23 (pp. 440-447). Springer International Publishing. doi: 10.1007/978-3-319-15702-3_43.
Houghton-Trivino N, Salgado D, Rodriguez J, Bosch I, Castellanos J. Level of soluble ST2 in serum associated with severity of dengue due to alpha stimulation tumor necrosis factor. J Gen Virol. 2010;91:697–706.
Chen LC, Lei HY, Liu CC, Shiesh SC, Chen SH, Liu HS, et al. Correlation of serum levels of macrophage migration inhibitory factor with disease severity and clinical outcome in dengue patients. Am J Trop Med Hyg. 2006;74:142–7.
Kittugul L, Temprom W, Sujirara D, Kittugul C. Determination of tumor necrosis factor- alpha in dengue virus infected patients by sensitive biotin-streptravidin enzyme-linked immunosorbent assay. J Virol Methods. 2000;90:51–7.
Lemaire J. Fuzzy Insurance. ASTIN Bull. 1990;20:34–45.
Ministry of Health Sri Lanka. National guidelines: Guidelines on management of dengue fever and dengue hemorrhagic fever in adults. 2010. http://www.epid.gov.lk/web/attachments/article/141/Guidelines%20for%20the%20management%20of%20DF%20and%20DHF%20in%20adults.pdf. Accessed 15 Jan 2015.
Saikia D, Dutta JC. Early diagnosis of dengue disease using fuzzy inference system. In 2016 International Conference on Microelectronics, Computing and Communications (MicroCom), Durgapur, India, 2016. doi: 10.1109/MicroCom.2016.7522513.
Faisal T, Taib M, Ibrahim F. Adaptive Neuro-Fuzzy Inference System for diagnosis risk in dengue patients. Expert Syst Appl. 2012;39:4483–95.
Ibrahim F, Faisal T, Mohamad Salim M, Taib M. Non-invasive diagnosis of risk in dengue patients using bioelectrical impedance analysis and artificial neural network. Med Biol Eng Comput. 2010;48:1141–48.
Potts J, Gibbons R, Rothman A, Srikiatkhachorn A, Thomas S, Supradish P, et al. Prediction of dengue disease severity among pediatric Thai patients using early clinical laboratory indicators. PLoS Negl Trop Dis. 2010;4:8.
Brasier AR, Ju H, Garcia J, Spratt HM, Victor SS, Forshey BM, et al. A three-component biomarker panel for prediction of dengue hemorrhagic fever. Am J Trop Med Hyg. 2012;86:341–8.
Validation data was funded through the Centre for Dengue Research, University of Sri Jayawardenapura. No funding was provided for the design of the model, the analysis, or the writing of the manuscript.
All raw data on inflammatory mediators used in the model validation of this study are included in the published articles [13] and [19] and are given in Additional file 3.
SDPJ – Developed the methodology, designed the study, analysis and interpretation of data, wrote the manuscript. SSNP - Developed the methodology, designed the study, analysis and interpretation of data. GNM- Provided the patient databases for mathematical modelling and was involved in conceptualization of the study. SJ- Designed the study, analysis and interpretation of data. All authors read and approved the final manuscript.
No competing interests exist.
The study did not involve human participants; for model validation it used pre-existing data found in the published articles [13] and [19], for which ethics approval was obtained from the Ethics Review Committee of the University of Sri Jayawardenapura.
Research and Development Centre for Mathematical Modelling, University of Colombo, Colombo, Sri Lanka
S. D. Pavithra Jayasundara & S. S. N. Perera
Centre for Dengue Research, Faculty of Medicine, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
Gathsaurie Neelika Malavige
Department of Clinical Medicine, Faculty of Medicine, University of Colombo, Colombo, Sri Lanka
Saroj Jayasinghe
S. D. Pavithra Jayasundara
S. S. N. Perera
Correspondence to S. D. Pavithra Jayasundara.
Additional file 1:
Theoretical Framework. (DOCX 20 kb)
Additional file 2:
MATLAB code for model. (DOCX 34 kb)
Additional file 3:
Validation Data. (XLSX 20 kb)
Jayasundara, S.D.P., Perera, S.S.N., Malavige, G.N. et al. Mathematical modelling and a systems science approach to describe the role of cytokines in the evolution of severe dengue. BMC Syst Biol 11, 34 (2017). https://doi.org/10.1186/s12918-017-0415-3
An IND-CCA2 secure post-quantum encryption scheme and a secure cloud storage use case
Peng Zeng1,
Siyuan Chen2 &
Kim-Kwang Raymond Choo3
Code-based public key encryption (PKE) is a popular choice to achieve post-quantum security, partly due to its capability to achieve fast encryption/decryption. However, code-based PKE has larger ciphertext and public key sizes in comparison to conventional PKE schemes (e.g., those based on RSA). In 2018, Lau and Tan proposed a new rank metric code-based PKE scheme, which has smaller public key and ciphertext sizes compared to other code-based PKE schemes. They also proved that their scheme achieves IND-CPA security, assuming the intractability of the decisional rank syndrome decoding problem. It is known that IND-CCA2 security is the strongest and most popular security assurance for PKE schemes. Therefore, in this paper, we obtain a new code-based PKE scheme from Lau and Tan's scheme, in order to inherit the underlying small public key and ciphertext sizes. However, our new scheme is shown to achieve IND-CCA2 security, instead of the weaker IND-CPA security. Specifically, the respective public key size and ciphertext size in our new scheme are 15.06 KB and 1.37 KB under 141-bit security level, and 16.76 KB and 1.76 KB under 154-bit security level. We then present a use case for the proposed scheme, that is for secure cloud storage.
With rapid advances in Internet and information and communication technologies (ICT; e.g., computation devices, communication speeds and device processors), our society is becoming increasingly reliant on technologies. This is particularly true for technologically advanced countries. This has also given rise to a significant increase in the amount of data generated, which needs to be processed and stored (e.g., using distributed storage servers such as cloud servers). Cloud storage service providers such as Amazon's S3 offer users public cloud storage services, where users can back up, access and/or process their data anywhere from computing devices (e.g., Android and iOS devices) connected to the Internet. The popularity of such services is also driven by financial costs, as it may be cheaper to pay for what one uses than to build, maintain and secure local storage services. One downside, however, is the potential data privacy and integrity risk, since data is now outsourced to public cloud service providers that may only be semi-trusted. Thus, many techniques have been proposed for ensuring data security and reducing useless data, such as semi-supervised learning, spammer detection frameworks and cryptographic protocols [1,2,3,4].
Generally, a cloud storage system should ensure availability (e.g., for any legitimate customer, (s)he can reach his/her uploaded data from some Internet-connected devices), reliability (of user data outsourced to the cloud), efficient retrieval (of user data outsourced to the cloud), data sharing (between authorized users), security (i.e., both confidentiality and integrity [5]), and other features required/stated in the service level agreements (SLAs). Cryptographic schemes, such as proxy re-encryption, attribute-based encryption, searchable encryption [6,7,8,9], have been designed to achieve several of these features.
Nowadays, most existing cryptographic schemes are number-theoretic-based [10,11,12], which are not resilient against Shor's quantum attack algorithm [13, 14] using quantum computers. Code-based public key encryption (PKE) proposed by McEliece [15] in 1978 is widely considered to be post-quantum secure [16]. There are two key advantages in McEliece-like code-based PKE schemes, namely: encryption/decryption speed is very fast, and its security relies on hard problems in coding theory which is believed to resist quantum computing attacks. However, such schemes are not widely used in practice, partly due to large public key size (in comparison to classic number-theoretic-based schemes). Hence, how to design a code-based PKE scheme with small public key size is an area of active research. In 2017, for example, Loidreau proposed a code-based PKE scheme from rank metric [17], which can significantly reduce the size of public keys compared to Hamming metric encryption schemes. More recently in 2018, Lau and Tan introduced a public key generation approach and a new rank metric code-based PKE scheme (hereafter refer to LT scheme) to further reduce the public key size [18]. The LT scheme was also shown to be indistinguishability against chosen plaintext attacks (IND-CPA) secure, based on the decisional rank syndrome decoding (DRSD) assumption.
For a PKE scheme, however, the strongest and most popular security notion is indistinguishability against adaptive chosen ciphertext attacks (IND-CCA2). This is the gap we seek to address in this paper. Our contributions in this paper include three aspects. Firstly, we propose a new IND-CCA2 secure PKE scheme based on the LT scheme. While the semantic security of our PKE scheme has been improved to IND-CCA2, the public key and ciphertext sizes remain small. Secondly, we give a formal proof of our PKE scheme under the DRSD assumption. Thirdly, we present a use case of our PKE scheme in secure cloud storage.
The rest of the paper is organized as follows. Prior to presenting our scheme, we introduce relevant background materials in "Preliminaries" section and the LT scheme [18] in "Revisiting the LT scheme" section. We then present our IND-CCA2 secure code-based PKE scheme in "Our IND-CCA2 secure PKE scheme" section, followed by its security proof and performance evaluation in "Security proof" and "Efficiency" sections. For example, we show that the public key size in our new scheme is 15.06 KB under a 141-bit security level, which is acceptable in many of today's applications. We use cloud storage as a use case and explain its potential utility in "A secure cloud storage use case" section. Finally, we conclude this paper in "Conclusion" section.
In this section, we revisit the definitions, notations and materials relevant to rank metric codes, which are required for understanding our proposed scheme. In the rest of this paper, we use the notations and functions defined in Table 1.
Table 1 Some notations in this paper
Definition 1
(Linear code and its generator matrix) An \([n, k]_{q}\) linear code \({\mathcal {C}}\) is a linear subspace of \({\mathbb {F}}_q^n\) with dimension k, and a matrix \(G\in {\mathbb {F}}_q^{k\times n}\) is called a generator matrix of \({\mathcal {C}}\) if its row vectors form a basis of \({\mathcal {C}}\).
(Rank metric) Let \({\mathbb {F}}_{q^m}\) be an extension field of \({\mathbb {F}}_q\) for some positive integer m, and \(\beta = (\beta _0, \beta _1, \ldots , \beta _{m-1})\) a basis of \({\mathbb {F}}_{q^m}\) over \({\mathbb {F}}_q\). For each vector \({\mathbf {a}} = (a_0, a_1, \ldots , a_{n-1}) \in {\mathbb {F}}_{q^m}^n\), we associate it with an \(m \times n\) matrix \(M({\mathbf {a}}) = (a_{ij})_{0\le i \le m-1, 0 \le j \le n-1}\in {\mathbb {F}}_q^{m\times n}\) s.t. \(a_j = \sum _{i=0}^{m-1}a_{ij}\beta _i\), \(j=0, 1, \ldots , n-1\). The rank of \({\mathbf {a}}\), denoted by \(\Vert {\mathbf {a}}\Vert\), is defined as the rank of the matrix \(M({\mathbf {a}})\), and the rank distance between any two vectors \({\mathbf {x}}\) and \({\mathbf {y}}\) in \({\mathbb {F}}_{q^m}^n\) is defined by \(\Vert {\mathbf {x}}-{\mathbf {y}}\Vert\).
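As a concrete illustration, the rank \(\Vert {\mathbf {a}}\Vert\) can be computed by Gaussian elimination over \({\mathbb {F}}_q\). The Python sketch below assumes q = 2 and represents each element of \({\mathbb {F}}_{2^m}\) by the integer whose bits are its coordinates over a fixed basis, so that the columns of \(M({\mathbf {a}})\) are m-bit integers.

```python
def rank_norm(a, m):
    """Rank of a = (a_0, ..., a_{n-1}) over F_2, each a_j an m-bit integer."""
    basis = [0] * m          # basis[b] holds a pivot whose leading bit is b
    rank = 0
    for col in a:            # each coordinate is one column of M(a)
        x = col
        for bit in reversed(range(m)):
            if not (x >> bit) & 1:
                continue
            if basis[bit] == 0:
                basis[bit] = x   # new independent column found
                rank += 1
                break
            x ^= basis[bit]      # eliminate the leading bit and continue
    return rank

# Example over F_{2^3}: (001, 010, 011) has rank 2 since 011 = 001 + 010,
# whereas (001, 010, 100) has rank 3.
print(rank_norm([0b001, 0b010, 0b011], m=3))  # -> 2
print(rank_norm([0b001, 0b010, 0b100], m=3))  # -> 3
```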
(Circulant matrix) Given a vector \({\mathbf {c}} = (c_0, c_1, \ldots , c_{n-1}) \in {\mathbb {F}}_{q^m}^n\), we associate it with an \(n \times n\) circulant matrix \(Rot({\mathbf {c}})=(c_{ij})_{0\le i,j \le n-1}\in {\mathbb {F}}_{q^m}^{n\times n}\) satisfying \(c_{ij} = c_{(i-j) \mod n}\) for \(0 \le i \le n-1\), \(0 \le j \le n-1\). That is, we have
$$\begin{aligned} Rot({\mathbf {c}})=\left( \begin{array}{ccccc} c_0 &{} c_{n-1} &{} \cdots &{} c_2 &{} c_1 \\ c_1 &{} c_0 &{} c_{n-1} &{} c_3 &{} c_2 \\ \vdots &{} c_1 &{} c_0 &{} \ddots &{} \vdots \\ c_{n-2} &{} \vdots &{} \ddots &{} \ddots &{} c_{n-1} \\ c_{n-1} &{} c_{n-2} &{} \cdots &{} c_1 &{} c_0 \\ \end{array} \right) . \end{aligned}$$
Further, we use the notation \(Rot_k({\mathbf {c}})\) to denote a \(k\times n\) matrix consisting of the first k rows of \(Rot({\mathbf {c}})\).
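Since \(Rot({\mathbf {c}})\) and \(Rot_k({\mathbf {c}})\) only permute the coordinates of \({\mathbf {c}}\), they are straightforward to construct. A small illustrative Python helper (an assumption: the entries are given as a plain list of field elements):

```python
def rot(c):
    """Full n x n circulant matrix with entry (i, j) = c[(i - j) mod n]."""
    n = len(c)
    return [[c[(i - j) % n] for j in range(n)] for i in range(n)]

def rot_k(c, k):
    """Rot_k(c): the first k rows of Rot(c)."""
    return rot(c)[:k]

# Example with n = 4: the first row is (c0, c3, c2, c1), as in the definition.
print(rot([0, 1, 2, 3]))
print(rot_k([0, 1, 2, 3], 2))
```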
(Decisional rank syndrome decoding (DRSD) problem) For a given full rank matrix \(A\in {\mathbb {F}}_{q^m}^{k\times n}\) and an integer t, we define \({\mathcal {S}}_{A, t, 0}=\{(A, {\mathbf {y}})~|~{\mathbf {y}}\xleftarrow {\$}{\mathbb {F}}_{q^m}^n\}\) and \({\mathcal {S}}_{A, t, 1}=\{(A, {\mathbf {xA}}+{\mathbf {e}})~|~{\mathbf {x}}\in {\mathbb {F}}_{q^m}^k, {\mathbf {e}}\in {\mathbb {F}}_{q^m}^n \ \text{ and }\ \Vert {\mathbf {e}}\Vert =t \}\).
Input: a full rank matrix \(A\in {\mathbb {F}}_{q^m}^{k\times n}\), an integer t, and a vector \({\mathbf {s}} \in {\mathbb {F}}_{q^m}^n\).
Output: a bit \(b \in \{0,1\}\) s.t. \((A, {\mathbf {s}}) \in {\mathcal {S}}_{A, t, b}\).
Let \({\mathcal {A}}\) be a probabilistic polynomial time (PPT) algorithm for the DRSD problem. With respect to a security parameter \(\lambda\), we define the advantage of \({\mathcal {A}}\) as
$$\begin{aligned} \text{ Adv }_{{\mathcal {A}}}^{{\text {DRSD}}}(\lambda )= \left| \Pr [{\mathcal {A}}(A, t, {\mathbf {s}})=b~|~b \xleftarrow {\$}\{0,1\}, (A, {\mathbf {s}})\xleftarrow {\$}{\mathcal {S}}_{A, t, b}]-\frac{1}{2}\right| . \end{aligned}$$
In [19], the DRSD problem is proven to be NP hard in the worst case; thus, it is reasonable to assume that \(\text{ Adv }_{{\mathcal {A}}}^{{\text {DRSD}}}(\lambda )\) is negligible with regard to \(\lambda\).
(IND-CPA attack) To describe the IND-CPA attack we first introduce the power of the adversary, denoted by \({\mathcal {A}}\), in the attack.
Chosen Plaintext Adversary: \({\mathcal {A}}\) can choose the plaintext adaptively and he can get the corresponding ciphertext of the chosen plaintext through the encryption mechanism.
We define an IND-CPA attack as an experiment. The experiment runs the key generation algorithm and gets a public-private key pair (pk, sk). An IND-CPA adversary uses pk to get several message-ciphertext pairs and stores them in a table. A distinguisher \({\mathcal {D}}\) gets pk and the message-ciphertext table. \({\mathcal {D}}\) randomly chooses two messages \(m_0\) and \(m_1\) (not in the message-ciphertext table) and sends them to the encryption mechanism. The encryption mechanism randomly chooses a message \(m_b\), \(b\in \{0,1\}\), and sends the corresponding ciphertext \(c_b\) to \({\mathcal {D}}\). \({\mathcal {D}}\) takes as input \((pk, m_0, m_1, c_b)\) and outputs a guess \(b'\) of b. If \(b' = b\), the experiment succeeds. Otherwise, the experiment fails. The probability that \({\mathcal {D}}\) succeeds in the IND-CPA experiment is denoted by \(\Pr _{CPA}\).
A PKE scheme is called IND-CPA secure if \(|\Pr _{CPA} - \frac{1}{2}|\) is negligible.
Revisiting the LT scheme [18]
The LT scheme [18] consists of four PPT algorithms, namely: Setup, KGen, Enc, and Dec which are described as follows.
Setup(\(1^{\lambda }\)): Given a security parameter \(\lambda\), the algorithm chooses a prime power q, a group of integers \((n, m, k, k_0, k_1, t)\) s.t. \(n>k, ~k_0 = \lfloor \frac{k}{2}\rfloor , ~k_1 = k- k_0, t \le \lfloor \frac{n-k}{2}\rfloor\) according to \(\lambda\). The algorithm then outputs the public parameter \(params = (q, n, m, k, k_0, k_1, t)\).
KGen(params): Given the public parameter params, the algorithm
Chooses a generator matrix \(G \in {\mathbb {F}}_{q^m}^{k \times n}\) of a \([n, k]_{q^m}\) linear code \({\mathcal {C}}\) with error correcting ability t and an efficient decoding algorithm \(\mathcal {DEC}(\cdot )\);
Chooses a vector \({\mathbf {x}} \xleftarrow {\$}{\mathbb {F}}_{q^m}^n\) s.t. \(\Vert {\mathbf {x}}\Vert = n\);
Chooses two invertible matrices \(Q \in {\mathbb {F}}_{q^m}^{k \times k}\) and \(P \in {\mathbb {F}}_q^{n \times n}\) uniformly at random;
Computes \(\hat{G} = QG + Rot_k({\mathbf {x}})P\).
The public/private key pair is (\(pk = (\hat{G}, {\mathbf {x}}),sk = (Q, G, P, \mathcal {DEC}(\cdot ))\)).
The algorithms Enc and Dec are listed in Figs. 1 and 2, respectively.
Encryption algorithm in LT scheme
Decryption algorithm in LT scheme
As mentioned previously, the LT scheme was proven to be IND-CPA secure under the DRSD assumption in [18].
Our IND-CCA2 secure PKE scheme
In this section, we convert the LT scheme into an IND-CCA2 secure PKE scheme in which the two error vectors are derived from a random value. This allows the new scheme to avoid ciphertext expansion.
Our new IND-CCA2 secure PKE scheme consists of the following four algorithms:
Setup(\(1^{\lambda }\)): Given a security parameter \(\lambda\), the algorithm
Chooses a prime power q, a group of integers \((n, m, k, k_0, k_1, t, t_0, t_1)\) s.t. \(n>k, ~k_0 = \lfloor \frac{k}{2}\rfloor , ~k_1 = k- k_0,\ t \le \lfloor \frac{n-k}{2}\rfloor\), \(~t_0 = \lfloor \frac{t}{2}\rfloor , ~t_1 = t- t_0\);
Chooses a random vector \(\mathbf {con}\) over \({\mathbb {F}}_{q^m}\) with \(Len(\mathbf {con}) \le \frac{1}{2}\ell\), where \(\ell _0=\lfloor \log _{q^m}\left( {\begin{array}{c}n\\ t_0\end{array}}\right) \rfloor\), \(\ell _1=\lfloor \log _{q^m}\left( {\begin{array}{c}n\\ t_1\end{array}}\right) \rfloor\), and \(\ell =\ell _0+\ell _1\);
Outputs the public parameter \(params = (q, n, m, k, k_0, k_1, t, t_0, t_1, \mathbf {con})\).
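A small sketch (illustrative, not the authors' implementation) of how these Setup parameters are derived from \((q, m, n, k, t)\); it uses a floating-point logarithm for the \(\ell_i\) values, which is adequate for illustration:

```python
import math

def setup_params(q, m, n, k, t):
    """Derive (k0, k1, t0, t1, l0, l1, l) from (q, m, n, k, t) as in Setup."""
    assert n > k and t <= (n - k) // 2
    k0, t0 = k // 2, t // 2
    k1, t1 = k - k0, t - t0
    # l_i = floor( log_{q^m} binom(n, t_i) ), computed with a float logarithm
    l0 = int(math.log(math.comb(n, t0), q ** m))
    l1 = int(math.log(math.comb(n, t1), q ** m))
    return {"k0": k0, "k1": k1, "t0": t0, "t1": t1,
            "l0": l0, "l1": l1, "l": l0 + l1}

# Toy example (not one of the parameter sets targeting 141/154-bit security):
print(setup_params(q=2, m=20, n=50, k=25, t=12))
```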
KGen(params): Given the public parameter \(params= (q, n, m, k, k_0, k_1, t, t_0, t_1, \mathbf {con})\), the algorithm
Chooses a generator matrix \(G \in {\mathbb {F}}_{q^m}^{k \times n}\) of an \([n, k]_{q^m}\) linear code \({\mathcal {C}}\) with error correcting ability t and an efficient decoding algorithm \(\mathcal {DEC}(\cdot )\);
Chooses \({\mathbf {x}}\), Q and P, and computes \(\hat{G} = QG + Rot_k({\mathbf {x}})P\), in the same way as in the KGen algorithm of the LT scheme.
The public key is \(pk = (\hat{G}, {\mathbf {x}})\) and the private key is \(sk = (Q, G, P, \mathcal {DEC}(\cdot ))\).
As in the "Revisiting the LT scheme" section, we list the algorithms Enc and Dec of our proposed PKE scheme in Figs. 3 and 4, respectively.
Encryption algorithm in our scheme
Decryption algorithm in our scheme
Security proof
In this section, we prove the correctness and IND-CCA2 security of our code-based PKE scheme presented in the preceding section.
According to the Enc and Dec algorithms of our scheme, we can easily get:
$$\begin{aligned} {\mathbf {u}}_4'=\, & {} Rt_{k_1}(\mathcal {DEC}({\mathbf {c}}_1 - {\mathbf {c}}_0 P)Q^{-1})\\=\, & {} Rt_{k_1}(\mathcal {DEC}(({\mathbf {m}}_r | {\mathbf {u}}_4) \hat{G} + {\mathbf {e}}_1 - ({\mathbf {m}}_r | {\mathbf {u}}_4) Rot_k({\mathbf {x}})P - {\mathbf {e}}_0 P)Q^{-1})\\=\, & {} Rt_{k_1}(\mathcal {DEC}(({\mathbf {m}}_r | {\mathbf {u}}_4)QG + {\mathbf {e}}_1 - {\mathbf {e}}_0 P)Q^{-1})\\=\, & {} Rt_{k_1}(({\mathbf {m}}_r | {\mathbf {u}}_4)QQ^{-1})\\=\, & {} {\mathbf {u}}_4. \end{aligned}$$
Then, we can compute the error vectors \({\mathbf {e}}_0\) and \({\mathbf {e}}_1\) correctly with the vector \({\mathbf {u}}_4\) and get \({\mathbf {u}}_3' = Conv^{-1}_{t_0}({\mathbf {e}}_0)|Conv^{-1}_{t_1}({\mathbf {e}}_1) = {\mathbf {u}}_3\). From \({\mathbf {u}}_3\) and \({\mathbf {u}}_4\), we can recover \({\mathbf {u}}_1\) and \({\mathbf {u}}_2\) correctly and obtain the message \({\mathbf {m}}\) by computing
$$\begin{aligned} {\mathbf {v}} = {\mathbf {u}}_2 - {\mathcal {H}}({\mathbf {u}}_1) \quad \text{ and }\quad {\mathbf {m}}|\mathbf {con} = {\mathbf {u}}_1 - Rd_{k_1+Len({\mathbf {con}})}({\mathbf {v}}). \end{aligned}$$
This concludes the correctness proof of our scheme.
IND-CCA2 security
The IND-CCA2 security for any PKE scheme can be defined as follows.
(IND-CCA2 security) For any two-stage (find stage and guess stage) adversary \({\mathcal {A}} =({\mathcal {A}}_1, {\mathcal {A}}_2)\) against a PKE scheme \({\mathcal {E}}=(\textsf {Setup}, \textsf {KGen}, \textsf {Enc}, \textsf {Dec})\), the IND-CCA2 security is modeled as an experiment \(\mathbf{Exp }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CCA2}}(\lambda )\) with respect to a security parameter \(\lambda\).
\(\mathbf{Exp }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CCA2}}(\lambda )\):
\(params \leftarrow \textsf {Setup}(\lambda )\)
\((pk, sk) \leftarrow \textsf {KGen}(params)\)
\(({\mathbf {m}}_0, {\mathbf {m}}_1, \text{ state })\leftarrow {\mathcal {A}}_1^{\textsf {Dec}(sk,\cdot )}(pk)\), where \({\mathbf {m}}_0\) and \({\mathbf {m}}_1\) have the same length
\(b \xleftarrow {\$}\{0,1\}\)
\({\mathbf {c}}' \leftarrow \textsf {Enc}(pk,{\mathbf {m}}_b)\)
\(b' \leftarrow {\mathcal {A}}_2^{\textsf {Dec}(sk,\cdot )}({\mathbf {c}}',\text{ state })\). Note that \({\mathcal {A}}_2\) is not allowed to make decryption query on \({\mathbf {c}}'\).
Outputs 1 if \(b' = b\), otherwise outputs 0.
The advantage, \(\text{ Adv }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CCA2}}(\lambda )\), of \({\mathcal {A}}\) in \(\mathbf{Exp }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CCA2}}(\lambda )\) can be defined by
$$\begin{aligned} \text{ Adv }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CCA2}}(\lambda ) = \left| 2\Pr [\mathbf{Exp }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CCA2}}(\lambda ) = 1] - 1\right| . \end{aligned}$$
A PKE scheme \({\mathcal {E}}=(\textsf {Setup}, \textsf {KGen}, \textsf {Enc}, \textsf {Dec})\) is IND-CCA2 secure if for any PPT \({\mathcal {A}}\), \(\text{ Adv }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CCA2}}(\lambda )\) is negligible with respect to \(\lambda\). In addition, the above experiment can become an IND-CPA experiment if \({\mathcal {A}}\) is not allowed to make decryption query. We denote by \(\mathbf{Exp }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CPA}}(\lambda )\) the IND-CPA experiment and the advantage of \({\mathcal {A}}\) is defined by
$$\begin{aligned} \text{ Adv }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CPA}}(\lambda ) = \left| 2\Pr [\mathbf{Exp }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CPA}}(\lambda ) = 1] - 1\right| . \end{aligned}$$
We divide the security proof of our scheme into two phases following the approach in [20]. First, we give the IND-CPA security proof based on the IND-CPA security of the LT scheme. Then, we prove the IND-CCA2 security of our scheme in the random oracle model.
Theorem 1
Assume that \({\mathcal {A}}\) is an attacker in the experiment \(\mathbf{Exp }_{{\mathcal {A}}, {\textsf {PKE}}}^{{\textsf {IND-CPA}}}(\lambda )\) with advantage \(\varepsilon\) against the LT scheme. Then, we have
$$\begin{aligned} \text{ Adv }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CPA}}(\lambda ) \le \varepsilon + \frac{q_{Rd}}{q^{mr}}, \end{aligned}$$
where \(q_{Rd}\) is the query times to oracle Rd and r is the length of the output vectors of the function Rd.
We assume that \({\mathcal {C}}\) is a challenger, who is responsible for the generation of the public parameters params and a public-private key pair (pk, sk) by running the algorithms \(\textsf {Setup}\) and \(\textsf {KGen}\), respectively. It is obvious that \({\mathcal {A}}\) should have the advantage \(\text{ Adv }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CPA}}(\lambda )\) against our scheme if \({\mathcal {C}}\) simulates two oracles \({\mathcal {H}}\) and Rd correctly. After receiving \({\mathbf {m}}_0\) and \({\mathbf {m}}_1\) from \({\mathcal {A}}\), \({\mathcal {C}}\) randomly chooses a vector \(\mathbf {v}\), a bit b and computes \({\mathcal {H}}\) and Rd as
$$\begin{aligned} Rd({\mathbf {v}}) = {\mathbf {u}}_1 - ({\mathbf {m}}_b|\mathbf {con}) \quad \text{ and }\quad {\mathcal {H}}({\mathbf {u}}_1) = {\mathbf {u}}_2 - {\mathbf {v}}. \end{aligned}$$
If \({\mathcal {A}}\) makes a query to Rd on a vector \({\mathbf {v}}'\), \({\mathcal {C}}\) first checks whether \({\mathbf {v}}' = {\mathbf {v}}\). If yes, \({\mathcal {C}}\) outputs \(Rd({\mathbf {v}}) = {\mathbf {u}}_1 - ({\mathbf {m}}_b|\mathbf {con})\). Otherwise, \({\mathcal {C}}\) outputs a random vector. If \({\mathcal {A}}\) makes a query to \({\mathcal {H}}\) on a vector \({\mathbf {u}}'_1\), \({\mathcal {C}}\) first checks whether \({\mathbf {u}}'_1 = {\mathbf {u}}_1\). If yes, \({\mathcal {C}}\) outputs \({\mathcal {H}}({\mathbf {u}}_1) = {\mathbf {u}}_2 - {\mathbf {v}}.\) Otherwise, \({\mathcal {C}}\) outputs a random vector. It is obvious that \({\mathcal {C}}\) cannot simulate Rd correctly if \({\mathcal {C}}\) does not know \({\mathbf {u}}_1\). Hence, \({\mathcal {C}}\) could simulate Rd and \({\mathcal {H}}\) correctly if and only if \({\mathcal {A}}\) made queries to \({\mathcal {H}}\) on \({\mathbf {u}}_1\) first and then to Rd on \({\mathbf {v}}\). We define two events \(\textsf {Evt}_1\) and \(\textsf {Evt}_2\) as follows:
\(\textsf {Evt}_1\): the event that \({\mathbf {v}}\) is queried to Rd in \(q_{Rd}\) queries before \({\mathbf {u}}_1\) is queried to \({\mathcal {H}}\).
\(\textsf {Evt}_2\): the event that \({\mathbf {u}}_1\) is queried to \({\mathcal {H}}\) in \(q_{{\mathcal {H}}}\) times before \({\mathbf {v}}\) is queried to Rd.
It is obvious that the probability \(\Pr [\textsf {Evt}_1 \wedge \textsf {Evt}_2] = 0\) from the definition of \(\textsf {Evt}_1\) and \(\textsf {Evt}_2\) and we have
$$\begin{aligned} \Pr [\textsf {Evt}_1 \vee \textsf {Evt}_2] = \Pr [\textsf {Evt}_1] + \Pr [\textsf {Evt}_2]. \end{aligned}$$
When the event \(\lnot \textsf {Evt}_1 \wedge \lnot \textsf {Evt}_2\) happens, \({\mathcal {A}}\) cannot get any information about the connection between \({\mathbf {m}}_b|\mathbf {con}\) and \({\mathbf {u}}_1|{\mathbf {u}}_2\). Hence, the advantage of \({\mathcal {A}}\) is negligible in this case. In the other case, where \(\textsf {Evt}_1 \vee \textsf {Evt}_2\) happens, \({\mathcal {A}}\) can guess b correctly. Then we have the inequality \(2\Pr [\mathbf{Exp }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CPA}}(\lambda )=1] \le 2\Pr [\textsf {Evt}_1\vee \textsf {Evt}_2] + 1 - \Pr [\textsf {Evt}_1\vee \textsf {Evt}_2]\). That is, we have
$$\begin{aligned} \Pr [\mathbf{Exp }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CPA}}(\lambda )=1]\le \frac{1 + \Pr [\textsf {Evt}_1\vee \textsf {Evt}_2]}{2}. \end{aligned}$$
According to Eq. (2), we have
$$\begin{aligned} \Pr [\mathbf{Exp }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CPA}}(\lambda )=1] = \frac{\text{ Adv }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CPA}}(\lambda )+1}{2} \end{aligned}$$
$$\begin{aligned} \frac{\text{ Adv }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CPA}}(\lambda )+1}{2} \le \frac{1 + \Pr [\textsf {Evt}_1\vee \textsf {Evt}_2]}{2}, \end{aligned}$$
i.e.,
$$\begin{aligned} \text{ Adv }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CPA}}(\lambda ) \le \Pr [\textsf {Evt}_1\vee \textsf {Evt}_2]. \end{aligned}$$
As we know, b and \(\mathbf {v}\) are generated randomly by \({\mathcal {C}}\), so \({\mathcal {A}}\) could not obtain any information about them if the event \(\textsf {Evt}_1\vee \textsf {Evt}_2\) did not happen. In contrast, \({\mathcal {C}}\) can recover \({\mathbf {u}}_4\) from \({\mathbf {c}}\) if the event \(\textsf {Evt}_2\) happens. In other words, when the event \(\textsf {Evt}_2 \wedge \lnot \textsf {Evt}_1\) happens, \({\mathcal {C}}\) can use \({\mathcal {A}}\) to break the LT scheme. Thus, we have
$$\begin{aligned} \Pr [\textsf {Evt}_2 \wedge \lnot \textsf {Evt}_1] = \varepsilon . \end{aligned}$$
On the other hand, the probability that \(\mathbf {v}\) is exactly queried to Rd is \(\frac{1}{q^{mr}}\). Hence, the probability that \(\textsf {Evt}_1\) happens during the \(q_{Rd}\) queries on Rd is
$$\begin{aligned} \Pr [\textsf {Evt}_1] \le 1 - \left( 1-\frac{1}{q^{mr}}\right) ^{q_{Rd}}\le \frac{q_{Rd}}{q^{mr}}. \end{aligned}$$
According to Eqs. (3), (5), (6), (7) we can get:
$$\begin{aligned} \varepsilon= & {} \Pr [\textsf {Evt}_2 \wedge \lnot \textsf {Evt}_1] \\= & {} \Pr [\textsf {Evt}_2]\\= & {} \Pr [\textsf {Evt}_2 \vee \textsf {Evt}_1] - \Pr [\textsf {Evt}_1]\\\ge & {} \text{ Adv }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CPA}}(\lambda ) - \frac{q_{Rd}}{q^{mr}}. \end{aligned}$$
That is, we have
$$\begin{aligned} \text{ Adv }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CPA}}(\lambda ) \le \varepsilon + \frac{q_{Rd}}{q^{mr}}. \end{aligned}$$
\(\square\)
Theorem 2
Assume that \({\mathcal {A}}\) is an attacker in the experiment \(\mathbf{Exp }_{{\mathcal {A}}, {\textsf {PKE}}}^{{\textsf {IND-CCA2}}}(\lambda )\) with advantage \(\varepsilon\) against the LT scheme. Then, we have
$$\begin{aligned} \text{ Adv }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CCA2}}(\lambda ) \le \varepsilon + \frac{q_{Rd}}{q^{mr}}+ \frac{q_{{\mathcal {D}}}}{q^{2mn}}, \end{aligned}$$
where \(q_{Rd}\) and \(q_{{\mathcal {D}}}\) are the query times to the oracles Rd and \({\mathcal {D}}\), respectively, and r is the length of the output vectors of the function Rd.
We assume that \({\mathcal {C}}\) is a challenger, who is responsible for the generation of the public parameters params and a public-private key pair (pk, sk) by running the algorithms \(\textsf {Setup}\) and \(\textsf {KGen}\), respectively. It is clear that \({\mathcal {A}}\) should have the advantage \(\text{ Adv }_{{\mathcal {A}}, {\textsf {PKE}}}^{{\textsf {IND-CCA2}}}(\lambda )\) against our scheme if \({\mathcal {C}}\) simulates the oracles \({\mathcal {H}}\), Rd and the decryption oracle \({\mathcal {D}}\) correctly. \({\mathcal {C}}\) simulates \({\mathcal {H}}\) and Rd in the same way as in the proof of Theorem 1. Then, \({\mathcal {C}}\) uses the plaintext-extractor described in [21] to construct the decryption oracle via the following steps.
The plaintext-extractor takes as input a ciphertext \({\mathbf {c}}\).
For the \(q_{Rd}\) queries to Rd, the input and output pairs of Rd are denoted by \(({\mathbf {v}}_i, V_i)\), \(1\le i \le q_{Rd}\). Similarly, the input and output pairs of \({\mathcal {H}}\) are denoted by \(({\mathbf {u}}_{1j}, U_{1j})\), \(1\le j \le q_{{\mathcal {H}}}\).
The plaintext-extractor finds the pair \(({\mathbf {v}}_i, V_i) = ({\mathbf {u}}_2 - U_{1j}, {\mathbf {u}}_{1j} - ({\mathbf {m}}|\mathbf {con}))\) s.t.
$$\begin{aligned} {\mathbf {e}}_0=\, & {} Conv_{t_0}(Lt_{\ell _0}(Lt_{\ell _0+\ell _1}({\mathbf {u}}_{1j}|{\mathbf {u}}_2))), \end{aligned}$$
$$\begin{aligned} {\mathbf {e}}_1=\, & {} Conv_{t_1}(Rt_{\ell _1}(Lt_{\ell _0+\ell _1}({\mathbf {u}}_{1j}|{\mathbf {u}}_2)) \end{aligned}$$
are two error vectors of \({\mathbf {c}}\) and \(Rt_{k}({\mathbf {u}}_{1j}|{\mathbf {u}}_2)\) is the plaintext of \({\mathbf {c}}\) in the LT scheme.
If the plaintext-extractor cannot find the pair \(({\mathbf {v}}_i, V_i)\) or \(({\mathbf {u}}_{1j}, U_{1j})\), it rejects the ciphertext \({\mathbf {c}}\).
Based on the above construction, we know that the plaintext-extractor cannot simulate \({\mathcal {D}}\) unless \({\mathcal {A}}\) made queries \({\mathbf {u}}_{1j}\) to \({\mathcal {H}}\) and \({\mathbf {v}}_i\) to Rd before the ciphertext \({\mathbf {c}}\) is queried to \({\mathcal {D}}\). It should be noted that in this case we have Eqs. (8) and (9). For further analysis, we define a new event, denoted by \(\textsf {Evt}_3\), as
\(\textsf {Evt}_3\): the event that \({\mathcal {A}}\) queries the appropriate ciphertext \({\mathbf {c}}\) to \({\mathcal {D}}\) before \({\mathcal {A}}\) queries \({\mathbf {u}}_{1j}\) to \({\mathcal {H}}\) and \({\mathbf {v}}_i\) to Rd.
It is obvious that the probability that the appropriate ciphertext \({\mathbf {c}}\) is queried to \({\mathcal {D}}\) is \(\frac{1}{q^{2mn}}\). Hence, the probability that the event \(\textsf {Evt}_3\) occurs within at most \(q_{{\mathcal {D}}}\) queries satisfies
$$\begin{aligned} \Pr [\textsf {Evt}_3] \le 1- \left( 1 - \frac{1}{q^{2mn}}\right) ^{q_{{\mathcal {D}}}} \le \frac{q_{{\mathcal {D}}}}{q^{2mn}}. \end{aligned}$$
As in the proof of Theorem 1, \({\mathcal {C}}\) can simulate these oracles correctly when the event \(\textsf {Evt}_2\) (equivalently, the event \(\textsf {Evt}_2 \wedge \lnot \textsf {Evt}_3 \wedge \lnot \textsf {Evt}_1\)) happens. Then we have
$$\begin{aligned} \varepsilon= & {} \Pr [\textsf {Evt}_2 \wedge \lnot \textsf {Evt}_3 \wedge \lnot \textsf {Evt}_1]\\= & {} \Pr [\textsf {Evt}_2 \wedge \lnot \textsf {Evt}_1] - \Pr [\textsf {Evt}_2 \wedge \textsf {Evt}_3 \wedge \lnot \textsf {Evt}_1] \\\ge & {} \Pr [\textsf {Evt}_2 \wedge \lnot \textsf {Evt}_1] - \Pr [\textsf {Evt}_3]\\\ge & {} \Pr [\textsf {Evt}_2] - \Pr [\textsf {Evt}_3]\\\ge & {} \Pr [\textsf {Evt}_2 \vee \textsf {Evt}_1]- \Pr [\textsf {Evt}_1] - \Pr [\textsf {Evt}_3]\\\ge & {} \text{ Adv }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CCA2}}(\lambda ) - \frac{q_{Rd}}{q^{mr}}- \frac{q_{{\mathcal {D}}}}{q^{2mn}}. \end{aligned}$$
That is, we have \(\text{ Adv }_{{\mathcal {A}}, \textsf {PKE}}^{\textsf {IND-CCA2}}(\lambda ) \le \varepsilon + \frac{q_{Rd}}{q^{mr}}+ \frac{q_{{\mathcal {D}}}}{q^{2mn}}.\)
In this section, we analyze the efficiency of our proposed PKE scheme by comparing it with two recently proposed schemes (i.e., Wang's scheme [22] and Loidreau's scheme [17]), in terms of public key and ciphertext sizes. It should be noted that both Loidreau's scheme and our proposed scheme are based on rank metric codes, while Wang's scheme is based on Hamming metric codes. To better illustrate our advantages, we first introduce the most representative family of rank metric codes, namely Gabidulin codes.
(Gabidulin codes) For an element \(a\in {\mathbb {F}}_{q^m}\) and an integer \(i\in {\mathbb {Z}}\), we denote \(a^{[i]}=a^{q^{i}}\). Further, for a vector \(\mathbf {a} = (a_0, a_1, \ldots , a_{n-1})\in {\mathbb {F}}_{q^m}^n\), we denote \(\mathbf {a}^{[i]}= (a_0^{[i]}, a_1^{[i]}, \ldots , a_{n-1}^{[i]})\). Then, an \([n,k]_{{\mathbb {F}}_{q^m}}\) Gabidulin code \({\mathcal {C}}({\mathbf {a}})\) for a vector \(\mathbf {a} = (a_0, a_1, \ldots , a_{n-1}) \in {\mathbb {F}}_{q^m}^n\) is defined by the following \(k\times n\) generator matrix:
$$\begin{aligned} \left( \begin{array}{cccc} a_0^{[0]} &{} a_1^{[0]} &{} \cdots &{} a_{n-1}^{[0]} \\ a_0^{[1]} &{} a_1^{[1]} &{} \cdots &{} a_{n-1}^{[1]} \\ \vdots &{} \vdots &{}\ddots &{} \vdots \\ a_0^{[k-1]} &{} a_1^{[k-1]} &{} \cdots &{} a_{n-1}^{[k-1]} \\ \end{array} \right) . \end{aligned}$$
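To make the definition concrete, the short sketch below builds such a generator matrix for a toy field. Everything in it is an illustrative assumption made for this discussion only: the parameters \(q=2\), \(m=4\), the irreducible polynomial \(x^4+x+1\), the chosen vector \(\mathbf {a}\), and the helper names gf_mul and frobenius. The actual scheme uses much larger fields, and the entries of \(\mathbf {a}\) must be linearly independent over \({\mathbb {F}}_{q}\).

```python
# Minimal sketch of a Gabidulin generator matrix over F_{2^4} (toy parameters).
# Field elements are integers whose bits are the coefficients of a polynomial
# over F_2, reduced modulo the irreducible polynomial x^4 + x + 1.
M, POLY = 4, 0b10011

def gf_mul(a, b):
    """Multiply two elements of F_{2^M}."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

def frobenius(a, i):
    """Compute a^{[i]} = a^(2^i) by squaring i times."""
    for _ in range(i):
        a = gf_mul(a, a)
    return a

# a = (a_0, ..., a_{n-1}); here 1, x, x^2, x^3, which are F_2-linearly independent.
a_vec = [0b0001, 0b0010, 0b0100, 0b1000]   # n = 4
k = 2
G = [[frobenius(aj, i) for aj in a_vec] for i in range(k)]  # k x n generator matrix
for row in G:
    print([format(entry, "04b") for entry in row])
```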
According to the analysis in [18, 23–25], a secure code-based PKE scheme should be able to resist several structural attacks, such as the key recovery attack, the reduction attack, and the Moore decomposition attack [23]. In particular, we can obtain a lower bound on the computational cost that an adversary needs to break a PKE scheme based on Gabidulin codes. This lower bound, known as the PQ-security level [17, 26], can be computed by the following formula:
$$\begin{aligned} m^3 \cdot 2^{ (t-1)\lfloor \frac{k\cdot \min \{m,n\}}{2n} \rfloor }, \end{aligned}$$
where t is the correction ability of the underlying Gabidulin codes.
As in the LT scheme [18], we choose two sets of parameters \(params_1=(q = 2, m = 75, n = 73, k = 21, t_0 = t_1 = 13)\) and \(params_2=(q = 2, m = 85, n = 83, k = 18, t_0 = t_1 = 16)\), which can achieve the security levels \(2^{141} \approx 75^3 \cdot 2^{(13-1) \lfloor \frac{21\cdot \min \{75,73\}}{146} \rfloor }\) and \(2^{154} \approx 85^3 \cdot 2^{(16-1) \lfloor \frac{18\cdot \min \{85,83\}}{166} \rfloor }\), respectively, according to Eq. (11). Recall that the ciphertext in our scheme is of the form \({\mathbf {c}} = ({\mathbf {c}}_0, {\mathbf {c}}_1)\), where \({\mathbf {c}}_0\) and \({\mathbf {c}}_1\) are two vectors of the same length n over \({\mathbb {F}}_{q^m}\). For the first parameter set \(params_1\), we have a ciphertext size of \(2\cdot 73\cdot 75\ \text{ bits }=10950 \ \text{ bits }\approx 1.37\ \text{ KB }\). Similarly, we get a ciphertext size of \(2\cdot 83\cdot 85\ \text{ bits }=14110\ \text{ bits } \approx 1.76\ \text{ KB }\) for the second parameter set \(params_2\).
As to the public key, it has the form \(pk = (\hat{G}, {\mathbf {x}})\) in our scheme, where \(\hat{G}\) is a \(k\times n\) matrix over \({\mathbb {F}}_{q^m}\) and \({\mathbf {x}}\) is a vector of length n over \({\mathbb {F}}_{q^m}\). For the first parameter set \(params_1\), we have a public key size of \(75\cdot (21+1)\cdot 73\ \text{ bits }=120450 \ \text{ bits }\approx 15.06\ \text{ KB }\). Similarly, we get a public key size of \(85\cdot (18+1)\cdot 83\ \text{ bits }=134045\ \text{ bits } \approx 16.76\ \text{ KB }\) for the second parameter set \(params_2\).
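These sizes follow directly from the shapes of \({\mathbf {c}} = ({\mathbf {c}}_0, {\mathbf {c}}_1)\) and \(pk = (\hat{G}, {\mathbf {x}})\). The few lines of Python below, a simple sanity check written for this discussion (sizes only; the security levels come from Eq. (11) and are not recomputed here), reproduce the numbers above for \(params_1\) and \(params_2\).

```python
# Ciphertext: two length-n vectors over F_{q^m}           -> 2 * n * m bits (q = 2).
# Public key: a k x n matrix plus one length-n vector over F_{q^m}
#                                                          -> (k + 1) * n * m bits.
for name, (m, n, k) in {"params_1": (75, 73, 21), "params_2": (85, 83, 18)}.items():
    ct_bits = 2 * n * m
    pk_bits = (k + 1) * n * m
    print(f"{name}: ciphertext {ct_bits} bits ({ct_bits / 8000:.2f} KB), "
          f"public key {pk_bits} bits ({pk_bits / 8000:.2f} KB)")
```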
We note that the ciphertext and public key sizes of our IND-CCA2 secure scheme are exactly the same as those of the IND-CPA secure LT scheme. Compared to Loidreau's scheme [17], which is a recently proposed PKE scheme based on rank metric codes, our PKE scheme has a significant advantage in terms of public key size and ciphertext size (see Table 2).
For a fair comparison between our rank metric codes-based PKE scheme and Wang's scheme from Hamming metric codes [22], we choose two new sets of parameters \(params_3 = (q = 2, m = 71, n = 69, k = 19, t_0 = t_1 = 13)\) and \(params_4 = (q = 2, m = 101, n = 99, k = 22, t_0 = t_1 = 17)\), which can achieve the security levels \(71^3 \cdot 2^{(13-1) \lfloor \frac{19\cdot \min \{71,69\}}{138} \rfloor }\approx 2^{133}\) and \(101^3 \cdot 2^{(17-1) \lfloor \frac{22\cdot \min \{101,99\}}{198} \rfloor } \approx 2^{196}\), respectively, according to Eq. (11). For the parameter set \(params_3\), we have a ciphertext size of \(2\cdot 69\cdot 71\ \text{ bits }=9798 \ \text{ bits }\approx 1.22\ \text{ KB }\) and a public key size of \(71\cdot (19+1)\cdot 69\ \text{ bits }=97980\ \text{ bits } \approx 12.25\ \text{ KB }\). Similarly, we get a ciphertext size of \(2\cdot 99\cdot 101\ \text{ bits }=19998\ \text{ bits } \approx 2.50\ \text{ KB }\) and a public key size of \(101\cdot (22+1)\cdot 99\ \text{ bits }=229977\ \text{ bits } \approx 28.75\ \text{ KB }\) for the parameter set \(params_4\). From Table 2, we observe that our code-based PKE scheme also has a significant advantage over Wang's scheme in terms of public key size.
Table 2 Performance evaluation of our scheme, Loidreau's scheme [17] and Wang's scheme [22]: a comparative summary
A secure cloud storage use case
In this section, we explain how our proposed code-based PKE scheme can be deployed in a cloud storage system. We use the notations \(\textsf {CBEnc}(\cdot )/\textsf {CBDec}(\cdot )\) and \(\textsf {SEnc}(\cdot )/\textsf {SDec}(\cdot )\) to denote the encryption/decryption algorithms in our proposed code-based PKE scheme and a secure symmetric encryption scheme, respectively. In our cloud storage system, we assume that there are two users (Alice and Bob) and multiple storage servers. Alice splits her data into some data blocks and sends each block to storage servers (one data block can be sent to several storage severs). Bob can get the data blocks from the cloud and recover them into the original data (see Fig. 5).
High level cloud storage architecture
If Alice wants to store data M in the cloud and share it with Bob, she can select a random session key K for the symmetric encryption algorithm and encrypt it under Bob's public key \(pk_{B}\) to get a ciphertext \(EK=\textsf {CBEnc}(pk_{B}, K)\). Then, Alice can split the data M into m blocks \(M=M_1\Vert M_2\Vert \cdots \Vert M_m\) and use K to encrypt them to get \(C_i = \textsf {SEnc}(K, M_i)\), \(1\le i \le m\). Next, Alice computes the hash values \(h_i=Hash(C_i)\), \(1\le i\le m\), and saves them in her local storage, where \(Hash(\cdot )\) is a secure collision-resistant hash function. Then, Alice attaches the information \(ID_{A}\), EK, and \(EH_i = \textsf {SEnc}(K, h_i)\) to each ciphertext block \(C_i\), \(1\le i\le m\), as the header data (see Fig. 6). Finally, she sends the header data and ciphertexts \(C_i\), \(1\le i \le m\), to the m storage servers. If Alice wants to check the data integrity of some ciphertext block \(C_i\), she can ask the corresponding storage server to compute \(Hash(C_i)\) and send it back to her. If the received \(Hash(C_i)\) matches her local hash value for \(C_i\), she is assured that the data are intact and have not been modified by the cloud storage server.
Structure of uploaded data
On the other hand, if Bob wants to get the data M from Alice via the cloud storage system, he can find the corresponding m ciphertext data blocks \(C_i\), \(1\le i\le m\), and their header data \((ID_A, EK, EH_i)\) by searching for \(ID_{A}\) in the cloud. Then, Bob can download them and run \(\textsf {CBDec}(sk_B, EK)\) to recover the symmetric key K with his private key \(sk_B\). Next, for each \(1\le i\le m\), Bob uses the obtained symmetric key K to decrypt \(EH_i\) and \(C_i\) to get \(h_i\) and \(M_i\), respectively. To verify the data integrity of \(M_i\), Bob computes \(Hash(C_i)\) and checks whether it is equal to the \(h_i\) obtained from \(EH_i\). If yes, it means that \(C_i\) has not been modified by the storage server and the corresponding \(M_i\) is the original data from Alice. Otherwise, Bob asks the storage server \(SS_i\) holding the non-matching \(C_i\) to send the data again.
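The upload and download procedures can be summarized in a short sketch. The snippet below is purely illustrative: cb_enc/cb_dec are stand-ins for the \(\textsf {CBEnc}/\textsf {CBDec}\) algorithms of our scheme, s_enc/s_dec are a toy keystream cipher (SHA-256 in counter mode, with no security claim) standing in for \(\textsf {SEnc}/\textsf {SDec}\), and all function names, the block size, and the toy "key pair" are assumptions made for the example. A real deployment would plug in the actual code-based PKE and a proper symmetric cipher.

```python
import hashlib, os

def s_enc(key: bytes, data: bytes) -> bytes:
    # Toy stand-in for SEnc: XOR with a SHA-256 keystream (illustrative only).
    stream = b"".join(hashlib.sha256(key + i.to_bytes(8, "big")).digest()
                      for i in range((len(data) + 31) // 32))
    return bytes(x ^ y for x, y in zip(data, stream))

s_dec = s_enc                                     # the toy cipher is its own inverse

def cb_enc(pk: bytes, k: bytes) -> bytes:         # stand-in for EK = CBEnc(pk_B, K)
    return s_enc(pk, k)

def cb_dec(sk: bytes, ek: bytes) -> bytes:        # stand-in for K = CBDec(sk_B, EK)
    return s_enc(sk, ek)

def alice_upload(pk_bob: bytes, data: bytes, block_size: int = 1024):
    k = os.urandom(32)                            # session key K
    ek = cb_enc(pk_bob, k)                        # EK
    blocks, local_hashes = [], []
    for i in range(0, len(data), block_size):
        c_i = s_enc(k, data[i:i + block_size])    # C_i = SEnc(K, M_i)
        h_i = hashlib.sha256(c_i).digest()        # h_i = Hash(C_i), kept locally
        blocks.append({"ID": "Alice", "EK": ek,
                       "EH": s_enc(k, h_i),       # EH_i = SEnc(K, h_i)
                       "C": c_i})
        local_hashes.append(h_i)
    return blocks, local_hashes                   # the blocks go to the storage servers

def bob_download(sk_bob: bytes, blocks) -> bytes:
    k = cb_dec(sk_bob, blocks[0]["EK"])           # recover the session key K
    data = b""
    for blk in blocks:
        if hashlib.sha256(blk["C"]).digest() != s_dec(k, blk["EH"]):
            raise ValueError("integrity check failed; re-request this block")
        data += s_dec(k, blk["C"])
    return data

# Toy usage: with the stand-in PKE, Bob's "key pair" degenerates to a shared secret.
sk_bob = pk_bob = os.urandom(32)
message = b"some data to be shared with Bob" * 100
blocks, _ = alice_upload(pk_bob, message)
assert bob_download(sk_bob, blocks) == message
```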
From the above description, we find that the security of the proposed cloud storage system relies mainly on the security of our code-based PKE scheme and the symmetric encryption scheme. As mentioned in "Introduction" section, our scheme is able to resist quantum computer-facilitated attacks and achieves IND-CCA2 security as demonstrated in "Security proof" section. Clearly, Alice can access her data stored in the cloud using any devices by searching for the data with her ID. Moreover, both Alice and Bob can check the integrity of the downloaded data from the cloud. In the event that the data were modified by the cloud storage server or during the communication, Bob can detect this and then proceed to download the corresponding blocks again. Thus, the proposed cloud storage system provides availability, reliability, efficient retrieval and data sharing, even in the post-quantum era.
Code-based public key cryptography (PKC) can potentially be deployed in real-world applications to ensure resilience against quantum computing attacks. Existing code-based PKC schemes are broadly categorized into those based on Hamming metric codes and those based on rank metric codes, and it is believed that the latter generally have smaller public key and ciphertext sizes than the former under the same security level.
In this paper, we presented a new rank metric codes-based public key encryption scheme derived from Lau and Tan's scheme [18], which hence inherits the latter's small public key and ciphertext sizes. However, unlike the LT scheme, our new scheme achieves IND-CCA2 security (the highest security assurance possible), as shown in this paper. We also analyzed its efficiency in terms of the public key size and the ciphertext size. Finally, we presented a use case to demonstrate its utility in practice.
Future work includes constructing more rank metric codes-based cryptographic primitives, such as proxy re-encryption and attribute-based encryption, to achieve other desirable properties in practical applications. We also intend to implement and evaluate a prototype of the proposed/extended scheme in collaboration with a (small) cloud storage service provider.
The datasets used and analysed in this study are available from the corresponding author on reasonable request.
Rathore S, Sharma PK, Park JH (2017) XSSClassifier: an efficient XSS attack detection approach based on machine learning classifier on SNSs. J Inf Process Syst 13(4):1014–1028
Rathore S, Park JH (2018) Semi-supervised learning based distributed attack detection framework for IoT. Appl Soft Comput 72:79–89
Rathore S, Loia V, Park JH (2018) SpamSpotter: an efficient spammer detection framework based on intelligent decision support system on Facebook. Appl Soft Comput 67:920–932
Rathore S, Sangaiah AK, Park JH (2018) A novel framework for internet of knowledge protection in social networking services. J Comput Sci 26:55–65
Kamara S, Lauter K (2010) Cryptographic cloud storage. In International conference on financial cryptography and data security. Springer, Berlin, pp 136–149
Liu Q, Wang G, Wu J (2014) Time-based proxy re-encryption scheme for secure data sharing in a cloud environment. Inf Sci 258:355–370
Mishra B, Jena D (2018) CCA secure proxy re-encryption scheme for secure sharing of files through cloud storage. In 2018 fifth international conference on emerging applications of information technology (EAIT). IEEE, Piscataway, pp 1–6
Li J, Zhang Y, Chen X, Xiang Y (2018) Secure attribute-based data sharing for resource-limited users in cloud computing. Comput & Secur 72:1–12
Liu Z, Li T, Li P, Jia C, Li J (2018) Verifiable searchable encryption with aggregate keys for data sharing system. Future Gener Comput Syst 78:778–788
Chu CK, Chow SSM, Tzeng WG, Zhou J, Deng RH (2014) Key-aggregate cryptosystem for scalable data sharing in cloud storage. IEEE Trans Parallel Distrib Syst 25(2):468–477
Haeger ES, Schurig K, Cenname M, et al. (2015) Crypto proxy for cloud storage services: U.S. Patent 9,137,222[P]. 2015-9-15
Xu L, Wu X, Zhang X (2012) CL-PRE: a certificateless proxy re-encryption scheme for secure data sharing with public cloud. In: Proceedings of the 7th ACM symposium on information, computer and communications security. ACM, New York, pp 87–88
Shor PW (1994) Algorithms for quantum computation: Discrete logarithms and factoring. Proceeding of FOCS 1994, Santa Fe, New Mexico, November 20–22, 1994, pp 124-134
Shor PW (1999) Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Rev 41(2):303–332
Mceliece RJ (1978) A public-key cryptosystem based on algebraic coding theory. Deep Space Netw Prog Rep 42–44(1978):114–116
Bernstein DJ (2009) Introduction to post-quantum cryptography. In: Post-quantum cryptography, Springer, Berlin. pp 1–14
Loidreau P (2017) A new rank metric codes based encryption scheme. In: Lange T, Takagi T (eds.) PQCrypto 2017. LNCS, 10346, pp 3–17
Lau TSC, Tan CH (2018) A new encryption scheme based on rank metric codes. In: Australasian conference on information security and privacy. pp 750–758
Gaborit P, Hauteville A, Phan DH, Tillich J-P (2017) Identity-based encryption from codes with rank metric. In: Katz J, Shacham H (eds.) CRYPTO 2017. LNCS, 10403, pp 194–224
Kobara K, Imai H (2001) Semantically secure McEliece public-key cryptosystems-conversions for McEliece PKC. In: Public Key Cryptography (PKC 2001). LNCS, 1992, pp 19–35
Bellare M, Rogaway P (1994) Optimal asymmetric encryption. In: EUROCRYPT '94. LNCS, 950, pp 92–111
Wang Y (2016) Quantum resistant random linear code based public key encryption scheme RLCE. IEEE Int Symp Inf Theory (ISIT) 2016:2519–2523
Horlemann-Trautmann AL, Marshall K, Rosenthal J (2015) Extension of Overbeck's attack for Gabidulin-based cryptosystems. Des Codes Cryptogr 86(2):1–22
Otmani A, Kalachi HT, Ndjeya S (2018) Improved cryptanalysis of rank metric schemes based on Gabidulin codes. Des Codes Cryptogr 86(9):1983–1996
Gaborit P, Zémor G (2016) On the hardness of the decoding and the minimum distance problems for rank codes. IEEE Trans Inf Theory 62(12):7245–7252
Bernstein DJ (2010) Grover vs. McEliece. Post-Quantum Cryptography 2010. Lecture Notes Comput Sci 6061:73–80
We would like to thank the reviewers for their valuable comments.
The work is supported in part by the National Key R&D Program of China under Grant No. 2017YFB0802300, the NSFC-Zhejiang Joint Fund for the Integration of Industrialization and Informatization under Grant No. U1509219, the Shanghai Natural Science Foundation under Grant No. 17ZR1408400, the National Natural Science Foundation of China under Grant Nos. 61601129, 11701179, the Shanghai Science and Technology Commission Program under Grant No. 18511105700, and the Shanghai Sailing Program under Grant No. 17YF1404300.
Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, Shanghai, China
Peng Zeng
College of Computer Science and Technology, Shanghai University of Electric Power, Shanghai, China
Siyuan Chen
Department of Information Systems and Cyber Security, The University of Texas at San Antonio, San Antonio, TX, 78249, USA
Kim-Kwang Raymond Choo
The authors have contributed significantly to the research work presented in this manuscript. All authors read and approved the final manuscript.
Correspondence to Siyuan Chen.
Zeng, P., Chen, S. & Choo, KK.R. An IND-CCA2 secure post-quantum encryption scheme and a secure cloud storage use case. Hum. Cent. Comput. Inf. Sci. 9, 32 (2019). https://doi.org/10.1186/s13673-019-0193-6
Existence of positive solutions for a semilinear Schrödinger equation in \(\mathbb{R}^{N}\)
Houqing Fang1,2 &
Jun Wang2
Boundary Value Problems volume 2015, Article number: 9 (2015) Cite this article
In this paper, we study the existence of multi-bump solutions for the semilinear Schrödinger equation \(-\Delta u+(1+\lambda a(x))u=(1-\lambda b(x))|u|^{p-2}u\), \(\forall u\in H^{1}(\mathbb{R}^{N})\), where \(N\geq1\), \(2< p<2N/(N-2)\) if \(N\geq3\), \(p>2\) if \(N=2\) or \(N=1\), \(a(x)\in C(\mathbb{R}^{N})\) and \(a(x)>0\), \(b(x)\in C(\mathbb{R}^{N})\) and \(b(x)>0\). For any \(n\in\mathbb{N}\), we prove that there exists \(\lambda(n)>0\) such that, for \(0<\lambda<\lambda(n)\), the equation has an n-bump positive solution. Moreover, the equation has more and more multi-bump positive solutions as \(\lambda\rightarrow0\).
Introduction and main results
In this paper we study the following time independent semilinear Schrödinger equation:
$$ (\mathcal{S}_{\lambda})\quad -\Delta u+\bigl(1+\lambda a(x)\bigr)u=\bigl(1- \lambda b(x)\bigr)|u|^{p-2}u,\quad \forall u\in H^{1}\bigl( \mathbb{R}^{N}\bigr), $$
where \(N\geq1\), \(2< p<2^{*}\), \(2^{*}\) is the critical Sobolev exponent defined by \(2^{*}=\frac{2N}{N-2}\) if \(N\geq3\) and \(2^{*}=\infty\) if \(N=2\) or \(N=1\), and \(\lambda>0\) is a parameter.
This kind of equation arises in many fields of physics. Consider the following nonlinear Schrödinger equation:
$$ i\hbar\frac{\partial\varphi}{\partial t}=-\frac{\hbar^{2}}{2m}\Delta\varphi+N(x) \varphi-f\bigl(x,\vert \varphi \vert \bigr)\varphi, $$
where i is the imaginary unit, Δ is the Laplacian operator, and \(\hbar>0\) is the Planck constant. A standing wave solution of (1.1) is a solution of the form
$$ \varphi(x,t)=u(x)e^{-\frac{iEt}{\hbar}}, \quad u(x)\in\mathbb{R}. $$
Thus, \(\varphi(x,t)\) solves (1.1) if and only if \(u(x)\) solves the equation
$$ -\hbar^{2}\Delta u+V(x)u=g(x,u), $$
where \(V(x)=N(x)-E\) and \(g(x,u)=f(x,|u|)u\). The function V is called the potential of (1.2). If \(g(x,u)=(1-\lambda b(x))|u|^{p-2}u\), then (1.2) can be written as
$$ -\hbar^{2}\Delta u+V(x)u=\bigl(1-\lambda b(x) \bigr)|u|^{p-2}u. $$
If \(\hbar=1\) and \(V(x)=1+\lambda a(x)\), then (1.3) is reduced to (\(\mathcal{S}_{\lambda}\)).
The nonlinear Schrödinger equation (\(\mathcal{S}_{\lambda}\)) models some phenomena in physics, for example, in nonlinear optics, in plasma physics, and in condensed matter physics, and the nonlinear term simulates the interaction effect, called the Kerr effect in nonlinear optics, among a large number of particles; see, for example, [1, 2]. The case of \(p=4\) and \(N=3\) is of particular physical interest, and in this case the equation is called the Gross-Pitaevskii equation; see [3].
The limiting equation of (\(\mathcal{S}_{\lambda}\)) is
$$ -\Delta u+u=|u|^{p-2}u, \quad u\in H^{1}\bigl( \mathbb{R}^{N}\bigr), $$
as \(\lambda\rightarrow0\). It is well known that (1.4) has a unique positive radial solution z, which decays exponentially at ∞. This z will serve as a building block to construct multi-bump solutions of (\(\mathcal{S}_{\lambda}\)). For \(n\in\mathbb{N}\), let \(y_{1},\ldots,y_{n}\in\mathbb{R}^{N}\) be sufficiently separated points. The profile of the function \(\sum_{i=1}^{n}z(x-y_{i})\) resembles n bumps and, accordingly, a solution of (\(\mathcal{S}_{\lambda}\)) which is close to \(\sum_{i=1}^{n}z(x-y_{i})\) in \(H^{1}(\mathbb{R}^{N})\) is called an n-bump solution.
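For later orientation we note that, in the one-dimensional case, this building block is completely explicit (a standard fact recorded here only for illustration): for \(N=1\) and any \(2<p<\infty\),

$$ z(x)=\Bigl(\frac{p}{2}\Bigr)^{\frac{1}{p-2}}\operatorname{sech}^{\frac{2}{p-2}}\Bigl(\frac{p-2}{2}\,x\Bigr), $$

so that, for example, \(z(x)=\sqrt{2}\operatorname{sech}(x)\) when \(p=4\); in particular, z decays like \(e^{-|x|}\), in accordance with the exponential decay just mentioned.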
As we know, multi-bump solutions arise as solutions of (1.2) as \(\hbar\rightarrow0\), under the assumption that V has several critical points; see for example [4–7]. Particularly, in the interesting paper [5], the authors proved that the solutions of (1.2) have several peaks near the point of a maximum of V. These peaks converge to the maximum of V as \(\hbar\rightarrow0\). Actually, there have been enormous studies on the solutions of (1.2) as \(\hbar\rightarrow0\), which exhibit a concentration phenomenon and are called semiclassical states. In the early results, most of the researchers focused on the case \(\inf_{x\in\mathbb{R}^{N}}V(x)>0\) and g is subcritical. Here and in the sequel, we say g is subcritical if \(g(x,u)\leq C|u|^{p-1}\) for \(2\leq p<2^{*}\) with \(2^{*}:=2N/(N-2)\) (\(N\geq3\)), and g is critical or supercritical if \(c_{1}|u|^{2^{*}-1}\leq g(x,u)\leq c_{2}|u|^{2^{*}-1}\) or only \(c_{1}|u|^{2^{*}-1}\leq g(x,u)\) for all large \(|u|\). In the case of \(\inf_{x\in\mathbb{R}^{N}}V(x)>0\), Floer and Weinstein in [8] first considered \(N=1\), \(g(u)=u^{3}\). Using the Lyapunov-Schmidt reduction argument, they proved that the system (1.2) has spike solutions, which concentrate near a nondegenerate critical point of the potential V. This result was extended to the high dimension case with \(N\geq2\) and for \(g(u)=|u|^{p-2}u\) by Oh [7, 9]. If the potential V has a nondegenerate critical point, Rabinowitz [10] obtained the existence result for (1.2) with ħ small, provided that \(0<\inf_{x\in\mathbb{R}^{N}}V(x)<\liminf_{|x|\rightarrow\infty}V(x)\). Using a global variational argument, Del Pino and Felmer [11, 12] established the existence of multi-peak solutions having exactly k maximum points provided that there are k disjoint open bounded sets \(\Omega_{i}\) such that \(\inf_{x\in\partial\Omega_{i}}V(x)>\inf_{x\in\Omega_{i}}V(x)\), each \(\Omega_{i}\) having one peak concentrating at its bottom. For the subcritical case, Refs. [1, 6, 13–15] also proved that the solutions of (1.2) are concentrated at critical points of V. There have also been recent results on the existence of solutions concentrating on manifolds; for instance, see [16–18] and the references therein.
If g is subcritical, Refs. [19, 20] first obtained the semiclassical solutions of (1.2) with critical frequency, i.e., \(\inf_{x\in\mathbb{R}^{N}}V(x)=0\). They exhibited new concentration phenomena for bound states and their results were extended and generalized in [3, 21, 22]. Later, if \(\inf_{x\in\mathbb{R}^{N}}V(x)=0\), Ding and Lin [23] obtained semiclassical states of (1.2) when the nonlinearity g is of the critical case. Recently, if the potentials V change sign, that is, \(\inf_{x\in\mathbb{R}^{N}}V(x)<0\), Refs. [24, 25] proved that the system (1.2) has semiclassical states.
Some researchers have also obtained multi-bump solutions for the equation
$$ -\Delta u+V(x)u=f(x,u), \quad u\in H^{1}\bigl( \mathbb{R}^{N}\bigr), $$
where V and f are \(T_{i}\) periodic in \(x_{i}\). Coti Zelati and Rabinowitz [26] first constructed multi-bump solutions for the Schrödinger equation (1.5). The building blocks are one-bump solutions at the mountain pass level and the existence of such solutions as well as multi-bump solutions is guaranteed by a nondegeneracy assumption of the solutions near the mountain pass level. Later, under the same nondegeneracy assumption, Coti Zelati and Rabinowitz in [27] constructed multi-bump solutions for periodic Hamiltonian systems. Multi-bump solutions have also been obtained for asymptotically periodic Schrödinger equations by Alama and Li [28]. For subsequent studies in this direction, for example, see [29–35] and the references therein. Recently, Refs. [36–38] also proved the existence of multi-bump solutions in other elliptic equations.
In this paper, we are interested in constructing multi-bump solutions of (\(\mathcal{S}_{\lambda}\)) with λ small enough. Similar results have been obtained in [39, 40] for the equations
$$ -\Delta u+\bigl(1+\lambda a(x)\bigr)u=|u|^{p-2}u, \quad u \in H^{1}\bigl(\mathbb{R}^{N}\bigr) $$
and
$$ -\Delta u+u=\bigl(1-\lambda a(x)\bigr)|u|^{p-2}u,\quad u\in H^{1}\bigl(\mathbb{R}^{N}\bigr). $$
To state the main result for (\(\mathcal{S}_{\lambda}\)), we need the following conditions on the functions a and b:
(\(\mathcal{R}_{1}\)):
\(a(x)>0\) and \(a(x) \in C(\mathbb{R}^{N})\), \(b(x)>0\) and \(b(x) \in C(\mathbb{R}^{N})\), and
$$ \lim_{|x|\rightarrow\infty}a(x)=\lim_{|x|\rightarrow\infty}b(x)=0. $$
(\(\mathcal{R}_{2}\)):
One of the following holds: (i) \(\lim_{|x|\rightarrow\infty}\frac{\ln(a(x))}{|x|}=0\); (ii) \(\lim_{|x|\rightarrow\infty}\frac{\ln(b(x))}{|x|}=0\).
Suppose that the assumptions (\(\mathcal{R}_{1}\)) and (\(\mathcal {R}_{2}\)) hold. Then for any positive integer n there exists \(\lambda(n)>0\) such that, for \(0<\lambda<\lambda(n)\), the system (\(\mathcal{S}_{\lambda}\)) has an n-bump positive solution. As a consequence, for any positive integer n, there exists \(\lambda_{1}(n)>0\) such that, for \(0<\lambda<\lambda_{1}(n)\), the system (\(\mathcal{S}_{\lambda}\)) has at least n positive solutions.
Similar to [39, 40], the solutions in Theorem 1.1 do not concentrate near any point in the space. Instead, the bumps of the solutions we obtain are separated far apart and the distance between any pair of bumps goes to infinity as \(\lambda\rightarrow0\). The size of each bump does not shrink and is fixed as \(\lambda\rightarrow0\). This is in sharp contrast to the concentration phenomenon described above. This phenomenon has been observed by D'Aprile and Wei in [41] for a Maxwell-Schrödinger system.
We shall use the variational reduction method to prove the main results. Our argument is partially inspired by [39–42]. This paper is organized as follows. In Section 2, preliminary results are revisited. We prove Theorem 1.1 in Section 3.
Some preliminary works
Variational framework
In this section, we shall establish a variational framework for the system (\(\mathcal{S}_{\lambda}\)). For convenience of notation, let C and \(C_{i}\) denote various positive constants which may vary even within the same line. In the Hilbert space \(H^{1}(\mathbb{R}^{N})\), we shall use the usual inner product,
$$ \langle u,v\rangle=\int_{\mathbb{R}^{N}}\nabla u\cdot\nabla v+uv, $$
and the induced norm \(\|u\|=\langle u,u\rangle^{\frac{1}{2}}\). Let \(|\cdot|_{q}\) denote the usual \(L^{q}(\mathbb{R}^{N})\)-norm and \((\cdot, \cdot)_{2}\) be the usual \(L^{2}(\mathbb{R}^{N})\)-inner product. Let \(n\in N\). We shall use \(\sum_{i< j}\) and \(\sum_{i\neq j}\) to represent summation over all subscripts i and j satisfying \(1\leq i< j\leq n\) and \(1\leq i\neq j\leq n\), respectively. Let us first introduce some basic inequalities which will be used later.
The following four lemmas are taken from [39, 40].
For \(q>1\), there exists \(C>0\) such that, for any real numbers a and b,
$$ \bigl\vert |a+b|^{q}-|a|^{q}-|b|^{q}\bigr\vert \leq C|a|^{q-1}|b|+C|b|^{q-1}|a|. $$
For \(q\geq2\), there exists \(C>0\) such that, for any \(a>0\) and \(b\in\mathbb{R}\),
$$ \bigl\vert |a+b|^{q}-a^{q}-qa^{q-1}b\bigr\vert \leq C \bigl(a^{q-2}|b|^{2}+|b|^{q} \bigr). $$
For \(q\geq2\), \(n\in N\), and \(a_{i}\geq0\), \(i=1,\ldots,n\),
$$ \Biggl(\sum_{i=1}^{n}a_{i} \Biggr)^{q}\geq\sum_{i=1}^{n}a_{i}^{q}+(q-1) \sum_{i\neq j}^{n}a_{i}^{q-1}a_{j} $$
$$ \Biggl(\sum_{i=1}^{n}a_{i} \Biggr)^{q}\geq\sum_{i=1}^{n}a_{i}^{q}+q \sum_{1\leq i< j\leq n}^{n}a_{i}^{q-1}a_{j}. $$
For \(q\geq2\), there exists \(C>0\) such that, for any \(a_{i}\geq0\), \(i = 1,\ldots,n\),
$$ \Biggl[ \Biggl(\sum_{i=1}^{n}a_{i} \Biggr)^{q-1}-\sum_{i=1}^{n}a_{i}^{q-1} \Biggr]^{\frac{q}{q-1}}\leq C\sum_{i\neq j}a_{i}^{q-1}a_{j}. $$
Recall that, for \(2< p<2^{*}\), the unique positive solution of the equation
$$ -\Delta u+u=|u|^{p-2}u, \quad u\in H^{1}\bigl( \mathbb{R}^{N}\bigr) $$
has the following properties; see, for example, [40, 43–45].
If \(2< p<2^{*}\), then every positive solution of (2.1) has the form \(z_{y}:=z(\cdot-y)\) for some \(y\in\mathbb{R}^{N}\), where \(z\in C^{\infty}(\mathbb{R}^{N})\) is the unique positive radial solution of (2.1) which satisfies, for some \(c>0\),
$$ z(r)r^{\frac{N-1}{2}}e^{r}\rightarrow c>0,\qquad z'(r)r^{\frac{N-1}{2}}e^{r} \rightarrow-c<0, \quad \textit{as } r=|x|\rightarrow\infty. $$
Furthermore, if \(\beta_{1}\leq\cdots\leq\beta_{n}\leq\cdots\) are the eigenvalues of the problem
$$ -\Delta v+v=\beta z^{p-2}v, \quad v\in H^{1} \bigl(\mathbb{R}^{N}\bigr), $$
then \(\beta_{1}=1\), \(\beta_{2}=p-1\), and the eigenspaces corresponding to \(\beta_{1}\) and \(\beta_{2}\) are spanned by z and \(\{\partial z/\partial x_{\alpha}\mid \alpha=1,\ldots,N\}\), respectively.
We shall use \(z_{y}\) as building blocks to construct multi-bump solutions of (\(\mathcal{S}_{\lambda}\)). For \(y_{i}, y_{j}\in\mathbb{R}^{N}\), the identity
$$ \int_{\mathbb{R}^{N}}z_{y_{i}}^{p-1}z_{y_{j}} = \langle z_{y_{i}},z_{y_{j}}\rangle=\int_{\mathbb{R}^{N}}z_{y_{i}}z_{y_{j}}^{p-1} $$
will be frequently used in the sequel. The following lemma is a consequence of Lemma 2.4 in [46] (see also Lemma II.2 of [47]).
There exists a positive constant \(c>0\) such that, as \(|y_{i}-y_{j}|\rightarrow\infty\),
$$ \int_{\mathbb{R}^{N}}z_{y_{i}}^{p-1}z_{y_{j}}\sim c|y_{i}-y_{j}|^{-\frac{N-1}{2}}e^{-|y_{i}-y_{j}|}. $$
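As a quick numerical illustration of this asymptotic behaviour (it is not needed for the proofs), take \(N=1\) and \(p=4\), so that \(z(x)=\sqrt{2}\operatorname{sech}(x)\) and \((N-1)/2=0\). The short script below, a sketch assuming only NumPy, shows that \(e^{d}\int_{\mathbb{R}}z^{p-1}(x)z(x-d)\,dx\) settles to a constant as the separation d grows, in agreement with the asymptotics above.

```python
import numpy as np

p = 4
x = np.linspace(-60.0, 60.0, 240001)          # wide grid so both tails are resolved
dx = x[1] - x[0]
z = np.sqrt(2.0) / np.cosh(x)                 # 1D ground state of -z'' + z = z^{p-1}

for d in (5.0, 10.0, 15.0, 20.0):             # d plays the role of |y_i - y_j|
    zd = np.sqrt(2.0) / np.cosh(x - d)        # translated bump z(x - d)
    I = np.sum(z ** (p - 1) * zd) * dx        # interaction integral
    print(f"d = {d:4.1f}   I(d) = {I:.3e}   I(d) * e^d = {I * np.exp(d):.4f}")
# The last column is essentially constant, illustrating I(d) ~ c e^{-d} for N = 1.
```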
For \(h>0\), \(n\geq2\), and \(n\in\mathbb{N}\), define
$$ \mathcal {D}_{h}=\bigl\{ (y_{1},\ldots,y_{n})\in \bigl(\mathbb{R}^{N}\bigr)^{n}\mid|y_{i}-y_{j}|>h \text{ for } i\neq j\bigr\} . $$
For convenience, we make the convention
$$ \mathcal{D}_{h}=\mathbb{R}^{N},\quad \text{if } n=1. $$
For \(y=(y_{1},\ldots,y_{n})\in\mathcal{D}_{h}\), denote
$$\begin{aligned}& u_{y}(x)=\sum_{i=1}^{n}z(x-y_{i}), \\& \mathcal{T}_{y}= \biggl\{ \frac{\partial z(\cdot-y_{i})}{\partial x_{\alpha}}\Bigm|\alpha=1,\ldots,N, i=1, \ldots,n \biggr\} \end{aligned}$$
$$ \mathcal{W}_{y}= \bigl\{ v\in H^{1}\bigl( \mathbb{R}^{N}\bigr)\mid\langle v,u\rangle=0, \forall u\in \mathcal{T}_{y} \bigr\} . $$
Then \(H^{1}(\mathbb{R}^{N})=\mathcal{T}_{y}\oplus\mathcal{W}_{y}\). Set \(P_{\lambda}(x)=1-\lambda b(x)\), \(V_{\lambda}(x)=1+\lambda a(x)\), \(\mathcal{N}_{\lambda}=(p-1)(-\Delta+V_{\lambda})^{-1}\), and \(\mathcal{N}_{0}=\mathcal{N}\). For \(y\in\mathcal{D}_{h}\) and \(\varphi\in H^{1}(\mathbb{R}^{N})\), define
$$ \mathcal{K}_{y}\varphi=\varphi-\sum_{i=1}^{n} \mathcal {N}\bigl(z^{p-2}(\cdot-y_{i})\varphi\bigr)+\sum _{i=1}^{n}L_{i}\varphi, $$
where
$$ \sum_{i=1}^{n}L_{i}\varphi=\sum _{i\neq j}\sum_{\alpha=1}^{N} \biggl\langle \mathcal {N}\bigl(z^{p-2}(\cdot-y_{j})\varphi \bigr),\frac{\partial z(\cdot-y_{i})}{\partial x_{\alpha}} \biggr\rangle \biggl\Vert \frac{\partial z(\cdot-y_{i})}{\partial x_{\alpha}}\biggr\Vert ^{-2}\frac{\partial z(\cdot-y_{i})}{\partial x_{\alpha}}. $$
Note that \(\mathcal{K}_{y}|_{\mathcal{W}_{y}}: \mathcal {W}_{y}\rightarrow\mathcal{W}_{y}\) has the form of the identity plus a compact operator.
(See Lemma 2.3 of [40])
If \(h\rightarrow\infty\), then
$$ |u_{y}|^{p-2}-\sum_{i=1}^{n}z^{p-2}( \cdot-y_{i})\rightarrow0 $$
in \(L^{p/(p-2)}(\mathbb{R}^{N})\) uniformly in \(y\in\mathcal {D}_{h}\).
Let \(u, v\in H^{1}(\mathbb{R}^{N})\). If \(v\rightarrow0\), then
$$ |u+v|^{p-1}-|u|^{p-2}\rightarrow0 $$
in \(L^{p/(p-2)}(\mathbb{R}^{N})\) uniformly in u in any bounded set.
There exist \(h_{0}>0\) and \(\eta_{0}>0\) such that, for \(h>h_{0}\) and \(y\in\mathcal {D}_{h}\), \(\mathcal{K}_{y}|_{\mathcal{W}_{y}}: \mathcal {W}_{y}\rightarrow\mathcal{W}_{y}\) is invertible and
$$ \bigl\Vert (\mathcal{K}_{y}|_{\mathcal {W}_{y}})^{-1}\bigr\Vert \leq\eta_{0}. $$
Lemma 2.10
Let \(v\in H^{1}(\mathbb{R}^{N})\). If \(\lambda\rightarrow0\), \(v\rightarrow0\), and \(h\rightarrow\infty\), then
$$ \sup_{y\in\mathcal{D}_{h},\varphi\in H^{1}(\mathbb{R}^{N}),\|\varphi\|=1}\bigl\Vert \mathcal {K}_{y}\varphi- \bigl(\varphi-\mathcal {N}_{\lambda}\bigl(P_{\lambda}|u_{y}+v|^{p-2} \varphi\bigr)\bigr)\bigr\Vert \rightarrow0 $$
$$ \sup_{y\in\mathcal{D}_{h},\varphi\in H^{1}(\mathbb{R}^{N}),\|\varphi\|=1}\bigl\Vert \mathcal {K}_{y}\varphi- \bigl(\varphi-\mathcal {N}\bigl(P_{\lambda}|u_{y}+v|^{p-2} \varphi\bigr)\bigr)\bigr\Vert \rightarrow0. $$
By the definition of \(\mathcal{K}_{y}\), one has
$$\begin{aligned} \mathcal{K}_{y}\varphi-\bigl(\varphi-\mathcal {N}_{\lambda}\bigl(P_{\lambda}|u_{y}+v|^{p-2}\varphi \bigr)\bigr) =&\mathcal {N}_{\lambda}\bigl(|u_{y}+v|^{p-2} \varphi\bigr)-\sum_{j=1}^{n}\mathcal {N} \bigl(z^{p-2}(\cdot-y_{i})\varphi\bigr) \\ &{} -\lambda\mathcal {N}_{\lambda}\bigl(b(x)|u_{y}+v|^{p-2} \varphi\bigr)+\sum_{i=1}^{n}L_{i} \varphi. \end{aligned}$$
Obviously, \(\mathcal{N}_{\lambda}\rightarrow\mathcal{N}\) in \(\mathcal{L}(L^{\frac{p}{p-1}}(\mathbb{R}^{N}), H^{1}(\mathbb{R}^{N}))\) as \(\lambda\rightarrow0\). Therefore, if \(\lambda\rightarrow0\), \(v\in H^{1}(\mathbb{R}^{N})\) with \(v\rightarrow0\), and \(h\rightarrow\infty\), then for \(\psi,\varphi \in H^{1}(\mathbb{R}^{N})\), and uniformly in \(y\in\mathcal{D}_{h}\),
$$\begin{aligned}& \Biggl\vert \Biggl\langle \mathcal {N}_{\lambda} \bigl(|u_{y}+v|^{p-2}\varphi\bigr)-\sum _{j=1}^{n}\mathcal {N}\bigl(z^{p-2}( \cdot-y_{i})\varphi\bigr),\psi \Biggr\rangle \Biggr\vert \\& \quad =\bigl\vert \bigl\langle (\mathcal{N}_{\lambda}-\mathcal {N}) \bigl(|u_{y}+v|^{p-2}\varphi\bigr),\psi \bigr\rangle \bigr\vert +\bigl\vert \bigl\langle \mathcal {N}\bigl(\bigl(|u_{y}+v|^{p-2}-|u_{y}|^{p-2} \bigr)\varphi\bigr),\psi \bigr\rangle \bigr\vert \\& \qquad {}+\Biggl\vert \Biggl\langle \mathcal {N}\Biggl(\Biggl(|u_{y}|^{p-2}- \sum_{j=1}^{n}z^{p-2}( \cdot-y_{j})\Biggr)\varphi\Biggr),\psi \Biggr\rangle \Biggr\vert \\& \quad \leq\bigl\Vert (\mathcal{N}_{\lambda}-\mathcal {N}) \bigl(|u_{y}+v|^{p-2}\varphi\bigr)\bigr\Vert \|\psi\|+C\bigl\vert \bigl(|u_{y}+v|^{p-2}-|u_{y}|^{p-2} \bigr)\bigr\vert _{L^{\frac {p}{p-2}}(\mathbb{R}^{N})} \|\varphi\|\|\psi\| \\& \qquad {}+C\Biggl\vert \Biggl(|u_{y}|^{p-2}-\sum _{j=1}^{n}z^{p-2}(\cdot -y_{j}) \Biggr)\Biggr\vert _{L^{\frac{p}{p-2}}(\mathbb{R}^{N})} \|\varphi\|\|\psi\| \\& \quad \rightarrow0, \end{aligned}$$
as a consequence of Lemmas 2.7 and 2.8. Moreover, by Lemma 2.6, for \(|y_{i}-y_{j}|\rightarrow\infty\) (\(i\neq j\)), one sees that
$$ \sup_{y\in\mathcal {D}_{h}}\Biggl\Vert \sum _{i=1}^{n}L_{i}\varphi\Biggr\Vert \rightarrow0. $$
For \(\psi,\varphi\in H^{1}(\mathbb{R}^{N})\),
$$\begin{aligned} \lambda\bigl\vert \bigl\langle \mathcal {N}_{\lambda} \bigl(b(x)|u_{y}+v|^{p-2}\varphi\bigr),\psi \bigr\rangle \bigr\vert \leq&\lambda\bigl\vert \bigl\langle (\mathcal{N}_{\lambda}-\mathcal {N}) \bigl(b(x)|u_{y}+v|^{p-2}\varphi\bigr),\psi\bigr\rangle \bigr\vert \\ &{} +\lambda\bigl\vert \bigl\langle \mathcal{N}\bigl(b(x)|u_{y}+v|^{p-2} \varphi\bigr),\psi\bigr\rangle \bigr\vert \\ \leq& c\lambda\bigl\Vert (\mathcal{N}_{\lambda}-\mathcal {N}) \bigl(|u_{y}+v|^{p-2}\varphi\bigr)\bigr\Vert \|\psi\| \\ &{} +c\lambda\|u_{y}+v\|\|\varphi\|\|\psi\| \\ \rightarrow&0, \end{aligned}$$
as \(\lambda\rightarrow0\). We infer from (2.3)-(2.6) that, if \(\lambda\rightarrow0\), \(v\in H^{1}(\mathbb{R}^{N})\) with \(v\rightarrow0\), and \(h\rightarrow\infty\),
$$ \sup_{y\in\mathcal{D}_{h},\varphi\in H^{1}(\mathbb{R}^{N}),\|\varphi\|=1}\bigl\Vert \mathcal {K}_{y}\varphi- \bigl(\varphi-\mathcal {N}_{\lambda}\bigl(P_{\lambda}|u_{y}+v|^{p-2} \varphi\bigr)\bigr)\bigr\Vert \rightarrow0. $$
By similar arguments, one can easily obtain the second conclusion of this lemma. □
Clearly, the energy functional corresponding to the system (\(\mathcal{S}_{\lambda}\)) is defined by
$$ \Phi_{\lambda}(u)=\frac{1}{2}\int_{\mathbb{R}^{N}}\bigl(| \nabla u|^{2}+V_{\lambda}|u|^{2}\bigr)-\frac{1}{p} \int_{\mathbb {R}^{N}}P_{\lambda}|u|^{p}\quad \text{for } u \in H^{1}\bigl(\mathbb{R}^{N}\bigr), $$
where \(V_{\lambda}=(1+\lambda a(x))\) and \(P_{\lambda}=(1-\lambda b(x))\). It is easy to see that the critical points of \(\Phi_{\lambda}\) are solutions of (\(\mathcal{S}_{\lambda}\)). In the following, we shall use a Lyapunov-Schmidt reduction argument to find critical points of \(\Phi_{\lambda}\). The first step is to reduce the problem of finding critical points of \(\Phi_{\lambda}\) to a finite-dimensional problem; this reduction is carried out in the following two lemmas.
There exist \(\lambda_{0}>0\) and \(H_{0}>0\) such that, for \(0<\lambda<\lambda_{0}\) and \(h>H_{0}\), there exists a \(C^{1}\)-map
$$ v_{h,\lambda}:\mathcal{D}_{h}\rightarrow H^{1}\bigl( \mathbb{R}^{N}\bigr), $$
depending on h and λ, such that
for any \(y\in\mathcal{D}_{h}\), \(v_{h,\lambda,y}\in \mathcal {W}_{y}\);
for any \(y\in\mathcal{D}_{h}\), \(\mathcal {P}_{y}\nabla\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})=0\), where \(\mathcal {P}_{y}:H^{1}(\mathbb{R}^{N})\rightarrow\mathcal{W}_{y}\) is the orthogonal projection onto \(\mathcal{W}_{y}\);
\(\lim_{\lambda\rightarrow0,h\rightarrow\infty}\| v_{h,\lambda,y}\|=0\) uniformly in \(y\in\mathcal{D}_{h}\); \(\lim_{|y|\rightarrow\infty}\|v_{h,\lambda,y}\|=0\) if \(n=1\).
Decreasing \(\lambda_{0}\) and increasing \(H_{0}\) if necessary, we have the following result.
For \(0<\lambda<\lambda_{0}\) and \(h>H_{0}\), if \(y^{0}=(y_{1}^{0},\ldots,y_{n}^{0})\) is a critical point of \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\), then \(u_{y^{0}}+v_{h,\lambda,y^{0}}\) is a critical point of \(\Phi_{\lambda}\).
Using Lemmas 2.9 and 2.10, repeating the arguments of Lemmas 2.6 and 2.7 in [40], one can easily prove Lemmas 2.11 and 2.12.
Estimates on \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\) and \(v_{h,\lambda,y}\)
In order to prove Theorem 1.1 in the next section, we first need to estimate \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\) and \(v_{h,\lambda,y}\). Denote \(c_{0}=\Phi_{0}(z)\), where \(\Phi_{0}\) is the functional \(\Phi_{\lambda}\) with \(\lambda=0\). Then
$$ c_{0}=\Phi_{0}(z)=\frac{1}{2}\int _{\mathbb{R}^{N}}\bigl(|\nabla z|^{2}+|z|^{2}\bigr)- \frac{1}{p}\int_{\mathbb{R}^{N}}|z|^{p}. $$
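For orientation, this constant is explicit in the one-dimensional model case: for \(N=1\) and \(p=4\), the solution \(z(x)=\sqrt{2}\operatorname{sech}(x)\) satisfies \(\int_{\mathbb{R}}|z|^{2}=4\), \(\int_{\mathbb{R}}|\nabla z|^{2}=\frac{4}{3}\), and \(\int_{\mathbb{R}}|z|^{p}=\frac{16}{3}\), so that

$$ c_{0}=\frac{1}{2}\biggl(\frac{4}{3}+4\biggr)-\frac{1}{4}\cdot\frac{16}{3}=\frac{4}{3}. $$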
In the following, we first estimate \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\). Note that
$$\begin{aligned} \Phi_{\lambda}(u_{y}+v_{h,\lambda,y}) =& \frac{1}{2}\int_{\mathbb {R}^{N}}|\nabla u_{y}+\nabla v_{h,\lambda,y}|^{2}+\frac{1}{2}\int_{\mathbb{R}^{N}} \bigl(1+\lambda a(x)\bigr)|u_{y}+v_{h,\lambda,y}|^{2} \\ &{} -\frac{1}{p}\int_{\mathbb{R}^{N}}|u_{y}+v_{h,\lambda,y}|^{p} +\frac{\lambda}{p}\int_{\mathbb{R}^{N}}b(x)|u_{y}+v_{h,\lambda,y}|^{p}. \end{aligned}$$
A direct computation shows that
$$\begin{aligned}& \Phi_{\lambda}(u_{y}+v_{h,\lambda,y}) \\& \quad = \frac{1}{2}\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}\bigl\vert \nabla z(x-y_{i})\bigr\vert ^{2}+\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}\nabla z(x-y_{i})\cdot\nabla(v_{h,\lambda,y})+ \frac{1}{2}\int_{\mathbb {R}^{N}}\bigl\vert \nabla(v_{h,\lambda,y}) \bigr\vert ^{2} \\& \qquad {} +\sum_{i< j}\int_{\mathbb{R}^{N}}\nabla z(x-y_{i})\cdot\nabla z(x-y_{j})+\frac{1}{2}\sum _{i=1}^{n}\int_{\mathbb{R}^{N}}\bigl\vert z(x-y_{i})\bigr\vert ^{2} +\frac{1}{2}\int _{\mathbb{R}^{N}}|v_{h,\lambda,y}|^{2} \\& \qquad {} +\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}z(x-y_{i})\cdot v_{h,\lambda,y}+\sum _{i<j}\int_{\mathbb{R}^{N}}z(x-y_{i})\cdot z(x-y_{j}) \\& \qquad {} +\frac{\lambda}{2}\int_{\mathbb{R}^{N}}a(x)u_{y}^{2}+ \lambda \int_{\mathbb{R}^{N}}a(x)u_{y}v_{h,\lambda,y} + \frac{\lambda}{2}\int_{\mathbb{R}^{N}}a(x) (v_{h,\lambda,y})^{2} \\& \qquad {} -\frac{1}{p}\int_{\mathbb{R}^{N}}|u_{y}+v_{h,\lambda,y}|^{p} +\frac{\lambda}{p}\int_{\mathbb{R}^{N}}b(x)|u_{y}+v_{h,\lambda,y}|^{p}. \end{aligned}$$
By Lemma 2.11, we may assume that \(\|v_{h,\lambda,y}\|\leq1\). Taking \(a=u_{y}\) and \(b=v_{h,\lambda,y}\) in Lemma 2.2, we have
$$ \frac{1}{p}\int_{\mathbb{R}^{N}}|u_{y}+v_{h,\lambda,y}|^{p}= \frac {1}{p}\int_{\mathbb{R}^{N}}|u_{y}|^{p} + \int_{\mathbb{R}^{N}}(u_{y})^{p-1}v_{h,\lambda,y}+O \bigl(\|v_{h,\lambda ,y}\|^{2}\bigr) $$
$$ \frac{1}{p}\int_{\mathbb{R}^{N}}b(x)|u_{y}+v_{h,\lambda ,y}|^{p}= \frac{1}{p}\int_{\mathbb{R}^{N}}b(x)|u_{y}|^{p} +\int_{\mathbb{R}^{N}}b(x) (u_{y})^{p-1}v_{h,\lambda,y}+O \bigl(\| v_{h,\lambda,y}\|^{2}\bigr). $$
Here and in what follows, \(O(\|v_{h,\lambda,y}\|^{2})\) satisfies
$$ \bigl\vert O\bigl(\|v_{h,\lambda,y}\|^{2}\bigr)\bigr\vert \leq C \|v_{h,\lambda,y}\|^{2} $$
for some positive constant C independent of h, λ, y. Therefore, substituting (2.9) and (2.10) into (2.8), it follows that
$$\begin{aligned}& \Phi_{\lambda}(u_{y}+v_{h,\lambda,y}) \\& \quad = \frac{1}{2}\sum _{i=1}^{n}\int_{\mathbb{R}^{N}}\bigl\vert \nabla z(x-y_{i})\bigr\vert ^{2}+\sum _{i=1}^{n}\int_{\mathbb{R}^{N}}\nabla z(x-y_{i})\cdot\nabla(v_{h,\lambda,y})+\frac{1}{2}\int _{\mathbb {R}^{N}}\bigl\vert \nabla(v_{h,\lambda,y})\bigr\vert ^{2} \\& \qquad {} +\sum_{i< j}\int_{\mathbb{R}^{N}}\nabla z(x-y_{i})\cdot\nabla z(x-y_{j})+\frac{1}{2}\sum _{i=1}^{n}\int_{\mathbb{R}^{N}}\bigl\vert z(x-y_{i})\bigr\vert ^{2} +\frac{1}{2}\int _{\mathbb{R}^{N}}|v_{h,\lambda,y}|^{2} \\& \qquad {} +\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}z(x-y_{i})\cdot v_{h,\lambda,y}+\sum _{i<j}\int_{\mathbb{R}^{N}}z(x-y_{i})\cdot z(x-y_{j}) \\& \qquad {} +\frac{\lambda}{2}\int_{\mathbb{R}^{N}}a(x)u_{y}^{2}+ \lambda \int_{\mathbb{R}^{N}}a(x)u_{y}v_{h,\lambda,y} + \frac{\lambda}{2}\int_{\mathbb{R}^{N}}a(x) (v_{h,\lambda,y})^{2} \\& \qquad {} -\frac{1}{p}\int_{\mathbb{R}^{N}}(u_{y})^{p} -\int_{\mathbb{R}^{N}}(u_{y})^{p-1}v_{h,\lambda,y} + \frac{\lambda}{p}\int_{\mathbb{R}^{N}}b(x) (u_{y})^{p} \\& \qquad {} +\int_{\mathbb{R}^{N}}b(x) (u_{y})^{p-1}v_{h,\lambda,y}+O \bigl(\| v_{h,\lambda,y}\|^{2}\bigr). \end{aligned}$$
Denote
$$\begin{aligned} \mathcal{K}_{y} =&-\sum_{i=1}^{n} \int_{\mathbb{R}^{N}}\nabla z(x-y_{i})\cdot \nabla(v_{h,\lambda,y})-\frac{1}{2}\int_{\mathbb {R}^{N}}\bigl\vert \nabla(v_{h,\lambda,y})\bigr\vert ^{2} \\ &{}-\sum _{i< j}\int_{\mathbb {R}^{N}}\nabla z(x-y_{i}) \cdot\nabla z(x-y_{j}) \\ &{} -\frac{1}{2}\int_{\mathbb{R}^{N}}(v_{h,\lambda,y})^{2} -\sum_{i=1}^{n}\int_{\mathbb{R}^{N}}z(x-y_{i}) \cdot v_{h,\lambda,y}-\sum_{i<j}\int _{\mathbb{R}^{N}}z(x-y_{i})\cdot z(x-y_{j}) \\ &{} -\lambda\int_{\mathbb{R}^{N}}a(x)u_{y}v_{h,\lambda,y} - \frac{\lambda}{2}\int_{\mathbb{R}^{N}}a(x) (v_{h,\lambda,y})^{2} +\frac{1}{p}\int_{\mathbb{R}^{N}}(u_{y})^{p} +\int_{\mathbb{R}^{N}}(u_{y})^{p-1}v_{h,\lambda,y} \\ &{} -\lambda\int_{\mathbb{R}^{N}}b(x) (u_{y})^{p-1}v_{h,\lambda,y} -\frac{1}{p}\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}z^{p}(x-y_{i}) +O\bigl( \|v_{h,\lambda,y}\|^{2}\bigr). \end{aligned}$$
$$ \Phi_{\lambda}(u_{y}+v_{h,\lambda,y})=nc_{0}+ \frac{\lambda }{2}\int_{\mathbb{R}^{N}}a(x)u_{y}^{2} +\frac{\lambda}{p}\int_{\mathbb{R}^{N}}b(x)u_{y}^{p}- \mathcal {K}_{y}. $$
Thus, in order to estimate the functional \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\), it suffices to get the estimations for \(\mathcal{K}_{y}\). Since
$$ \int_{\mathbb{R}^{N}}\nabla z(x-y_{i})\cdot\nabla v+\int _{\mathbb{R}^{N}}z(x-y_{i})v=\int_{\mathbb {R}^{N}}z^{p-1}(x-y_{i})v, \quad \forall v\in H^{1}\bigl(\mathbb{R}^{N}\bigr), $$
\(\mathcal{K}_{y}\) can be rewritten as
$$\begin{aligned} \mathcal{K}_{y} =&-\sum_{i=1}^{n} \int_{\mathbb{R}^{N}} z^{p-1}(x-y_{i})v_{h,\lambda,y}- \sum_{i< j}\int_{\mathbb{R}^{N}} z^{p-1}(x-y_{i})z(x-y_{j}) \\ &{} -\lambda\int_{\mathbb{R}^{N}}a(x)u_{y}v_{h,\lambda,y} - \frac{\lambda}{2}\int_{\mathbb{R}^{N}}a(x) (v_{h,\lambda,y})^{2} +\frac{1}{p}\int_{\mathbb{R}^{N}}(u_{y})^{p} +\int_{\mathbb{R}^{N}}(u_{y})^{p-1}v_{h,\lambda,y} \\ &{} -\lambda\int_{\mathbb{R}^{N}}b(x) (u_{y})^{p-1}v_{h,\lambda,y} -\frac{1}{p}\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}z^{p}(x-y_{i}) +O\bigl( \|v_{h,\lambda,y}\|^{2}\bigr)+\lambda O\bigl(\|v_{h,\lambda,y} \|^{2}\bigr). \end{aligned}$$
Moreover, by the Hölder inequality one has
$$\begin{aligned} \lambda\biggl\vert \int_{\mathbb{R}^{N}}a(x)u_{y}v_{h,\lambda,y} \biggr\vert \leq&\lambda C\biggl(\int_{\mathbb{R}^{N}}a(x)u_{y}^{2} \biggr)^{\frac{1}{2}}\|v_{h,\lambda ,y}\| \\ \leq& C\lambda^{2}\int _{\mathbb{R}^{N}}a(x)u_{y}^{2}+C\|v_{h,\lambda,y} \|^{2} \end{aligned}$$
$$\begin{aligned} \lambda\biggl\vert \int_{\mathbb{R}^{N}}b(x) (u_{y})^{p-1}v_{h,\lambda,y} \biggr\vert \leq& C\lambda\biggl(\int_{\mathbb{R}^{N}}b(x)u_{y}^{p} \biggr)^{\frac{p-1}{p}}\| v_{h,\lambda,y}\| \\ \leq& C\lambda^{2}\int _{\mathbb{R}^{N}}b(x)u_{y}^{p}+C\|v_{h,\lambda,y} \|^{2}. \end{aligned}$$
Therefore, we have
$$\begin{aligned} \mathcal {K}_{y} =&\int_{\mathbb{R}^{N}}(u_{y})^{p-1}v_{h,\lambda,y}- \sum_{i=1}^{n}\int_{\mathbb{R}^{N}} z^{p-1}(x-y_{i})v_{h,\lambda,y}-\sum _{i< j}\int_{\mathbb{R}^{N}} z^{p-1}(x-y_{i})z(x-y_{j}) \\ &{} +\frac{1}{p}\int_{\mathbb{R}^{N}}(u_{y})^{p} -\frac{1}{p}\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}z^{p}(x-y_{i}) +O\bigl( \|v_{h,\lambda,y}\|^{2}\bigr)+\lambda O\bigl(\|v_{h,\lambda,y} \|^{2}\bigr) \\ &{} +O \biggl(\lambda^{2} \biggl(\int_{\mathbb{R}^{N}}a(x)u_{y}^{2} +\int_{\mathbb{R}^{N}}b(x)u_{y}^{p} \biggr) \biggr). \end{aligned}$$
There exist \(h_{0}>0\), \(\lambda_{0}>0\), and \(C_{i}>0\) (\(i=1,2,3\)) such that, if \(0<\lambda\leq\lambda_{0}\), \(h\geq h_{0}\), and \(y\in\mathcal{D}_{h}\), then \(\mathcal{K}_{y}\) satisfies
$$\begin{aligned}& \mathcal{K}_{y}\geq C\sum_{i< j}\int _{\mathbb{R}^{N}} z^{p-1}(x-y_{i})z(x-y_{j})-C_{1} \|v_{h,\lambda,y}\|^{2}-\lambda C_{2}\|v_{h,\lambda,y} \|^{2}-C_{3}\lambda^{2}, \\& \mathcal{K}_{y}\leq C \biggl(\sum_{i<j} \int_{\mathbb{R}^{N}} z^{p-1}(x-y_{i})z(x-y_{j})+ \|v_{h,\lambda,y}\|^{2}+\lambda \|v_{h,\lambda,y}\|^{2}+ \lambda^{2} \biggr). \end{aligned}$$
From Lemmas 2.4 and 2.6, one sees that
$$\begin{aligned}& \Biggl\vert \int_{\mathbb{R}^{N}}(u_{y})^{p-1}v_{h,\lambda,y}- \sum_{i=1}^{n}\int_{\mathbb{R}^{N}} z^{p-1}(x-y_{i})v_{h,\lambda,y}\Biggr\vert \\& \quad \leq \Biggl(\int_{\mathbb{R}^{N}} \Biggl((u_{y})^{p-1}- \sum_{i=1}^{n}z^{p-1}(x-y_{i}) \Biggr)^{\frac{p}{p-1}} \Biggr)^{\frac{p-1}{p}} \biggl(\int_{\mathbb{R}^{N}}|v_{h,\lambda,y}|^{p} \biggr)^{\frac {1}{p}} \\& \quad \leq C \biggl(\int_{\mathbb{R}^{N}}\sum _{i\neq j}z^{p-1}(x-y_{i})z(x-y_{j}) \biggr)^{\frac{p-1}{p}}\|v_{h,\lambda ,y}\| \\& \quad \leq C \biggl(\int_{\mathbb{R}^{N}}\sum _{i\neq j}z^{p-1}(x-y_{i})z(x-y_{j}) \biggr)^{\frac{2(p-1)}{p}}+C\| v_{h,\lambda,y}\|^{2} \\& \quad \leq C \biggl(\int_{\mathbb{R}^{N}}\sum _{i\neq j}z^{p-1}(x-y_{i})z(x-y_{j}) \biggr)o(1)+C\|v_{h,\lambda,y}\|^{2}. \end{aligned}$$
Moreover, by Lemma 2.3, we have
$$ \int_{\mathbb{R}^{N}}u_{y}^{p}\geq \sum_{i=1}^{n}\int_{\mathbb {R}^{N}}z^{p}(x-y_{i}) +2(p-1)\sum_{1\leq i< j\leq n}\int_{\mathbb{R}^{N}}z^{p-1}(x-y_{i})z(x-y_{j}) $$
and by Lemma 2.1, one has
$$ \int_{\mathbb{R}^{N}}u_{y}^{p}\leq \sum_{i=1}^{n}\int_{\mathbb {R}^{N}}z^{p}(x-y_{i}) +C\sum_{1\leq i< j\leq n}\int_{\mathbb{R}^{N}}z^{p-1}(x-y_{i})z(x-y_{j}). $$
Here the fact
$$ \int_{\mathbb{R}^{N}}z^{p-1}(x-y_{i})z(x-y_{j})= \int_{\mathbb {R}^{N}}z(x-y_{i})z^{p-1}(x-y_{j}) $$
has been used. Substituting (2.13)-(2.15) into (2.12), one can easily get the desired conclusion. □
Next, we are in a position to estimate \(\|v_{h,\lambda,y}\|\).
\(\|v_{h,\lambda,y}\|\) satisfies
$$\begin{aligned} \|v_{h,\lambda,y}\| \leq& C\lambda \biggl(\int_{\mathbb{R}^{N}}a(x)u_{y}^{2} \biggr)^{\frac{1}{2}} +C\lambda \biggl(\int_{\mathbb{R}^{N}}b(x)u_{y}^{p} \biggr)^{\frac{p-1}{p}} \\ &{}+C \biggl(\sum_{i< j}\int _{\mathbb {R}^{N}}z^{p-1}(x-y_{i})z(x-y_{j}) \biggr)^{\frac{p-1}{p}}. \end{aligned}$$
By Lemma 2.11, for \(v\in\mathcal{W}_{y}\), one has
$$\begin{aligned} 0 =& \bigl\langle \nabla\Phi_{\lambda}(u_{y}+v_{h,\lambda ,y}),v \bigr\rangle \\ =&\sum_{i=1}^{n}\int_{\mathbb{R}^{N}} \nabla z(x-y_{i})\cdot\nabla v+\int_{\mathbb{R}^{N}} \nabla(v_{h,\lambda,y})\cdot\nabla v \\ &{} +\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}z(x-y_{i})v+\int_{\mathbb{R}^{N}}v_{h,\lambda,y}v +\lambda\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}a(x)z(x-y_{i})v \\ &{} +\lambda\int_{\mathbb{R}^{N}}a(x)v_{h,\lambda,y}v -\int _{\mathbb{R}^{N}}P_{\lambda}|u_{y}+v_{h,\lambda ,y}|^{p-2}(u_{y}+v_{h,\lambda,y})v. \end{aligned}$$
There exists \(\theta\in(0,1)\) such that
$$\begin{aligned}& \int_{\mathbb{R}^{N}}P_{\lambda}|u_{y}+v_{h,\lambda ,y}|^{p-2}(u_{y}+v_{h,\lambda,y})v \\& \quad =(p-1)\int_{\mathbb{R}^{N}}P_{\lambda}|u_{y}+ \theta v_{h,\lambda,y}|^{p-2}v_{h,\lambda,y}v +\int_{\mathbb{R}^{N}}P_{\lambda}u_{y}^{p-1}v. \end{aligned}$$
Substituting (2.17) into (2.16) yields
$$\begin{aligned}& \int_{\mathbb{R}^{N}} \nabla(v_{h,\lambda,y})\cdot\nabla v+\int _{\mathbb{R}^{N}}v_{h,\lambda,y}v-(p-1)\int_{\mathbb {R}^{N}}P_{\lambda}|u_{y}+ \theta v_{h,\lambda,y}|^{p-2}v_{h,\lambda,y}v \\& \quad =-\lambda\int_{\mathbb{R}^{N}}a(x)v_{h,\lambda,y}v-\lambda\sum _{i=1}^{n}\int_{\mathbb{R}^{N}}a(x)z(x-y_{i})v \\& \qquad {}+\int_{\mathbb{R}^{N}}P_{\lambda}u_{y}^{p-1}v- \sum_{i=1}^{n}\int_{\mathbb{R}^{N}}z^{p-1}(x-y_{i})v. \end{aligned}$$
Using the operator \(\mathcal{N}\) and \(\mathcal{P}_{y}\) defined in Section 2.1, we have
$$\begin{aligned}& \bigl\langle v_{h,\lambda,y}-\mathcal{P}_{y}\mathcal {N}\bigl(P_{\lambda}|u_{y}+\theta v_{h,\lambda,y}|^{p-2}v_{h,\lambda,y} \bigr),v\bigr\rangle \\& \quad = -\lambda\int_{\mathbb{R}^{N}}a(x)v_{h,\lambda,y}v- \lambda\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}a(x)z(x-y_{i})v \\& \qquad {} +\int_{\mathbb{R}^{N}}P_{\lambda}u_{y}^{p-1}v- \sum_{i=1}^{n}\int_{\mathbb{R}^{N}}z^{p-1}(x-y_{i})v. \end{aligned}$$
By Lemma 2.4, one has
$$\begin{aligned}& \Biggl\vert \int_{\mathbb{R}^{N}}P_{\lambda}u_{y}^{p-1}v- \sum_{i=1}^{n}\int_{\mathbb{R}^{N}}z^{p-1}(x-y_{i})v \Biggr\vert \\& \quad \leq \Biggl(\int_{\mathbb{R}^{N}}\Biggl\vert u_{y}^{p-1}-\sum_{i=1}^{n}z^{p-1}(x-y_{i}) \Biggr\vert |v| \Biggr)+\lambda\int_{\mathbb {R}^{N}}bu_{y}^{p-1}|v| \\& \quad \leq C \biggl(\int_{\mathbb{R}^{N}}\sum_{i\neq j}z^{p-1}(x-y_{i})z(x-y_{j}) \biggr)^{\frac{p-1}{p}}\|v\| +\lambda C \biggl(\int_{\mathbb{R}^{N}}bu_{y}^{p} \biggr)^{\frac{p-1}{p}}\|v\|. \end{aligned}$$
Therefore, choosing \(v=v_{h,\lambda,y}-\mathcal{P}_{y}\mathcal {N}(P_{\lambda}|u_{y}+\theta v_{h,\lambda,y}|^{p-2}v_{h,\lambda,y})\in\mathcal{W}_{y}\) in (2.18) and using Lemmas 2.9 and 2.10, we obtain, for some \(\eta>0\),
$$\begin{aligned} \begin{aligned} \eta\|v_{h,\lambda,y}\|\|v\|\leq{}&\lambda\int_{\mathbb {R}^{N}}a(x)|v_{h,\lambda,y}v| +\lambda\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}a(x)z(x-y_{i})|v| \\ &{}+C \biggl(\int_{\mathbb{R}^{N}}\sum_{i\neq j}z^{p-1}(x-y_{i})z(x-y_{j}) \biggr)^{\frac{p-1}{p}}\|v\|+\lambda C \biggl(\int_{\mathbb{R}^{N}}bu_{y}^{p} \biggr)^{\frac{p-1}{p}}\|v\|, \end{aligned} \end{aligned}$$
which implies, for \(\lambda>0\) sufficiently small,
$$\begin{aligned} \|v_{h,\lambda,y}\|\|v\| \leq& C\lambda\biggl(\int_{\mathbb{R}^{N}}a(x)u_{y}^{2} \biggr)^{\frac{1}{2}}\|v\| +\lambda C\biggl(\int_{\mathbb{R}^{N}}bu_{y}^{p} \biggr)^{\frac{p-1}{p}}\|v\| \\ &{}+C\biggl(\int_{\mathbb{R}^{N}}\sum_{i\neq j}z^{p-1}(x-y_{i})z(x-y_{j}) \biggr)^{\frac{p-1}{p}}\|v\|. \end{aligned}$$
Thus, we obtain the result. □
Proof of Theorem 1.1
The main purpose of this section is to prove Theorem 1.1. For this, we shall prove that, for \(\lambda>0\) small enough, we can choose \(\mu(\lambda)\) large enough such that the function \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\) defined in Section 2.1 reaches its maximum in \(\mathcal{D}_{\mu}\) at some point \(y^{0}=(y_{1}^{0},\ldots,y_{n}^{0})\). Then \(u_{y^{0}}+v_{h,\lambda,y^{0}}\) is a solution of (\(\mathcal {S}_{\lambda}\)) by Lemma 2.12.
We shall mainly consider the case \(n\geq2\) since the case \(n=1\) is much easier. Define
$$ \gamma= \sup_{y\in(\mathbb{R}^{N})^{n}} \biggl(\int_{\mathbb{R}^{N}}b(x)u_{y}^{p}(x)+ \int_{\mathbb {R}^{N}}a(x)u_{y}^{2}(x) \biggr). $$
By Lemmas 2.1 and 2.2, there exist \(\lambda'_{0}>0\), \(h'_{0}>0\), and \(C'_{i}>0\) (\(i=1,2,3\)) such that, if \(0<\lambda\leq\lambda'_{0}\), \(h\geq h'_{0}\), and \(y\in\mathcal {D}_{h}\), then \(\mathcal{K}_{y}\) satisfies
$$ \mathcal{K}_{y}\geq C'_{1}\sum _{i< j}\int_{\mathbb{R}^{N}} z^{p-1}(x-y_{i})z(x-y_{j})-C'_{2} \lambda^{2}-C'_{3}\lambda^{3}. $$
Here and in the sequel, \(C_{i}\), \(C'_{i}\), and C are various positive constants independent of λ. We choose a number k such that \(k>\max\{1,12\gamma/C'_{1}\}\). Then, for any λ satisfying
$$ 0<\lambda<\lambda'=\min \biggl\{ \frac{\|z\|_{L^{p}}}{k}, \frac {kC'_{1}}{2C'_{2}},\sqrt{\frac{kC'_{1}}{4C'_{3}}},\lambda_{0} \biggr\} , $$
there exists \(\mu^{*}=\mu^{*}(\lambda)>\mu=\mu(\lambda)>0\) such that, for \(w\in\mathbb{R}^{N}\) with \(|w|\in[\mu,\mu^{*}]\),
$$ k\lambda\leq\int_{\mathbb{R}^{N}}z^{p-1}(x)z(x-w) \leq2k\lambda. $$
Define
$$ \Gamma_{\lambda}=\sup\bigl\{ \Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\mid y \in \mathcal {D}_{\mu}\bigr\} . $$
To obtain an n-bump solution of (\(\mathcal{S}_{\lambda}\)), it suffices to prove that \(\Gamma_{\lambda}\) is achieved in the interior of \(\mathcal{D}_{\mu}\).
Assume \(n\geq2\). Then there exists \(\lambda_{1}\in(0,\lambda')\) such that, for \(0<\lambda<\lambda_{1}\),
$$ \Gamma_{\lambda}>\sup \bigl\{ \Phi_{\lambda}(u_{y}+v_{h,\lambda ,y})\mid y \in\mathcal {D}_{\mu} \textit{ and } |y_{i}-y_{j}| \in\bigl[\mu,\mu^{*}\bigr] \textit{ for some } i\neq j \bigr\} . $$
Note that \(\mu(\lambda)\rightarrow\infty\) as \(\lambda\rightarrow0\). By Lemma 2.2 and (3.3) we see that, if \(y\in\mathcal{D}_{\mu(\lambda)}\), then
$$ \|v_{\mu,\lambda,y}\|\leq C\lambda^{\frac{p-1}{p}}. $$
Suppose that \(y=(y_{1},\ldots,y_{n})\in\mathcal {D}_{\mu(\lambda)}\) and \(|y_{i}-y_{j}|\in[\mu(\lambda),\mu^{*}(\lambda)]\) for some \(i\neq j\). By (3.1)-(3.3), one has
$$ \mathcal{K}_{y}\geq C'_{1}k \lambda-C'_{2}\lambda^{2}-C'_{3} \lambda^{3}\geq\frac {1}{2}C'_{1}k \lambda-C'_{3}\lambda^{3} \geq \frac{1}{4}C'_{1}k\lambda\geq3\gamma\lambda. $$
By (2.11) and (3.5), for \(\lambda>0\) small enough, we obtain
$$\begin{aligned} \Phi_{\lambda}(u_{y}+v_{\mu,\lambda,y}) =&nc_{0}+ \frac{\lambda }{2}\int_{\mathbb{R}^{N}}a(x)u_{y}^{2} +\frac{\lambda}{p}\int_{\mathbb{R}^{N}}b(x)u_{y}^{p}- \mathcal {K}_{y} \\ \leq& nc_{0}+\gamma\lambda-3\gamma\lambda=nc_{0}-2\gamma \lambda \end{aligned}$$
for \(y=(y_{1},\ldots,y_{n})\in\mathcal{D}_{\mu(\lambda)}\) with \(|y_{i}-y_{j}|\in[\mu(\lambda),\mu^{*}(\lambda)]\) for some \(i\neq j\). On the other hand, if \(y=(y_{1},\ldots,y_{n})\in\mathcal{D}_{\mu(\lambda)}\) and \(|y_{i}-y_{j}|\rightarrow\infty\) for all \(i\neq j\), then by (2.11) and Lemmas 2.1 and 2.2, we have
$$\begin{aligned} \Phi_{\lambda}(u_{y}+v_{\mu,\lambda,y}) =&nc_{0}+ \frac{\lambda }{2}\int_{\mathbb{R}^{N}}a(x)u_{y}^{2} +\frac{\lambda}{p}\int_{\mathbb{R}^{N}}b(x)u_{y}^{p}- \mathcal {K}_{y} \\ \geq& nc_{0}+\frac{\lambda}{p} \biggl(\int_{\mathbb {R}^{N}}a(x)u_{y}^{2}+ \int_{\mathbb{R}^{N}}b(x)u_{y}^{p} \biggr) -C \lambda^{2} \\ &{} -C_{4}\lambda\|v_{\mu,\lambda,y}\|^{2}-C_{5} \|v_{\mu,\lambda ,y}\|^{2}+o(1) \\ \geq& nc_{0}+\frac{\lambda}{p} \biggl(\int_{\mathbb {R}^{N}}a(x)u_{y}^{2}+ \int_{\mathbb{R}^{N}}b(x)u_{y}^{p} \biggr) -C \lambda^{2} \\ &{} -C'_{4}\|v_{\mu,\lambda,y}\|^{2}+o(1) \\ \geq& nc_{0}+\frac{\lambda}{p} \biggl(\int_{\mathbb {R}^{N}}a(x)u_{y}^{2}+ \int_{\mathbb{R}^{N}}b(x)u_{y}^{p} \biggr) -C \lambda^{2} \\ &{} -\lambda^{2}C'_{5} \biggl(\int _{\mathbb {R}^{N}}a(x)u_{y}^{2}+\int _{\mathbb{R}^{N}}b(x)u_{y}^{p} \biggr)+o(1), \end{aligned}$$
where \(o(1)\) means some quantities which depend only on y and converge to 0 as \(|y_{i}-y_{j}|\rightarrow\infty\) for all \(i\neq j\). Therefore, for λ small enough,
$$ \liminf_{|y_{i}-y_{j}|\rightarrow\infty,\forall i\neq j}\Phi_{\lambda}(u_{y}+v_{h,\lambda,y}) \geq nc_{0}. $$
This inequality contradicts (3.6). Thus, we obtain the result. □
We choose \(y^{k}(\lambda)=(y_{1}^{k}(\lambda),\ldots,y_{n}^{k}(\lambda))\in \mathcal {D}_{\mu(\lambda)}\) such that
$$ \lim_{k\rightarrow\infty}\Phi_{\lambda}(u_{y^{k}(\lambda)}+v_{\mu ,\lambda,y^{k}(\lambda)})= \Gamma_{\lambda}. $$
Then Lemma 3.1 implies that
$$ \inf_{k}\min_{i\neq j}\bigl\vert y_{i}^{k}(\lambda)-y_{j}^{k}(\lambda) \bigr\vert \geq\mu^{*}(\lambda). $$
Therefore, for any \(1\leq i\leq n\), passing to a subsequence if necessary, we may assume either \(\lim_{k\rightarrow\infty}y_{i}^{k}(\lambda)=y_{i}^{0}(\lambda)\) with \(|y_{i}^{0}(\lambda)-y_{j}^{0}(\lambda)|\geq\mu^{*}\) for \(i\neq j\) or \(\lim_{k\rightarrow\infty}y_{i}^{k}(\lambda)=\infty\). Define
$$ \mathcal{U}(\lambda)= \bigl\{ 1\leq i\leq n\mid \bigl\vert y_{i}^{k}( \lambda)\bigr\vert \rightarrow\infty, \text{as } k\rightarrow\infty \bigr\} . $$
In the following, we shall prove that \(\mathcal {U}(\lambda)=\emptyset\) for \(\lambda>0\) sufficiently small and thus \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\) attains its maximum at \((y_{1}^{0}(\lambda),\ldots,y_{n}^{0}(\lambda))\in\mathcal {D}_{\mu(\lambda)}\).
Lemma 3.2. Assume \(n\geq2\). Then there exists \(\lambda(n)>0\) such that for \(\lambda\in(0,\lambda(n))\), \(\mathcal{U}(\lambda)=\emptyset\).
Proof. We adopt an argument borrowed from Lin and Liu [39, 40]. We argue by contradiction and assume that \(\mathcal{U}(\lambda)\neq\emptyset\) along a sequence \(\lambda_{m}\rightarrow0\). Without loss of generality, we may assume \(\mathcal {U}(\lambda_{m})=\{1,\ldots,j_{n}\}\) for all \(m\in\mathbb{N}\) and for some \(1\leq j_{n}< n\). The case in which \(j_{n}=n\) can be handled similarly. For notational convenience, we shall write \(\lambda_{m}=\lambda\), \(y_{i}^{k}=y_{i}^{k}(\lambda_{m})\), \(y^{k}=(y_{1}^{k},\ldots,y_{n}^{k})\), \(y_{*}^{k}=(y_{j_{n}+1}^{k},\ldots,y_{n}^{k})\), and \(y_{*}^{0}=(y_{j_{n}+1}^{0},\ldots,y_{n}^{0})\) for \(k=1,2,\ldots\) . Then, as \(k\rightarrow\infty\),
$$ \bigl\vert y_{1}^{k}\bigr\vert \rightarrow\infty,\qquad \ldots,\qquad \bigl|y_{j_{n}}^{k}\bigr|\rightarrow\infty $$
$$ y_{j_{n}+1}^{k}\rightarrow y_{j_{n}+1}^{0}, \qquad \ldots, \qquad y_{n}^{k}\rightarrow y_{n}^{0}. $$
Set
$$ w_{k}=\sum_{i=1}^{n}z \bigl(x-y_{i}^{k}\bigr),\qquad w_{k,1}=\sum _{i=1}^{j_{n}}z\bigl(x-y_{i}^{k} \bigr) $$
$$ w_{k,2}=\sum_{i=j_{n}+1}^{n}z \bigl(x-y_{i}^{k}\bigr),\qquad w_{y_{*}^{0}}=\sum _{i=j_{n}+1}^{n}z\bigl(x-y_{i}^{0} \bigr). $$
Similar to (3.4), we have
$$ \Vert v_{\mu,\lambda,y^{k}}\Vert \leq C\lambda^{\frac{p-1}{p}}, \qquad \Vert v_{\mu,\lambda,y_{*}^{k}}\Vert \leq C\lambda^{\frac{p-1}{p}},\quad k=1,2, \ldots. $$
By (2.11), we obtain
$$\begin{aligned} \begin{aligned}[b] \Phi_{\lambda}(w_{k}+v_{\mu,\lambda,y^{k}})={}&nc_{0}+ \frac{\lambda }{2}\int_{\mathbb{R}^{N}}a(x)w_{k}^{2} +\frac{\lambda}{p}\int_{\mathbb{R}^{N}}b(x)w_{k}^{p}- \mathcal {K}_{y^{k}} \\ ={}&j_{n}c_{0}+\frac{\lambda}{2}\int_{\mathbb {R}^{N}}a(x)w_{k,1}^{2}+ \frac{\lambda}{2}\int_{\mathbb {R}^{N}}a(x)w_{k,2}^{2} +\lambda\int_{\mathbb{R}^{N}}a(x)w_{k,1}w_{k,2} \\ &{} +(n-j_{n})c_{0}+\frac{\lambda}{p}\int _{\mathbb {R}^{N}}b(x)w_{k}^{p}+\mathcal {K}_{y_{*}^{k}}-\mathcal{K}_{y^{k}}-\mathcal{K}_{y_{*}^{k}} \\ ={}&j_{n}c_{0}+\frac{\lambda}{2}\int_{\mathbb {R}^{N}}a(x)w_{k,1}^{2}+ \lambda\int_{\mathbb {R}^{N}}a(x)w_{k,1}w_{k,2}+ \mathcal {K}_{y_{*}^{k}}-\mathcal{K}_{y^{k}} \\ &{} +\frac{\lambda}{p}\int_{\mathbb {R}^{N}}b(x) \bigl(w_{k}^{p}-w_{k,2}^{p} \bigr)+\Phi_{\lambda}(w_{k,2}+v_{\mu ,\lambda,y_{*}^{k}}). \end{aligned} \end{aligned}$$
By Lemma 2.1, one sees
$$\begin{aligned} \int_{\mathbb{R}^{N}}b(x) \bigl(w_{k}^{p}-w_{k,2}^{p} \bigr) \leq& C_{6}\sum_{i=1}^{j_{n}} \int_{\mathbb{R}^{N}}b(x)z^{p-1}\bigl(x-y_{i}^{k} \bigr)w_{k,2} \\ &{}+C_{8}\sum_{i=1}^{j_{n}} \int_{\mathbb{R}^{N}}b(x)z^{p}\bigl(x-y_{i}^{k} \bigr) \\ &{} +C_{7}\sum_{j=j_{n}+1}^{n}\int_{\mathbb{R}^{N}}b(x)z^{p-1}\bigl(x-y_{j}^{k} \bigr)w_{k,1}. \end{aligned}$$
Therefore, since \(|y_{i}^{k}|\rightarrow\infty\), \(i=1,\ldots,j_{n}\), as \(k\rightarrow\infty\), we obtain
$$ \frac{\lambda}{2}\int_{\mathbb{R}^{N}}a(x)w_{k,1}^{2}+ \lambda\int_{\mathbb{R}^{N}}a(x)w_{k,1}w_{k,2} \rightarrow0. $$
Furthermore, by (3.9) and the condition (\(\mathcal{R}_{1}\)), we have
$$ \frac{\lambda}{p}\int_{\mathbb {R}^{N}}b(x) \bigl(w_{k}^{p}-w_{k,2}^{p}\bigr) \rightarrow0,\quad \text{as } k\rightarrow\infty. $$
From (3.8), (3.10) and (3.11), we arrive at
$$ \Phi_{\lambda}(w_{k}+v_{\mu,\lambda,y^{k}})\leq \Phi_{\lambda }(w_{k,2}+v_{\mu,\lambda,y_{*}^{k}})+j_{n}c_{0} +\mathcal{K}_{y_{*}^{k}}-\mathcal{K}_{y^{k}}+o(1). $$
Using Lemma 2.4, (3.3), and (3.7), we obtain
$$\begin{aligned}& \int_{\mathbb{R}^{N}} \Biggl\vert \sum _{i=1}^{n}z^{p-1}\bigl(x-y_{i}^{k} \bigr)- \Biggl(\sum_{i=1}^{n}z \bigl(x-y_{i}^{k}\bigr) \Biggr)^{p-1}\Biggr\vert |v_{\mu,\lambda ,y^{k}}| \\& \quad \leq C \biggl(\sum_{i< j}\int _{\mathbb {R}^{N}}z^{p-1}\bigl(x-y_{i}^{k} \bigr)z\bigl(x-y_{j}^{k}\bigr) \biggr)^{\frac{p-1}{p}}\| v_{\mu,\lambda,y^{k}}\| \\& \quad \leq C'\lambda^{\frac{2(p-1)}{p}}. \end{aligned}$$
From Lemma 2.2, (2.12), (3.7), and (3.13), one gets
$$\begin{aligned} \mathcal {K}_{y^{k}} =&\frac{1}{p}\int _{\mathbb{R}^{N}} \Biggl(\sum_{i=1}^{n}z \bigl(x-y_{i}^{k}\bigr) \Biggr)^{p} - \frac{1}{p}\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}z^{p}\bigl(x-y_{i}^{k} \bigr) \\ &{} -\sum_{i< j}\int_{\mathbb {R}^{N}}z^{p-1} \bigl(x-y_{i}^{k}\bigr)z\bigl(x-y_{j}^{k} \bigr)+O\bigl(\lambda^{\frac{2(p-1)}{p}}\bigr). \end{aligned}$$
In the same way, we have
$$\begin{aligned} \mathcal {K}_{y_{*}^{k}} =&\frac{1}{p}\int _{\mathbb{R}^{N}} \Biggl(\sum_{i=j_{n}+1}^{n}z \bigl(x-y_{i}^{k}\bigr) \Biggr)^{p} - \frac{1}{p}\sum_{i=j_{n}+1}^{n}\int _{\mathbb {R}^{N}}z^{p}\bigl(x-y_{i}^{k} \bigr) \\ &{} -\sum_{j_{n}< i<j}\int_{\mathbb {R}^{N}}z^{p-1} \bigl(x-y_{i}^{k}\bigr)z\bigl(x-y_{j}^{k} \bigr)+O\bigl(\lambda^{\frac{2(p-1)}{p}}\bigr). \end{aligned}$$
We infer from (3.14) and (3.15) that
$$\begin{aligned} \mathcal{K}_{y_{*}^{k}}-\mathcal {K}_{y^{k}} =& \frac{1}{p}\int_{\mathbb{R}^{N}}(w_{k,2})^{p}- \frac {1}{p}\int_{\mathbb{R}^{N}}(w_{k})^{p} + \frac{1}{p}\sum_{i=1}^{j_{n}}\int _{\mathbb {R}^{N}}z^{p}\bigl(x-y_{i}^{k} \bigr) \\ &{} +\sum_{i< j\leq j_{n}}\int_{\mathbb{R}^{N}}z^{p-1} \bigl(x-y_{i}^{k}\bigr)z\bigl(x-y_{j}^{k} \bigr) +O\bigl(\lambda^{\frac{2(p-1)}{p}}\bigr) \\ &{} +\sum_{i=1}^{j_{n}}\int _{\mathbb{R}^{N}}z^{p-1}\bigl(x-y_{i}^{k} \bigr)w_{k,2}. \end{aligned}$$
By Lemma 2.3, the sum of the terms except \(O(\lambda^{\frac{2(p-1)}{p}})\) on the right side of (3.16) is negative. Thus, one has
$$ \mathcal{K}_{y_{*}^{k}}-\mathcal{K}_{y^{k}}\leq O \bigl(\lambda^{\frac{2(p-1)}{p}}\bigr). $$
Letting \(k\rightarrow\infty\), by (3.12), and using (3.17), we obtain
$$ \Gamma_{\lambda}\leq j_{n}c_{0}+ \Phi_{\lambda}(w_{y_{*}^{0}}+v_{\mu,\lambda ,y_{*}^{0}})+C'_{8} \lambda^{\frac{2(p-1)}{p}}. $$
On the other hand, by Lemma 2.6 and (3.3), there exist \(C_{9}, C_{10}> 0\) such that
$$ C_{9}\lambda\leq\mu^{-\frac{N-2}{2}}e^{-\mu}\leq C_{10}\lambda, $$
which implies for λ small enough
$$ (1-\delta)\ln\frac{1}{\lambda}\leq\mu=\mu(\lambda)\leq (1+\delta) \ln\frac{1}{\lambda}, $$
where \(0<\delta<\frac{1}{p}\). We choose τ such that \(0<\tau\leq\frac{p-2}{10np}\). By (\(\mathcal{R}_{2}\)), there exists \(R>0\) such that
$$ a(x)\geq e^{-\tau|x|}, \quad |x|\geq R $$
$$ b(x)\geq e^{-\tau|x|},\quad |x|\geq R. $$
For \(\lambda>0\) small enough, define
$$ \hat{y}_{s}^{\lambda}=\biggl(10n\ln \frac{1}{\lambda}-4s\mu(\lambda ),0,\ldots,0\biggr)\in\mathbb{R}^{N}, \quad s=1,2,\ldots,n. $$
The open balls \(B(\hat{y}_{s}^{\lambda},2\mu(\lambda))\) are mutually disjoint. Thus there are \(j_{n}\) integers from \(\{1,\ldots,n\}\), denoted by \(s_{1}< s_{2}<\cdots< s_{j_{n}}\), such that
$$ \bigl\vert \hat{y}_{s_{i}}^{\lambda}-y_{j}^{0} \bigr\vert \geq2\mu(\lambda),\quad i=1,\ldots,j_{n}, j=j_{n}+1,\ldots,n. $$
Denote \(\hat{y}_{s_{i}}^{\lambda}\) by \(y_{i}^{\lambda}\) for simplicity, \(i=1,\ldots,j_{n}\). By (3.20), (3.23), and (3.24), one has
$$\begin{aligned}& R+1\leq\bigl\vert y_{i}^{\lambda}\bigr\vert \leq10n\ln\frac{1}{\lambda},\quad i=1,\ldots,j_{n}, \end{aligned}$$
$$\begin{aligned}& \bigl\vert y_{i}^{\lambda}-y_{j}^{\lambda} \bigr\vert \geq2\mu(\lambda),\quad 1\leq i< j\leq j_{n}, \end{aligned}$$
$$\begin{aligned}& \bigl\vert y_{i}^{\lambda}-y_{j}^{0} \bigr\vert \geq2\mu(\lambda),\quad i=1,\ldots, j_{n}, j=j_{n}+1,\ldots,n. \end{aligned}$$
$$ \bigl(y_{1}^{\lambda},\ldots,y_{j_{n}}^{\lambda},y_{j_{n}+1}^{0}, \ldots ,y_{n}^{0}\bigr)\in\mathcal {D}_{\mu(\lambda)}. $$
Denote \(y^{\lambda}=(y_{1}^{\lambda},\ldots,y_{j_{n}}^{\lambda },y_{j_{n}+1}^{0},\ldots,y_{n}^{0})\). Set \(w_{\lambda,1}=\sum_{i=1}^{j_{n}}z(x-y_{i}^{\lambda})\), \(w_{y_{*}^{0}}=\sum_{i=j_{n}+1}^{n}z(x-y_{i}^{0})\), and \(w_{\lambda}=w_{\lambda,1}+w_{y_{*}^{0}}\). Similar to (3.8), one has
$$\begin{aligned} \Phi_{\lambda}(w_{\lambda}+v_{\mu,\lambda,y^{\lambda}}) =&j_{n}c_{0}+\frac{\lambda}{2}\int_{\mathbb{R}^{N}}a(x)w_{\lambda ,1}^{2}+ \lambda\int_{\mathbb{R}^{N}}a(x)w_{\lambda ,1}w_{y_{*}^{0}}+ \mathcal {K}_{y_{*}^{\lambda}}-\mathcal{K}_{y^{\lambda}} \\ &{} +\frac{\lambda}{p}\int_{\mathbb{R}^{N}}b(x) \bigl(w_{\lambda }^{p}-w_{y_{*}^{0}}^{p} \bigr)+\Phi_{\lambda}(w_{y_{*}^{0}}+v_{\mu,\lambda ,y_{*}^{0}}). \end{aligned}$$
As in (3.16), we have
$$\begin{aligned} \mathcal{K}_{y_{*}^{\lambda}}-\mathcal{K}_{y^{\lambda}} =&\frac{1}{p}\int_{\mathbb{R}^{N}}(w_{y_{*}^{0}})^{p}-\frac{1}{p}\int_{\mathbb{R}^{N}}(w_{\lambda})^{p} +\frac{1}{p}\sum_{i=1}^{j_{n}}\int_{\mathbb{R}^{N}}z^{p}\bigl(x-y_{i}^{\lambda}\bigr) \\ &{} +\sum_{i< j\leq j_{n}}\int_{\mathbb{R}^{N}}z^{p-1}\bigl(x-y_{i}^{\lambda}\bigr)z\bigl(x-y_{j}^{\lambda}\bigr) +O\bigl(\lambda^{\frac{2(p-1)}{p}}\bigr) \\ &{} +\sum_{i=1}^{j_{n}}\int_{\mathbb{R}^{N}}z^{p-1}\bigl(x-y_{i}^{\lambda}\bigr)w_{y_{*}^{0}} \\ \geq&\frac{1}{p}\int_{\mathbb{R}^{N}}(w_{y_{*}^{0}})^{p} +\frac{1}{p}\sum_{i=1}^{j_{n}}\int_{\mathbb{R}^{N}}z^{p}\bigl(x-y_{i}^{\lambda}\bigr) \\ &{} -\frac{1}{p}\int_{\mathbb{R}^{N}}(w_{\lambda})^{p}+O \bigl(\lambda^{\frac{2(p-1)}{p}} \bigr). \end{aligned}$$
Together with Lemma 2.1 this implies that
$$\begin{aligned} \mathcal{K}_{y_{*}^{\lambda}}-\mathcal{K}_{y^{\lambda}} \geq& -C \sum_{i=1}^{j_{n}}\int_{\mathbb{R}^{N}}z^{p-1} \bigl(x-y_{i}^{\lambda }\bigr)w_{y_{*}^{0}} -C\sum _{i=1}^{j_{n}}\int_{\mathbb {R}^{N}}(w_{y_{*}^{0}})^{p-1}z \bigl(x-y_{i}^{\lambda}\bigr) \\ &{}-C\sum_{1\leq i< j\leq j_{n}}\int_{\mathbb{R}^{N}}z^{p-1} \bigl(x-y_{i}^{\lambda }\bigr)z\bigl(x-y_{j}^{\lambda} \bigr) +O \bigl(\lambda^{\frac{2(p-1)}{p}} \bigr). \end{aligned}$$
By Lemma 2.6, (3.20), and (3.26), one sees that
$$\begin{aligned} \int_{\mathbb{R}^{N}}z^{p-1}\bigl(x-y_{i}^{\lambda} \bigr)z\bigl(x-y_{j}^{\lambda}\bigr) \leq& C_{11}e^{-2\mu(\lambda)} \leq C_{12}e^{-2(1-\delta)\ln\frac{1}{\lambda}} \\ =&C_{12}\lambda^{2(1-\delta)} \leq C_{13} \lambda^{\frac{2(p-1)}{p}}. \end{aligned}$$
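Since \(0<\delta<\frac{1}{p}\), the exponents compare as follows; this short verification is the only fact used in passing from \(\lambda^{2(1-\delta)}\) to \(\lambda^{\frac{2(p-1)}{p}}\) in the preceding display, not an extra hypothesis:
$$ 2(1-\delta)>2\Bigl(1-\frac{1}{p}\Bigr)=\frac{2(p-1)}{p}, \qquad\text{hence}\qquad \lambda^{2(1-\delta)}\leq\lambda^{\frac{2(p-1)}{p}}\quad\text{for } 0<\lambda<1. $$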
In view of (3.27), a similar argument shows that
$$ \sum_{i=1}^{j_{n}}\int_{\mathbb{R}^{N}}z^{p-1}\bigl(x-y_{i}^{\lambda}\bigr)w_{y_{*}^{0}} +\sum_{i=1}^{j_{n}}\int_{\mathbb{R}^{N}}(w_{y_{*}^{0}})^{p-1}z\bigl(x-y_{i}^{\lambda}\bigr)\leq C_{14}\lambda^{\frac{2(p-1)}{p}}. $$
Combining (3.29)-(3.31), we have
$$ \mathcal{K}_{y_{*}^{\lambda}}-\mathcal {K}_{y^{\lambda}}\geq-C_{15} \lambda^{\frac{2(p-1)}{p}}. $$
Together with (3.28), it follows that
$$\begin{aligned} \Phi_{\lambda}(u_{y^{\lambda}}+v_{\mu,\lambda,y^{\lambda}}) \geq& j_{n}c_{0}+\frac{\lambda}{2}\int_{\mathbb{R}^{N}}a(x)w_{\lambda ,1}^{2}+ \lambda\int_{\mathbb{R}^{N}}a(x)w_{\lambda,1}w_{y_{*}^{0}} -C_{15}\lambda^{\frac{2(p-1)}{p}} \\ &{} +\frac{\lambda}{p}\int_{\mathbb{R}^{N}}b(x) \bigl(w_{y^{\lambda }}^{p}-w_{y_{*}^{0}}^{p} \bigr) +\Phi_{\lambda}(w_{y_{*}^{0}}+v_{\mu,\lambda,y_{*}^{0}}). \end{aligned}$$
We distinguish the following two cases to finish the proof of this lemma.
(i) If (3.21) holds, then by (3.25), we have, for \(i=1,\ldots,j_{n}\),
$$\begin{aligned} \int_{\mathbb{R}^{N}}a(x)w_{\lambda,1}^{2} \geq&\int_{|x-y_{i}^{\lambda}|\leq1}a(x)z^{2}\bigl(x-y_{i}^{\lambda} \bigr) \geq\int_{|x-y_{i}^{\lambda}|\leq1}e^{-\tau |x|}z^{2} \bigl(x-y_{i}^{\lambda}\bigr) \\ \geq& C_{16}e^{-\tau|y_{i}^{\lambda}|}\geq C_{16}e^{-10n\tau\ln\frac{1}{\lambda}}=C_{16} \lambda^{10n\tau}. \end{aligned}$$
$$ \Phi_{\lambda}(u_{y^{\lambda}}+v_{\mu,\lambda,y^{\lambda}})\geq j_{n}c_{0}+\Phi_{\lambda}(w_{y_{*}^{0}}+v_{\mu,\lambda ,y_{*}^{0}})+C_{16} \lambda^{10n\tau+1} -C_{15}\lambda^{\frac{2(p-1)}{p}}. $$
Since \(10n\tau+1<\frac{2(p-1)}{p}\), we obtain, for λ small enough,
$$ \Phi_{\lambda}(u_{y^{\lambda}}+v_{\mu,\lambda,y^{\lambda}})\geq j_{n}c_{0}+\Phi_{\lambda}(w_{y_{*}^{0}}+v_{\mu,\lambda ,y_{*}^{0}})+C'_{16} \lambda^{10n\tau+1}, $$
which contradicts (3.18).
(ii) Suppose that (3.22) holds. Similar to (3.32), one has
$$\begin{aligned} \int_{\mathbb{R}^{N}}b(x) \bigl(u_{y^{\lambda }}^{p}-u_{y_{*}^{0}}^{p} \bigr) \geq&\int_{|x-y_{1}^{\lambda}|\leq 1}b(x)z^{p} \bigl(x-y_{1}^{\lambda}\bigr) \geq\int_{|x-y_{1}^{\lambda}|\leq1}e^{-\tau |x|}z^{p} \bigl(x-y_{1}^{\lambda}\bigr) \\ \geq& C_{17}e^{-\tau|y_{1}^{\lambda}|}\geq C_{17}e^{-10n\tau\ln\frac{1}{\lambda}}=C_{17} \lambda^{10n\tau}. \end{aligned}$$
Repeating the arguments of (i), we get, for λ small enough,
$$ \Phi_{\lambda}(u_{y^{\lambda}}+v_{\mu,\lambda,y^{\lambda}})\geq j_{n}c_{0}+\Phi_{\lambda}(w_{y_{*}^{0}}+v_{\mu,\lambda ,y_{*}^{0}})+C'_{17} \lambda^{10n\tau+1}. $$
This contradicts (3.18).
From (i) and (ii), we know that there exists \(\lambda(n)>0\) such that, if \(0 <\lambda<\lambda(n)\), then \(\mathcal {U}(\lambda)=\emptyset\) and \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\) reaches its maximum at some point \((y_{1}^{0},\ldots,y_{n}^{0})\in\mathcal {D}_{\mu(\lambda)}\). □
Next, we shall prove Theorem 1.1.
For \(n\geq2\), according to Lemma 3.2, if \(0<\lambda<\lambda(n)\), then \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\) reaches its maximum at some point \(y^{0}=(y_{1}^{0},\ldots,y_{n}^{0})\in\mathcal {D}_{\mu(\lambda)}\). Then \(u_{y^{0}}+v_{h,\lambda,y^{0}}\) is an n-bump solution of (\(\mathcal{S}_{\lambda}\)). For \(n=1\), as a consequence of Lemma 2.11(iii), if \(\lambda\in(0,\lambda_{0}]\), then
$$ \lim_{|y|\rightarrow\infty}\Phi_{\lambda}(u_{y}+v_{h,\lambda ,y})= \Phi_{0}(z)=c_{0}. $$
Since \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\) is defined on all \(\mathbb{R}^{N}\), \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\) has a critical point \(y^{0}\in\mathbb{R}^{N}\) and \(u_{y^{0}}+v_{h,\lambda,y^{0}}\) is a 1-bump solution of (\(\mathcal {S}_{\lambda}\)). By an argument similar to those in [34, 35], one sees that \(u_{y^{0}}+v_{h,\lambda,y^{0}}\) is a positive solution of (\(\mathcal{S}_{\lambda}\)). Set \(\lambda(1)=\lambda_{0}\) and \(\lambda_{1}(n)=\min\{\lambda(1),\ldots,\lambda(n)\}\). If \(0<\lambda<\lambda_{1}(n)\), then (\(\mathcal{S}_{\lambda}\)) has at least n nontrivial positive solutions. □
Ambrosetti, A, Malchiodi, A, Secchi, S: Multiplicity results for some nonlinear Schrödinger equations with potentials. Arch. Ration. Mech. Anal. 159(3), 253-271 (2001)
Besieris, I-M: Solitons in randomly inhomogeneous media. In: Nonlinear Electromagnetics, pp. 87-116. Academic Press, New York (1980)
Byeon, J, Oshita, Y: Existence of multi-bump standing waves with a critical frequency for nonlinear Schrödinger equations. Commun. Partial Differ. Equ. 29(11-12), 1877-1904 (2004)
Cingolani, S, Nolasco, M: Multi-peak periodic semiclassical states for a class of nonlinear Schrödinger equations. Proc. R. Soc. Edinb., Sect. A 128(6), 1249-1260 (1998)
Kang, X-S, Wei, J-C: On interacting bumps of semi-classical states of nonlinear Schrödinger equations. Adv. Differ. Equ. 5(7-9), 899-928 (2000)
Li, Y-Y: On a singularly perturbed elliptic equation. Adv. Differ. Equ. 2(6), 955-980 (1997)
Oh, Y-G: On positive multi-lump bound states of nonlinear Schrödinger equations under multiple well potential. Commun. Math. Phys. 131(2), 223-253 (1990)
Floer, A, Weinstein, A: Nonspreading wave packets for the cubic Schrödinger equation with a bounded potential. J. Funct. Anal. 69(3), 397-408 (1986)
Oh, Y-G: Existence of semiclassical bound states of nonlinear Schrödinger equations with potentials of the class \((V)_{a}\). Commun. Partial Differ. Equ. 13(12), 1499-1519 (1988)
Rabinowitz, P-H: On a class of nonlinear Schrödinger equations. Z. Angew. Math. Phys. 43(2), 270-291 (1992)
Del Pino, M, Felmer, P-L: Local mountain passes for semilinear elliptic problems in unbounded domains. Calc. Var. Partial Differ. Equ. 4(2), 121-137 (1996)
Del Pino, M, Felmer, P-L: Multi-peak bound states for nonlinear Schrödinger equations. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 15(2), 127-149 (1998)
Ambrosetti, A, Badiale, M, Cingolani, S: Semiclassical states of nonlinear Schrödinger equations. Arch. Ration. Mech. Anal. 140(3), 285-300 (1997)
Del Pino, M, Felmer, P-L: Semi-classical states of nonlinear Schrödinger equations: a variational reduction method. Math. Ann. 324(1), 1-32 (2002)
Del Pino, M, Felmer, P-L: Semi-classical states for nonlinear Schrödinger equations. J. Funct. Anal. 149(1), 245-265 (1997)
Ambrosetti, A, Malchiodi, A: Perturbation Methods and Semilinear Elliptic Problems on \(\mathbb{R}^{n}\). Progress in Mathematics, vol. 240. Birkhäuser, Basel (2006)
Ambrosetti, A, Malchiodi, A, Ni, W-M: Singularly perturbed elliptic equations with symmetry: existence of solutions concentrating on spheres. I. Commun. Math. Phys. 235(3), 427-466 (2003)
Del Pino, M, Kowalczyk, M, Wei, J-C: Concentration on curves for nonlinear Schrödinger equations. Commun. Pure Appl. Math. 60(1), 113-146 (2007)
Byeon, J, Wang, Z-Q: Standing waves with a critical frequency for nonlinear Schrödinger equations. Arch. Ration. Mech. Anal. 165(4), 295-316 (2002)
Byeon, J, Wang, Z-Q: Standing waves with a critical frequency for nonlinear Schrödinger equations. II. Calc. Var. Partial Differ. Equ. 18(2), 207-219 (2003)
Cao, D-M, Noussair, E-S: Multi-bump standing waves with a critical frequency for nonlinear Schrödinger equations. J. Differ. Equ. 203(2), 292-312 (2004)
Cao, D-M, Peng, S-J: Multi-bump bound states of Schrödinger equations with a critical frequency. Math. Ann. 336(4), 925-948 (2006)
Ding, Y-H, Lin, F-H: Solutions of perturbed Schrödinger equations with critical nonlinearity. Calc. Var. Partial Differ. Equ. 30(2), 231-249 (2007)
Ding, Y-H, Szulkin, A: Bound states for semilinear Schrödinger equations with sign-changing potential. Calc. Var. Partial Differ. Equ. 29(3), 397-419 (2007)
Ding, Y-H, Wei, J-C: Semiclassical states for nonlinear Schrödinger equations with sign-changing potentials. J. Funct. Anal. 251(2), 546-572 (2007)
Coti Zelati, V, Rabinowitz, P-H: Homoclinic type solutions for a semilinear elliptic PDE on \(\mathbf{R}^{n}\). Commun. Pure Appl. Math. 45(10), 1217-1269 (1992)
Coti Zelati, V, Rabinowitz, P-H: Homoclinic orbits for second order Hamiltonian systems possessing superquadratic potentials. J. Am. Math. Soc. 4(4), 693-727 (1991)
Alama, S, Li, Y-Y: On 'multibump' bound states for certain semilinear elliptic equations. Indiana Univ. Math. J. 41(4), 983-1026 (1992)
Cerami, G, Devillanova, G, Solimini, S: Infinitely many bound states for some nonlinear scalar field equations. Calc. Var. Partial Differ. Equ. 23(2), 139-168 (2005)
Cerami, G, Passaseo, D, Solimini, S: Infinitely many positive solutions to some scalar field equations with nonsymmetric coefficients. Commun. Pure Appl. Math. 66(3), 372-413 (2013)
Ackermann, N, Weth, T: Multibump solutions of nonlinear periodic Schrödinger equations in a degenerate setting. Commun. Contemp. Math. 7(3), 269-298 (2005)
Ackermann, N: A nonlinear superposition principle and multibump solutions of periodic Schrödinger equations. J. Funct. Anal. 234(2), 277-320 (2006)
Rabinowitz, P-H: A multibump construction in a degenerate setting. Calc. Var. Partial Differ. Equ. 5(2), 159-182 (1997)
Liu, Z-L, Wang, Z-Q: Multi-bump type nodal solutions having a prescribed number of nodal domains. I. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 22(5), 597-608 (2005)
Liu, Z-L, Wang, Z-Q: Multi-bump type nodal solutions having a prescribed number of nodal domains. II. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 22(5), 609-631 (2005)
Lin, L-S, Liu, Z-L: Multi-bubble solutions for equations of Caffarelli-Kohn-Nirenberg type. Commun. Contemp. Math. 13(6), 945-968 (2011)
Wang, J, Xu, J-X, Zhang, F-B, Chen, X-M: Existence of multi-bump solutions for a semilinear Schrödinger-Poisson system. Nonlinearity 26(5), 1377-1399 (2013)
Pi, H-R, Wang, C-H: Multi-bump solutions for nonlinear Schrödinger equations with electromagnetic fields. ESAIM Control Optim. Calc. Var. 19(1), 91-111 (2013)
Lin, L-S, Liu, Z-L: Multi-bump solutions and multi-tower solutions for equations on \(\mathbb{R}^{N}\). J. Funct. Anal. 257(2), 485-505 (2009)
Lin, L-S, Liu, Z-L, Chen, X-W: Multi-bump solutions for a semilinear Schrödinger equation. Indiana Univ. Math. J. 58(4), 1659-1689 (2009)
D'Aprile, T, Wei, J-C: Standing waves in the Maxwell-Schrödinger equation and an optimal configuration problem. Calc. Var. Partial Differ. Equ. 25(1), 105-137 (2006)
Ambrosetti, A, Badiale, M: Homoclinics: Poincaré-Melnikov type results via a variational approach. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 15(2), 233-252 (1998)
Berestycki, H, Lions, P-L: Nonlinear scalar field equations. II. Existence of infinitely many solutions. Arch. Ration. Mech. Anal. 82(4), 347-375 (1983)
Kwong, M-K: Uniqueness of positive solutions of \(\Delta u-u+u^{p}=0\) in \(\mathbf{R}^{n}\). Arch. Ration. Mech. Anal. 105(3), 243-266 (1989)
Ni, W-M, Takagi, I: Locating the peaks of least-energy solutions to a semilinear Neumann problem. Duke Math. J. 70(2), 247-281 (1993)
Cerami, G, Passaseo, D: Existence and multiplicity results for semilinear elliptic Dirichlet problems in exterior domains. Nonlinear Anal. 24(11), 1533-1547 (1995)
Bahri, A, Lions, P-L: On the existence of a positive solution of semilinear elliptic equations in unbounded domains. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 14(3), 365-413 (1997)
This work was supported by Natural Science Foundation of China (11201186, 11071038, 11171135), NSF of Jiangsu Province (BK2012282), Jiangsu University foundation grant (11JDG117), China Postdoctoral Science Foundation funded project (2012M511199, 2013T60499).
School of Economics and Management, Southeast University, Nanjing, 210096, China
Houqing Fang
Faculty of Science, Jiangsu University, Zhenjiang, Jiangsu, 212013, P.R. China
Houqing Fang & Jun Wang
Correspondence to Houqing Fang.
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
Fang, H., Wang, J. Existence of positive solutions for a semilinear Schrödinger equation in \(\mathbb{R}^{N}\) . Bound Value Probl 2015, 9 (2015). https://doi.org/10.1186/s13661-014-0270-8
Received: 08 September 2014
MSC: 35J61
Keywords: multi-bump solution; semilinear Schrödinger equation; variational methods
Longitudinal trajectory analysis of antipsychotic response in patients with schizophrenia: 6-week, randomised, open-label, multicentre clinical trial – CORRIGENDUM
Minhan Dai, Yulu Wu, Yiguo Tang, Weihua Yue, Hao Yan, Yamin Zhang, Liwen Tan, Wei Deng, Qi Chen, Guigang Yang, Tianlan Lu, Lifang Wang, Fude Yang, Fuquan Zhang, Jianli Yang, Keqing Li, Luxian Lv, Qingrong Tan, Hongyan Zhang, Xin Ma, Lingjiang Li, Chuanyue Wang, Xiaohong Ma, Dai Zhang, Hao Yu, Liansheng Zhao, Hongyan Ren, Yingcheng Wang, Xun Hu, Guangya Zhang, Xiaodong Du, Qiang Wang, Tao Li, for the Chinese Antipsychotics Pharmacogenomics Consortium
Journal: BJPsych Open / Volume 7 / Issue 5 / September 2021
Published online by Cambridge University Press: 20 August 2021, e153
Longitudinal trajectory analysis of antipsychotic response in patients with schizophrenia: 6-week, randomised, open-label, multicentre clinical trial
Journal: BJPsych Open / Volume 6 / Issue 6 / November 2020
Published online by Cambridge University Press: 22 October 2020, e126
Understanding the patterns of treatment response is critical for the treatment of patients with schizophrenia; one way to achieve this is through using a longitudinal dynamic process study design.
This study aims to explore the response trajectory of antipsychotics and compare the treatment responses of seven different antipsychotics over 6 weeks in patients with schizophrenia (trial registration: Chinese Clinical Trials Registry Identifier: ChiCTR-TRC-10000934).
Data were collected from a multicentre, randomised open-label clinical trial. Patients were evaluated with the Positive and Negative Syndrome Scale (PANSS) at baseline and follow-up at weeks 2, 4 and 6. Trajectory groups were classified by the method of k-means cluster modelling for longitudinal data. Trajectory analyses were also employed for the seven antipsychotic groups.
The early treatment response trajectories were classified into a high-trajectory group of better responders and a low-trajectory group of worse responders. The results of trajectory analysis showed differences compared with the classification method characterised by a 50% reduction in PANSS scores at week 6. A total of 349 patients were inconsistently grouped by the two methods, with a significant difference in the composition ratio of treatment response groups using these two methods (χ2 = 43.37, P < 0.001). There was no differential contribution of high- and low trajectories to different drugs (χ2 = 12.52, P = 0.051); olanzapine and risperidone, which had a larger proportion in the >50% reduction at week 6, performed better than aripiprazole, quetiapine, ziprasidone and perphenazine.
The trajectory analysis of treatment response to schizophrenia revealed two distinct trajectories. Comparing the treatment responses to different antipsychotics through longitudinal analysis may offer a new perspective for evaluating antipsychotics.
Anthropogenic hillslope terraces and swidden agriculture in Jiuzhaigou National Park, northern Sichuan, China
Amanda Henck, James Taylor, Hongliang Lu, Yongxian Li, Qingxia Yang, Barbara Grub, Sara Jo Breslow, Alicia Robbins, Andrea Elliott, Tom Hinckley, Julie Combs, Lauren Urgenson, Sarah Widder, Xinxin Hu, Ziyu Ma, Yaowu Yuan, Daijun Jian, Xun Liao, Ya Tang
Journal: Quaternary Research / Volume 73 / Issue 2 / March 2010
Published online by Cambridge University Press: 20 January 2017, pp. 201-207
Small, irregular terraces on hillslopes, or terracettes, are common landscape features throughout west central China. Despite their prevalence, there is limited understanding of the nature of these topographic features, the processes that form them, and the role humans played in their formation. We used an interdisciplinary approach to investigate the geology, ecology, and cultural history of terracette development within Jiuzhaigou National Park, Sichuan Province, China. Terracettes occur on south facing, 20° slopes at 2500 m elevation, which appears to coincide with places people historically preferred to build villages. Ethnographic interviews suggest that traditional swidden agricultural cycles removed tree roots, causing the loess sediments to lose cohesion, slump, and the terrace risers to retreat uphill over time. This evidence is supported by landslide debris at terracette faces. Archaeological analysis of terracette sites revealed remains of rammed spread soil structures, bones, stone tools, and ceramics dating from at least 2200 years before present within a distinct paleosol layer. Radiocarbon and optically stimulated luminescence dating of terracette sediments ranged in age from between 1500 and 2000 14C yr BP and between 16 and 0.30 ka, respectively. These multiple lines of evidence indicate a long history of human habitation within Jiuzhaigou National Park and taken together, suggest strong links between terracette formation and human-landuse interactions.
THE FINITE BASIS PROBLEM FOR THE MONOID OF TWO-BY-TWO UPPER TRIANGULAR TROPICAL MATRICES
Semigroups
Model theory
YUZHU CHEN, XUN HU, YANFENG LUO, OLGA SAPIR
Journal: Bulletin of the Australian Mathematical Society / Volume 94 / Issue 1 / August 2016
Published online by Cambridge University Press: 08 January 2016, pp. 54-64
Print publication: August 2016
For each positive $n$, let $\mathbf{u}_{n}\approx \boldsymbol{v}_{n}$ denote the identity obtained from the Adjan identity $(xy)(yx)(xy)(xy)(yx)\approx (xy)(yx)(yx)(xy)(yx)$ by substituting $(xy)\rightarrow (x_{1}x_{2}\ldots x_{n})$ and $(yx)\rightarrow (x_{n}\ldots x_{2}x_{1})$. We show that every monoid which satisfies $\mathbf{u}_{n}\approx \boldsymbol{v}_{n}$ for each positive $n$ and generates a variety containing the bicyclic monoid is nonfinitely based. This implies that the monoid $U_{2}(\mathbb{T})$ (respectively, $U_{2}(\overline{\mathbb{Z}})$) of two-by-two upper triangular tropical matrices over the tropical semiring $\mathbb{T}=\mathbb{R}\cup \{-\infty \}$ (respectively, $\overline{\mathbb{Z}}=\mathbb{Z}\cup \{-\infty \}$) is nonfinitely based.
Wilson Disease in the South Chinese Han Population
Nan Cheng, Kai Wang, Wenbin Hu, Daoyin Sun, Xun Wang, Jiyuan Hu, Renmin Yang, Yongzhu Han
Journal: Canadian Journal of Neurological Sciences / Volume 41 / Issue 3 / May 2014
Published online by Cambridge University Press: 23 September 2014, pp. 363-367
To prospectively investigate the incidence and prevalence of Wilson disease (WD) in Chinese Han population in Anhui Province, to analyze the genetic mutations in individuals with WD, and to provide basic epidemiological data regarding WD in this Chinese Han population.
Between November 2008 and June 2010, individuals aged from 7 to 75 years were screened for the cornea K-F ring in both eyes using slit lamp examination and random sampling methods based on age stratification and cluster level 1. The participants were from Anhui Province's Hanshan County, Jinzhai County, and Lixin County. The clinical manifestations of the brain, liver, kidney, skin, and other organs in each individual were also determined. Individuals with positive K-F rings and clinical manifestations indicative of WD underwent copper biochemistry evaluations, abdominal ultrasound testing, and ATP7B gene mutation screening to confirm or exclude the diagnosis of WD.
Of 153,370 individuals investigated in this study, nine were diagnosed with WD. Among these WD individuals, three had neurological symptoms, one had hepatic symptoms, one had combined hepatic and neurological symptoms, and the other four were presymptomatic. Of the eight individuals in whom genetic mutations were detected, seven had mutations in the ATP7B gene. The other individual had no ATP7B gene mutations but her copper biochemical test results met the diagnostic criteria for WD. The incidence and prevalence of WD in this population were approximately 1.96/100,000 and 5.87/100,000, respectively.
The Chinese Han population had a higher average prevalence of WD than the populations of the United States or Europe.
Expression of tissue type and urokinase type plasminogen activators as well as plasminogen activator inhibitor type-1 and type-2 in human and rhesus monkey placenta
ZHAO-YUAN HU, YI-XUN LIU, KUI LIU, SIMON BYRNE, TOR NY, QIANG FENG, COLIN D. OCKLEFORD
Journal: The Journal of Anatomy / Volume 194 / Issue 2 / February 1999
Published online by Cambridge University Press: 01 February 1999, pp. 183-195
Print publication: February 1999
The distribution of mRNAs and antigens of tissue type (t) and urokinase type (u) plasminogen activators (PA) plus their corresponding inhibitors, type-1 (PAI-1) and type-2 (PAI-2) were studied in human and rhesus monkey placentae by in situ hybridisation and immunocytochemistry. Specific monkey cRNA and antibodies against human tPA, uPA, PAI-1 and PAI-2 were used as probes. The following results were obtained. (1) All the molecules tPA, uPA, PAI-1 and PAI-2 and their mRNAs were identified in the majority of the extravillous cytotrophoblast cells of the decidual layer between Rohr's and Nitabuch's striae and in cytotrophoblast cells of the chorionic plate, basal plate, intercotyledonary septae and cytotrophoblast cells of the chorionic villous tree. (2) Expression of uPA and PAI-2 was noted in villous trophoblast whereas tPA and PAI-1 were mainly concentrated where detachment from maternal tissue occurs. (3) No expression of tPA, uPA, PAI-1 and PAI-2 was observed in the basal plate endometrial stromal cells, chorionic plate connective tissue cells, septal endometrial stromal cells or villous core mesenchyme. (4) The distribution of probes observed following in situ hybridisation is generally consistent with the immunofluorescence pattern of the corresponding antigens and no significant interspecies differences were noted. It is possible that both decidual and extravillous trophoblast cells of placentae of human and rhesus monkey are capable of producing tPA, uPA, PAI-1 and PAI-2 to differing extents. Coordinated expression of these genes in the tissue may play an essential role in the maintenance of normal placentation and parturition. The differences in distribution we observed are consistent with the suggestion that coordinated expression of tPA and its inhibitor PAI-1 may play a key role in fibrinolytic activity in the early stages of placentation and separation of placenta from maternal tissue at term. On the other hand, uPA with its inhibitor PAI-2 appears mainly to play a role in degradation of trophoblast cell-associated extracellular matrix, and thus may be of greatest importance during early stages of placentation.
Antioxidant and phytochemical analysis of Ranunculus arvensis L. extracts
Muhammad Zeeshan Bhatti1,
Amjad Ali2,
Ayaz Ahmad3,
Asma Saeed4 and
Salman Akbar Malik1
© Bhatti et al. 2015
Received: 9 September 2014
Ranunculus arvensis L. (R. arvensis) has long been used to treat a variety of medical conditions such as arthritis, asthma, hay fever, rheumatism, psoriasis, gut diseases and rheumatic pain. Here, we screened R. arvensis extracts for antioxidant activity and subjected them to phytochemical and high performance liquid chromatography (HPLC) analyses.
The chloroform, chloroform:methanol, methanol, methanol:acetone, acetone, methanol:water and water extracts of R. arvensis were examined using the DPPH (1,1-diphenyl-2-picrylhydrazyl) free radical scavenging, hydrogen peroxide scavenging, phosphomolybdenum and reducing power assays, and their flavonoid contents, phenolic contents and high performance liquid chromatography profiles were determined.
Significant antioxidant activity was displayed by the methanol extract (IC50 = 34.71 ± 0.02 µg/mL) in the DPPH free radical scavenging assay. Total flavonoids and phenolics ranged from 0.96 to 6.0 mg/g of extract, calculated as rutin equivalents, and from 0.48 to 1.43 mg/g of extract, calculated as gallic acid equivalents, respectively. Substantial amounts of rutin and caffeic acid were detected by high performance liquid chromatography.
These results showed that extracts of R. arvensis exhibited significant antioxidant activities. Moreover, R. arvensis is a rich source of rutin, flavonoids and phenolics.
Ranunculus arvensis
Phenolic content
Ranunculus arvensis L. (R. arvensis) belongs to the family Ranunculaceae and is commonly known as corn buttercup. It is widely used to treat arthritis, asthma, hay fever, rheumatism, psoriasis and gut diseases [1]. It is also used as a poultice around the knees and thumbs for rheumatic pain [2]. The fresh plant is toxic because it contains acrid sap that can cause blistering of the skin; however, its toxicity is abolished when the plant is dried [1]. Since the beginning of civilization, plants have been used to treat diseases, as sources of food, shelter, fodder, timber and fuel, and in health care [3]. Many plants are widely used in traditional medicines. They contain active chemical constituents that produce therapeutic physiological effects to treat a variety of diseases in both humans and animals [4]. Natural products from medicinal plants are considered chemically balanced, effective and less harmful, with minimal side effects compared with synthetic medicines. Medicinal plants have long been effectively used in both traditional and modern medicine as nutraceuticals as well as food supplements. The World Health Organization (WHO) estimated that 60–70% of the population of developing countries use medicinal plants for the treatment of ailments [5].
Certain diseases are caused by free radicals, which can inflict irreversible oxidative damage on living systems [6, 7]. Oxidation induced by reactive oxygen species results in membrane protein damage and DNA mutation, which can lead to the development and propagation of many disorders, such as tissue injury, cardiovascular disease, inflammation, mutation of genetic material, cancer and human neurological disorders [8–10]. Antioxidants can protect humans from these free radicals and/or delay the development of the diseases they cause [11, 12]. Synthetic antioxidants such as butylated hydroxyanisole (BHA) and butylated hydroxytoluene (BHT) have been used as antioxidant agents since the beginning of this century, but their use has been restricted owing to concerns about in vivo carcinogenic effects. Therefore, significant effort has been devoted to finding natural antioxidants to replace synthetic compounds [13]. Polyphenols are widely distributed in plants and play an important role in medicine. Flavonoids and phenolics are significant constituents of the human diet and many of them are natural antioxidants [14]. These phytochemicals have wide pharmacological and biological applications and can be used against coronary heart disease, cancer and mutagenesis [15].
To our knowledge, there are no reports on the antioxidant activity of R. arvensis. The present investigation was designed to determine antioxidant activity using the DPPH (2,2-diphenyl-1-picrylhydrazyl) free radical scavenging, hydrogen peroxide scavenging, phosphomolybdenum and reducing power assays, together with phytochemical screening (total flavonoid and total phenolic contents). Moreover, the different extracts were also profiled by high performance liquid chromatography (HPLC) analysis.
Preparation of plant extracts
Fresh R. arvensis (L.) was collected in May 2011 from F. R. Bannu (32°56′–33°16′ N latitude and 70°22′–70°52′ E longitude), located to the east of Bannu District, Khyber Pakhtunkhwa, Pakistan. Taxonomic identification of the plant was carried out by taxonomists at the Department of Plant Sciences, Quaid-i-Azam University, Islamabad, 45320, Pakistan and the Department of Botany, Government Post Graduate College, Bannu, Pakistan. The voucher specimen (AR-57) was deposited in the herbarium. The plant was rinsed with distilled water and shade dried. The extracts were prepared by soaking 30 g of ground plant powder in 300 mL of various solvents, i.e. chloroform, chloroform:methanol (1:1), methanol, methanol:acetone (1:1), acetone, methanol:water (1:1) and water. The flasks were placed in a shaking incubator (1575-2, Shel Lab., USA) at 150 rpm for 24 h at room temperature (28 ± 2°C) and sonicated for 5 min after 12 h. Each extract was filtered through Whatman No. 41 filter paper and concentrated in a rotary evaporator (BUCHI Rotavapor R-20, Switzerland) at 40°C. Fully dried extracts were packed in sealed containers and stored at −20°C for further experiments.
Aluminum chloride, ammonium molybdate, ascorbic acid (vitamin C), caffeic acid, catechin, dibasic sodium phosphate, 1,1-diphenyl-2-picrylhydrazyl (DPPH), ferric chloride, Folin–Ciocalteu reagent, gallic acid, hydrogen chloride, hydrogen peroxide (H2O2), kaempferol, monobasic sodium phosphate, myricetin, nitric acid, potassium acetate, potassium ferricyanide, quercetin, rutin, sodium carbonate, sodium phosphate, trichloroacetic acid, chloroform, methanol, acetone, and dimethyl sulphoxide (DMSO) were purchased from Sigma-Aldrich Chemical Co.
DPPH (2, 2-diphenyl-1-picrylhydrazyl) free radical scavenging assay
The free radical scavenging potential of the different extracts was determined according to the procedure of Kulisic with some modifications [16]. Aliquots of 50 µL of sample solution at various concentrations (25–400 μg/mL) were mixed with 950 µL of a methanolic solution of DPPH (3.4 mg/100 mL). The reaction mixture was incubated at 37°C for 1 h in the dark. The free radical scavenging potential of the extracts was expressed as the disappearance of the initial purple color. The absorbance of the reaction mixture was recorded at 517 nm using a UV–Visible spectrophotometer (Agilent 8453, Germany). Ascorbic acid was used as the positive control. DPPH scavenging capacity was calculated by using the following formula:
$${\text{Scavenging}}\;{\text{activity}}\;(\% ) = \left( {\frac{{{\text{Absorbance}}^{\text{control}} - {\text{Absorbance}}^{\text{sample}} }}{{{\text{Absorbance}}^{\text{control}} }}} \right)\; \times \;100$$
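As an illustration of how the scavenging percentages and the IC50 values reported in the Results can be computed from raw absorbance readings, a minimal sketch is given below; the absorbance values and the use of simple linear interpolation for the IC50 are illustrative assumptions, not data or procedures taken from this study.

```python
# Minimal sketch: percent DPPH scavenging from absorbance readings and an IC50
# estimate by linear interpolation. All numeric values below are placeholders,
# not measurements from this study.
import numpy as np

concentrations = np.array([25.0, 50.0, 100.0, 200.0, 400.0])  # µg/mL
abs_control = 0.92                                            # DPPH blank (hypothetical)
abs_sample = np.array([0.61, 0.44, 0.28, 0.17, 0.09])         # hypothetical readings

# Scavenging activity (%) = (A_control - A_sample) / A_control * 100
scavenging = (abs_control - abs_sample) / abs_control * 100.0

# IC50: concentration giving 50 % scavenging, here by linear interpolation on
# the measured curve (a fitted dose-response model could be used instead).
ic50 = np.interp(50.0, scavenging, concentrations)

print("Scavenging (%):", np.round(scavenging, 1))
print(f"Estimated IC50 ~ {ic50:.1f} µg/mL")
```

The same kind of calculation, with the appropriate control, applies to the hydrogen peroxide scavenging and phosphomolybdenum percentages described in the following sections.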
Hydrogen peroxide scavenging assay
The ability of the extracts to scavenge hydrogen peroxide (H2O2) was determined according to the method of Ruch et al. [17]. Aliquots of 0.1 mL of the extracts (25–400 μg/mL) were transferred into Eppendorf tubes and the volume was made up to 0.4 mL with 50 mM phosphate buffer (pH 7.4), followed by the addition of 0.6 mL of H2O2 solution (2 mM). The reaction mixture was vortexed and, after 10 min of reaction time, its absorbance was measured at 230 nm. Ascorbic acid was used as the positive control. The ability of the extracts to scavenge H2O2 was calculated using the following equation:
$${\text{H}}_{2} {\text{O}}_{2} \;{\text{scavenging}}\;{\text{activity}}\;{\text{percentage}} = [(A_{0} - A_{1} )/A_{0} ] \, \times \, 100$$
where: A0 = Absorbance of control, A1 = Absorbance of sample.
Phosphomolybdenum assay
For the phosphomolybdenum assay, the method of Prieto et al. was followed [18]. An aliquot of 0.1 mL of sample solution at different concentrations (25–400 μg/mL) was treated with 1 mL of reagent solution (0.6 M sulfuric acid, 28 mM sodium phosphate and 4 mM ammonium molybdate). The tubes were incubated at 95°C in a water bath for 90 min. The samples were cooled to room temperature and their absorbance was recorded at 765 nm. Ascorbic acid was used as the positive control. Antioxidant capacity was estimated by using the following equation:
$${\text{Antioxidant}}\;{\text{activity }}\% \; = \;[({\text{Absorbance control }} - {\text{ Absorbance sample}})/{\text{Absorbance control}}] \, \times \, 100.$$
Reducing power assay
The reducing power was determined according to the method of Oyaizu et al. with some modifications [19]. Aliquots of 0.2 mL of various concentrations of the extracts (25–400 μg/mL) were mixed separately with 0.5 mL of phosphate buffer (0.2 M, pH 6.6) and 0.5 mL of 1% potassium ferricyanide. The mixture was incubated in a water bath at 50°C for 20 min. After cooling to room temperature, 0.5 mL of 10% trichloroacetic acid was added, followed by centrifugation at 3,000 rpm for 10 min. The supernatant (0.5 mL) was collected and mixed with 0.5 mL of distilled water. Ferric chloride (0.1 mL of 0.1%) was added and the mixture was left at room temperature for 10 min. The absorbance was measured at 700 nm. Ascorbic acid was used as the positive control.
Determination of total flavonoid content
The total flavonoid content was determined by the aluminum chloride colorimetric method as described by Chang et al. with some modifications [20]. Aliquots of 0.5 mL of the various extracts (1 mg/mL) were mixed with 1.5 mL of methanol, followed by the addition of 0.1 mL of 10% aluminum chloride, 0.1 mL of potassium acetate (1 M) and 2.8 mL of distilled water. The reaction mixture was kept at room temperature for 30 min. Absorbance of the reaction mixture was recorded at 415 nm. The calibration curve (0–8 µg/mL) was plotted using rutin as a standard. Total flavonoids were expressed as mg of rutin equivalent per gram dry weight.
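A minimal sketch of the rutin-equivalent calculation is given below; the calibration absorbances and the sample absorbance are hypothetical, while the reaction volume (5 mL) and extract mass (0.5 mg per tube) are simply read off the pipetting scheme described above.

```python
# Minimal sketch: rutin calibration line and conversion of a sample absorbance
# into mg rutin equivalent (RE) per g of dry extract. Calibration and sample
# absorbances are placeholders, not data from this study.
import numpy as np

rutin_std = np.array([0.0, 2.0, 4.0, 6.0, 8.0])       # µg/mL rutin standards
abs_std = np.array([0.00, 0.11, 0.23, 0.34, 0.46])    # hypothetical A415 values

slope, intercept = np.polyfit(rutin_std, abs_std, 1)  # least-squares calibration

abs_sample = 0.035                                    # hypothetical extract A415
conc_re = (abs_sample - intercept) / slope            # µg RE per mL of reaction mix

# From the protocol above: 0.5 mL of a 1 mg/mL extract in a 5 mL reaction volume
total_volume_ml = 5.0
extract_mass_mg = 0.5
re_mg_per_g = conc_re * total_volume_ml / extract_mass_mg  # µg/mg is numerically mg/g

print(f"Total flavonoids ~ {re_mg_per_g:.2f} mg RE/g dry extract")
```

The total phenolic content in the next subsection is obtained in the same way, with a gallic acid calibration curve (0–25 μg/mL) in place of the rutin one.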
Determination of total phenolic content
The total phenolic content was determined according to the method of Velioglu et al. using the Folin–Ciocalteu reagent [21]. An aliquot of 0.1 mL of each extract (4 mg/mL) was mixed with 0.75 mL of Folin–Ciocalteu reagent (10-fold diluted with dH2O). The mixture was kept at room temperature for 5 min and 0.75 mL of 6% sodium carbonate was added. After 90 min of reaction, the absorbance was recorded at 725 nm. The standard calibration curve (0–25 μg/mL) was plotted using gallic acid. Total phenolics were expressed as mg gallic acid equivalent per gram dry weight. A negative control was prepared by adding 0.1 mL of DMSO instead of extract.
High performance liquid chromatography analysis
For the analysis of flavonoids and phenolics, stock solutions of caffeic acid, catechin, kaempferol, myricetin, rutin, quercetin and gallic acid were prepared in methanol (1 mg/mL). The solutions were filtered through a 0.2 µm Sartolon polyamide membrane filter (Sartorius). The calibration curves were constructed at 10, 20, 50, 100, 150 and 200 µg/mL. The crude extracts of R. arvensis were prepared at a concentration of 10 mg/mL in methanol, dissolved with the aid of sonication and filtered through a 0.2 µm Sartolon polyamide membrane filter (Sartorius). All samples were prepared fresh and used for analysis immediately.
The analysis was carried out using Agilent ChemStation Rev. B.02-01-SR1 (260) software and an Agilent 1200 series binary gradient pump coupled with a diode array detector (DAD; Agilent Technologies, Germany), fitted with a Discovery C18 analytical column (4.6 × 250 mm, 5 µm particle size; Supelco, USA). The method followed was that described by Zu et al. with slight modifications according to system suitability [22]. Briefly, mobile phase A was methanol:acetonitrile:water:acetic acid (10:5:85:1) and mobile phase B was methanol:acetonitrile:acetic acid (60:40:1). A gradient of 0–50% B from 0 to 20 min and 50–100% B from 20 to 25 min, followed by isocratic 100% B until 30 min, was used. The flow rate was 1 mL/min and the injection volume was 20 µL. Rutin and gallic acid were analyzed at 257 nm, catechin at 279 nm, caffeic acid at 325 nm, and quercetin, myricetin and kaempferol at 368 nm. Each time, the column was preconditioned for 10 min before the next analysis.
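A minimal sketch of how the chromatographic quantification can be organized is shown below; the gradient table is transcribed from the method above, whereas the calibration points and the peak area are invented placeholders rather than values from this study.

```python
# Minimal sketch: the binary gradient from the method above and an external
# calibration used to express an analyte as % of dry extract weight. The peak
# areas and calibration data are placeholders, not results from this study.
import numpy as np

# (time in min, % mobile phase B), as described in the text
gradient = [(0, 0), (20, 50), (25, 100), (30, 100)]

# Hypothetical external calibration for rutin at 257 nm: peak area vs µg/mL
cal_conc = np.array([10, 20, 50, 100, 150, 200], dtype=float)
cal_area = np.array([85.0, 168.0, 430.0, 845.0, 1280.0, 1710.0])
slope, intercept = np.polyfit(cal_conc, cal_area, 1)

sample_area = 380.0                                  # hypothetical rutin peak area
conc_ug_ml = (sample_area - intercept) / slope       # µg/mL in the injected solution

# Extracts were injected at 10 mg/mL (see above); express rutin as % of dry weight
extract_conc_ug_ml = 10.0 * 1000.0
percent_dry_weight = conc_ug_ml / extract_conc_ug_ml * 100.0

print("Gradient (min, %B):", gradient)
print(f"Rutin ~ {percent_dry_weight:.3f} % of dry extract weight")
```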
Results were expressed as mean ± standard deviation of three replicates. The CoStat statistical program 6.400® (2008©, USA) was used for statistical analysis. Analysis of variance (ANOVA) was performed together with Bartlett's test, and the least significant difference (LSD) test was applied to assess the significance of differences among concentrations and extracts.
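To make the statistical treatment concrete, a minimal sketch using hypothetical replicate values is shown below; it performs a one-way ANOVA across extracts and computes a Fisher-type least significant difference, which mirrors the ANOVA/LSD analysis described above but is not the authors' CoStat workflow.

```python
# Minimal sketch: one-way ANOVA across extracts and a Fisher LSD threshold.
# The replicate values are placeholders, not data from this study.
import numpy as np
from scipy import stats

groups = {  # e.g. % DPPH scavenging, three hypothetical replicates per extract
    "methanol":       np.array([71.2, 70.5, 72.0]),
    "methanol:water": np.array([65.1, 66.0, 64.4]),
    "water":          np.array([58.3, 57.6, 59.0]),
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Fisher least significant difference at alpha = 0.05 (equal group sizes)
k = len(groups)
n = len(next(iter(groups.values())))
df_error = k * n - k
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups.values())
mse = ss_error / df_error
lsd = stats.t.ppf(1 - 0.025, df_error) * np.sqrt(2.0 * mse / n)

print(f"LSD (p < 0.05) = {lsd:.2f}; pairs of means differing by more are significant")
```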
The antioxidant activity of the different extracts of R. arvensis was primarily assessed with 2,2-diphenyl-1-picrylhydrazyl (DPPH), an assay based on the ability of DPPH to react with proton donors such as phenols. Other members of the family Ranunculaceae have previously been assessed for free radical scavenging by several groups; however, the free radical scavenging ability of R. arvensis had not been reported. We show that R. arvensis exhibits significant free radical scavenging potential, especially its methanol extract (IC50: 34.71 µg/mL; Table 1). The percentages of free radical scavenging are given in Figure 1. The DPPH activity reported for Nigella sativa was EC50 29.40 ± 0.35 [23], while IC50 values of 106.56 and 121.62 μg/mL were reported for chloroform and ethyl acetate extracts, respectively [24]. Zengin et al. reported an IC50 value of 137.06 μg/mL for a crude extract of Centaurea urvillei [25]. These results show that R. arvensis is a good source of DPPH free radical scavenging activity compared with these previously studied species.
Table 1 IC50 values (µg/mL) of the various extracts of R. arvensis in the antioxidant assays (DPPH free radical scavenging, hydrogen peroxide scavenging, phosphomolybdenum and reducing power), determined for the chloroform, chloroform:methanol, methanol, methanol:acetone, acetone, methanol:water and water extracts, with ascorbic acid^a as positive control; LSD values are reported at p < 0.05. The table includes, among others, the entries 330.29 ± 0.01 and 52.58 ± 0.01 for the chloroform extract and 101.6 ± 0.01 for the chloroform:methanol extract.

LSD, least significant difference; CV, coefficient of variation; LD50, lethal dose 50%; IC50, half maximal inhibitory concentration.

^a Positive control values are expressed as ascorbic acid (AA) (average ± SD; n = 4).
Figure 1 Antioxidant activities of R. arvensis extracts: a DPPH free radical scavenging activity; b hydrogen peroxide scavenging activity; c molybdenum ion percentage reduction; d reducing capacity.
The scavenging effect of the different extracts of R. arvensis on hydrogen peroxide was concentration dependent (25–400 μg/mL), as shown in Figure 1 (P < 0.05). The methanol:water extract displayed strong H2O2 scavenging activity (IC50 43.53 µg/mL), whereas the water extract exhibited an IC50 of 51.27 µg/mL (Table 1). The differences in percentage inhibition of H2O2 among all extracts are shown in Figure 1 (P < 0.05). Among previously studied medicinal plants, Gymnema sylvestre exhibited good H2O2 scavenging activity (IC50 72.55 μg/mL), although comparatively less than R. arvensis [26]. Moreover, a Spondias pinnata extract showed an IC50 of 44.74 ± 25.61 mg/mL for H2O2 scavenging [27]. H2O2 occurs naturally at low concentrations in air, water, the human body, plants, microorganisms and food. It is quickly decomposed into oxygen (O2) and water (H2O) and may generate hydroxyl radicals (•OH) that can initiate lipid peroxidation and cause DNA damage. The methanol:water extract of R. arvensis efficiently scavenged hydrogen peroxide, which may be attributed to the presence of phenolic groups that can donate electrons to hydrogen peroxide, thereby neutralizing it to H2O.
This assay is based on the reduction of Mo(VI) to Mo(V) by a reductant, with the formation of a green phosphate–Mo(V) complex that shows an absorbance maximum at 695 nm. The antioxidant activities of almost all extracts were not significantly different. The chloroform extract showed the best total antioxidant capacity, with an IC50 value of 52.58 µg/mL (Table 1). The other extracts also exhibited appreciable IC50 values and molybdenum ion percentage reduction at P < 0.05 (Figure 1). The total antioxidant capacity of Centaurea urvillei has been reported as 39.70 mg ascorbic acid equivalents (AE)/g extract and 143.53 mg trolox equivalents (TE)/g extract [25]. Nigella sativa also expressed good activity in this assay (TEAC 36.38 ± 1.08) [23]. However, these findings are not directly comparable because of differences in solvents, measuring techniques and growth conditions.
In this assay, the presence of electron-donating compounds results in the reduction of Fe3+ (ferricyanide) to Fe2+ (ferrous). The results are shown in Figure 1. The reducing potential of the extracts, measured for concentrations up to 400 μg/mL, showed a general increase in activity as the concentration increased. Among the tested extracts, the methanol:water extract possessed the highest reducing capacity (absorbance 1.28 ± 0.05 at 700 nm). Hazra et al. [27] reported the same behavior in Spondias pinnata extracts. This concentration-dependent activity pattern was also followed by Consolida orientalis extracts, which performed best at 800 μg/mL [26].
Determination of total flavonoids content
Quantitative determination of total flavonoids was performed by complexing the extracts with aluminum chloride, which gives an intense yellow colour measured with a UV spectrophotometer. Total flavonoid contents are expressed as mg rutin equivalent (RE) per gram dry extract weight. Estimation of total flavonoid contents revealed the presence of flavonoids in all of the studied R. arvensis extracts except the chloroform extract. A significant amount of flavonoids was present in the methanol extract (6.00 ± 0.02 mg RE/g; Table 2), while the methanol:water extract (5.72 ± 0.01 mg RE/g) and the water extract (2.19 ± 0.01 mg RE/g) also contained appreciable amounts. A previous study indicated the presence of flavonoids in R. arvensis on the basis of a colour change of the sample [28]. Hussain et al. used a titration method to quantify flavonoids in R. arvensis (1.769 mg/100 g) [29]. This difference may be due to the different geographical distribution of the plant or to differences in methodology.
Table 2 Identification and quantification of flavonoids and phenolics in the seven crude extracts of R. arvensis by spectrophotometry and high performance liquid chromatography. The table reports total flavonoids (mg RE/g dry extract) and total phenolics (mg GAE/g dry extract) as mean ± SD, together with the HPLC profile of each extract: retention time RT (min), λmax (nm) and % of dry weight.

SD, standard deviation; RT, retention time.
Determination of total phenolics content
The quantitative determination of total phenolics was carried out using the Folin–Ciocalteu reagent in terms of gallic acid equivalents. Total phenolic content is expressed as mg gallic acid equivalent per gram dry extract weight. The total phenolics of R. arvensis varied from 0.48 to 1.43 mg GAE/g of extract. The highest amount was shown by the water extract (1.43 mg/g GAE), whereas the chloroform, chloroform:methanol, methanol:acetone and acetone extracts remained insignificant (Table 2). Our results are more substantial than those of Hachelaf et al., who detected the presence of phenolic acids in R. arvensis by a colour change of the sample [28]. Hussain et al. found phenolic contents of 0.848 mg/100 g in R. arvensis using a titration method [29]; similar work on two other Ranunculus species found the highest phenolic contents in the ethyl acetate extracts of R. marginatus (131.7 ± 4.2 mg/g GAE) and R. sprunerianus (140.2 ± 5.3 mg/g GAE) [30], which are comparable with our results for R. arvensis.
The crude extracts of R. arvensis were assessed against seven flavonoid and phenolic standards (caffeic acid, catechin, kaempferol, myricetin, rutin, quercetin and gallic acid). The HPLC profile of the methanol extract of R. arvensis showed the presence of rutin (0.44%) and caffeic acid (0.017%). In comparison with the methanol extract, a smaller amount of rutin (0.01%) was found in the methanol:water extract and a smaller amount of caffeic acid (0.008%) in the water extract (Figure 2; Table 2). Compounds belonging to the flavonoid and phenolic classes (flavonol glycosides of quercetin, kaempferol, isorhamnetin and their aglycons) were previously identified in another species of Ranunculus, R. sardous [31]. Previous studies showed the presence of quercetin-7-O-glucoside and rutin in R. peltatus extracts [32]. Noor et al. reported many flavonoids and phenolics from R. repens [33]. The presence of rutin in high quantities can be closely related to the lowest IC50 value, obtained for the methanol extract in the DPPH assay.
Figure 2 High performance liquid chromatography chromatograms of the flavonoids present in different extracts of R. arvensis.
To the best of our knowledge, this study provides new scientific information about R. arvensis based on phytochemical analysis, antioxidant potential and HPLC analysis. The various extracts of R. arvensis showed different levels of antioxidant activity in a variety of antioxidant assays. Quantitative and qualitative analyses of the crude extracts indicated the presence of bioactive compounds such as flavonoids and phenolics. Moreover, the data indicate that R. arvensis is also rich in rutin and caffeic acid. However, further studies are needed for the isolation of natural products with interesting biological and pharmacological properties.
Abbreviations
CE: chloroform extract
CME: chloroform:methanol extract
MAE: methanol:acetone extract
AE: acetone extract
MWE: methanol:water extract
WE: water extract
MZB contributed to the study design, data collection, laboratory work and writing of the manuscript. AA, AA and AS participated in data analysis, interpretation, drafting and writing of the manuscript. SAM participated in supervision and revision of the manuscript. All authors read and approved the final manuscript.
We are thankful to Higher Education Commission (HEC) of Pakistan for logistical support to conduct this study. We would like to acknowledge Dr. Ihsan-ul-Haq, Department of Pharmacy, Quaid-i-Azam University for his technical assistance in HPLC.
Competing interests: The authors declare that they have no competing interests.
Department of Biochemistry, Faculty of Biological Sciences, Quaid-i-Azam University, Islamabad, 45320, Pakistan
Institute of Biomedical Sciences, School of Life Science, East China Normal University, 500 Dongchuan Road, Shanghai, 200241, People's Republic of China
Department of Biotechnology, Abdul Wali Khan University, Mardan, 23200, Pakistan
Department of Biological Sciences, Gomal University, Dera Ismail Khan, 29050, Pakistan
Orak M, Ustundag M, Guloglu C, Tas M, Baylan B (2009) A skin burn associated with Ranunculus arvensis. Ind J Dermatol 54:19–20
Akbulut S, Semur H, Kose O, Ozhasenekler A, Celiktas M, Basbug M et al (2011) Phytocontact dermatitis due to Ranunculus arvensis mimicking burn injury: report of three cases and literature review. Intern J Emerg Med 4:1–5
Fabri RL, Nogueira MS, Braga FG, Coimbra ES, Scio E (2009) Mitracarpus frigidus aerial parts exhibited potent antimicrobial, antileishmanial, and antioxidant effects. Bioresour Technol 100:428–433
Kanwal S, Ullah N, Haq IU, Afzal I, Mirza B (2011) Antioxidant, antitumor activities and phytochemical investigation of Hedera nepalensis K. Koch, an important medicinal plant from Pakistan. Pak J Bot 43:85–89
Boligon AA, Pereira RP, Feltrin AC, Machado MM, Janovik V, Rocha JBT et al (2009) Antioxidant activities of flavonol derivatives from the leaves and stem bark of Scutia buxifolia Reiss. Bioresour Technol 100:6592–6598
Sikder MAA, Rahman MA, Islam MR, Abul KM, Kiasar MA, Rahman MS et al (2010) In-vitro antioxidant, reducing power, free radical scavenging and membrane stabilizing activities of Spilanthes calva. Bangladesh Pharmaceut J 13:63–67
Blazquez S, Olmos E, Hernandez JA, Fernandez-Garcia N, Fernandez JA, Piqueras A (2009) Somatic embryogenesis in saffron (Crocus sativus L.): histological differentiation and implication of some components of the antioxidant enzymatic system. Plant Cell Tiss Organ Cult 97:49–57
Zur I, Dubas E, Krzewska M, Janowiak F, Hura K, Pociecha E et al (2014) Antioxidant activity and ROS tolerance in triticale (xTriticosecale Wittm.) anthers affect the efficiency of microspore embryogenesis. Plant Cell Tiss Organ Cult. doi:10.1007/s11240-014-0515-3
Li L, Yi H (2012) Effect of sulfur dioxide on ROS production, gene expression and antioxidant enzyme activity in Arabidopsis plants. Plant Physiol Biochem 58:46–53
Hajaji HE, Lachkar N, Alaoui K, Cherrah Y, Farah A, Ennabili A et al (2011) Antioxidant activity, phytochemical screening, and total phenolic content of extracts from three genders of carob tree barks growing in Morocco. Arabian J Chem 4:321–324
Voravuthikunchaia SP, Kanchanapoomb T, Sawangjaroena N, Towatana NH (2010) Antioxidant, antibacterial and antigiardial activities of Walsura robusta Roxb. Nat Prod Res 24:813–824
Gill SS, Tuteja N (2010) Reactive oxygen species and antioxidant machinery in abiotic stress tolerance in crop plants. Plant Physiol Biochem 48:909–930
Miladi S, Damak M (2008) In-vitro antioxidant activities of Aloe vera leaf skin extracts. J Soc Chim Tunisie 10:101–109
Velasco P, Francisco M, Moreno DA, Ferreres F, García-Viguera C, Cartea ME (2011) Phytochemical fingerprinting of vegetable Brassica oleracea and Brassica napus by simultaneous identification of glucosinolates and phenolics. Phytochem Anal 22:144–152
Shah SMM, Sadiq A, Shah SMH, Ullah F (2014) Antioxidant, total phenolic contents and antinociceptive potential of Teucrium stocksianum methanolic extract in different animal models. BMC Complement Altern Med 14:181
Kulisic T, Radonic A, Katalinic V, Milos M (2004) Use of different methods for testing antioxidative activity of oregano essential oil. Food Chem 85:633–640
Ruch RJ, Cheng SJ, Klaunig JE (1989) Prevention of cytotoxicity and inhibition of intercellular communication by antioxidant catechins isolated from Chinese green tea. Carcinogenesis 10:1003–1008
Prieto P, Pineda M, Aguliar M (1999) Spectrophotometric quantitation of antioxidant capacity through the formation of phosphomolybdenum complex: specific application to the determination of vitamin E. Anal Biochem 269:337–341
Oyaizu M (1986) Studies on product of browning reaction prepared from glucose amine. Japanese J Nutr 44:307–315
Chang C, Yang M, Wen H, Chern J (2002) Estimation of total flavonoid content in propolis by two complementary colorimetric methods. J Food Drug Anal 10:178–182
Velioglu YS, Mazza G, Gao L, Oomah BD (1998) Antioxidant activity and total phenolics in selected fruits, vegetables, and grain products. J Agri Food Chem 46:4113–4117
Zu Y, Li C, Fu Y, Zhao C (2006) Simultaneous determination of catechin, rutin, quercetin, kaempferol and isorhamnetin in the extract of sea buckthorn (Hippophae rhamnoides L.) leaves by RP-HPLC with DAD. J Pharmaceut Biomed Anal 41:714–719
Kirca A, Arslan E (2008) Antioxidant capacity and total phenolic content of selected plants from Turkey. Intern J Food Sci Technol 43:2038–2046
Meziti A, Meziti H, Boudiaf K, Mustapha B, Bouriche H (2012) Polyphenolic profile and antioxidant activities of Nigella sativa seed extracts in vitro and in vivo. World Acad Sci Eng Technol 64:24–32
Zengin G, Aktumsek A, Guler GO, Cakmak YS, Yildiztugay E (2011) Antioxidant properties of methanolic extract and fatty acid composition of Centaurea urvillei DC. hayekiana Wagenitz. J Nat Prod 5:123–132
Shah KA, Patel MD, Parmar PK, Patel RJ (2010) In-vitro evaluation of antioxidant activity of Gymnema sylvestre. Deccan J Nat Prod 1:1–7
Hazra B, Biswas S, Mandal N (2008) Antioxidant and free radical scavenging activity of Spondias pinnata. BMC Complement Altern Med 8:1–10
Hachelaf A, Zellagui A, Touil A, Rhouati S (2013) Chemical composition and analysis of antifungal properties of Ranunculus arvensis L. Pharmacophore 4(3):89–91
Hussain I, Ullah R, Ullah R, Khurram M, Ullah N, Baseer A et al (2011) Phytochemical analysis of selected medicinal plants. Afr J Biotechnol 10:7487–7492
Kaya GI, Somer NU, Konyalioglu S, Yalcin HT, Yavasoglu NUK, Sarikaya B et al (2010) Antioxidant and antibacterial activities of Ranunculus marginatus var. trachycarpus and R. sprunerianus. Turk J Biol 34:139–146
Campos MG, Webby RF, Markham KR, Mitchell KA, Dacunha AP (2003) Age-induced diminution of free radical scavenging capacity in bee pollens and the contribution of constituent flavonoids. J Agri Food Chem 51:742–745
Prieto JM, Recio MC, Giner RM, Schinella GR, Manez S, Rios JL (2008) In-vitro and in vivo effects of Ranunculus peltatus subsp. baudotii methanol extract on models of eicosanoid production and contact dermatitis. Phytother Res 22:297–302
Noor W, Gul R, Ali I, Choudary MI (2006) Isolation and antibacterial activity of the compound from Ranunculus repens L. J Chem Soc Pak 28:271–274
|
CommonCrawl
|
Vinogradov, Aleksandr Pavlovich
Statistics Math-Net.Ru
Total publications: 8
Scientific articles: 8
Member of the USSR Academy of Sciences
Doctor of chemical sciences
http://www.mathnet.ru/eng/person125394
Publications in Math-Net.Ru
1. A. P. Vinogradov, K. P. Florenskii, A. T. Bazilevsky, A. S. Selivanov, "First panoramas of the Venusian surface (preliminary analysis of images)", Dokl. Akad. Nauk SSSR, 228:3 (1976), 570–572
2. A. P. Vinogradov, Yu. A. Surkov, L. P. Moskaleva, F. F. Kirnozov, "Measurement of Martian gamma-radiation intensity and spectral composition from the Mars 5 automatic interplanetary station", Dokl. Akad. Nauk SSSR, 223:6 (1975), 1336–1339
3. A. P. Vinogradov, Yu. A. Surkov, F. F. Kirnozov, V. N. Glazov, "Natural radioactive element content in Venusian rock (results of "Venera-8" space probe experiment)", Dokl. Akad. Nauk SSSR, 208:3 (1973), 576–579
4. A. P. Vinogradov, G. P. Vdovykin, "Analogue of polynucleotide in meteorites", Dokl. Akad. Nauk SSSR, 206:3 (1972), 563–565
5. A. P. Vinogradov, A. L. Devirts, È. I. Dobkina, B. I. Ogorodnikov, I. V. Petryanov, "Stratospheric concentrations of $\mathrm{C}^{14}$ in 1967–1969", Dokl. Akad. Nauk SSSR, 205:4 (1972), 824–826
6. A. P. Vinogradov, Yu. A. Surkov, B. M. Andreichikov, "Investigation of the composition of Venus atmosphere at automatic space probes "Venus-5" and "Venus-6"", Dokl. Akad. Nauk SSSR, 190:3 (1970), 552–554
7. A. P. Vinogradov, Yu. A. Surkov, K. P. Florenskii, B. M. Andreichikov, "Determination of the chemical composition of the atmosphere of the Venus by the space probe Venus-4", Dokl. Akad. Nauk SSSR, 179:1 (1968), 37–40
8. A. P. Vinogradov, Yu. A. Surkov, G. M. Chernov, "Investigation of the intensity and spectral composition of the Moon's gamma-radiation at the Luna-10 automatic station", Dokl. Akad. Nauk SSSR, 170:3 (1966), 561–564
Institute of Geochemistry and Analytic Chemistry, Academy of Sciences of the USSR
|
CommonCrawl
|
Nanobeam photonic crystal modulator
FDTD CHARGE Photonic Crystal Photonic Integrated Circuits - Active
In this example, we will characterize the performance of a nanobeam photonic crystal (PC) electro-optic modulator using CHARGE (electrical simulation) and FDTD (optical simulation).
nanobeam_pc_cavity_field.mat
nanobeam_pc_eomod.fsp
In a photonic crystal (PC) structure, changes to the effective index of the cladding will result in a shift in the resonant frequency, which can be used to modulate an optical signal [1]. This change in effective index is driven by modulation of the electric field within the resonant cavity. To simulate the electric field established in the dielectric between two doped silicon arms of the nanobeam, a self-consistent simulation of the charge and electrostatic potential is performed using CHARGE. To calculate the effect of the change in the electric field on the transmission of the PC, an FDTD simulation will be run. A structure group in FDTD will use the DC electric field data from CHARGE to calculate the corresponding changes in the real and imaginary parts of refractive index of the material based on the formulation presented in [1].
$$\Delta n=\frac{\gamma_{33}}{2} n_{di}^{3} E$$
where E is the magnitude of the electric field and the dielectric (an electro-optically active polymer) has baseline refractive index ndi = 1.6. The EO coefficient γ33 is 150pm/V [1].
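As a quick illustration of how this formula is used (not part of the original example), the index perturbation for a representative field can be evaluated directly; the applied voltage and slot width below are assumed placeholder values only, and the real field distribution comes from the CHARGE simulation described next.

# Illustrative evaluation of the index-perturbation formula (Python).
# The 0.2 V bias and 100 nm slot width are assumptions for this sketch only.
gamma_33 = 150e-12       # m/V, EO coefficient of the polymer (from the text above)
n_di = 1.6               # baseline refractive index of the EO polymer
E = 0.2 / 100e-9         # V/m, assumed field: 0.2 V dropped across a 100 nm slot

delta_n = 0.5 * gamma_33 * n_di**3 * E
print(delta_n)           # roughly 6e-4 index change for these assumed values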
Simulation Setup
Electrical Simulation
The PC structure in the waveguide modulator can be set up in CHARGE using a construction group. Using a construction group enables the parameterization of the design, including the grating period, slot width, etc. Open the nanobeam_pc_eomod.ldev project in CHARGE. The dimensions of the waveguide and PC reflect the structures in the referenced paper by Biao Qi et al. [1]. The component is fabricated in an SOI process, where a thin layer of silicon is formed epitaxially on a buried oxide. The silicon is then etched back to pattern the waveguide structures. The peak doping concentrations and dimensions of the doping profile in the referenced paper are used to define the contact and background doping profile in the waveguide. Analytic models are used to construct the doping profile; however, it can also be imported from a process simulation result. The top surface is passivated with an electro-optic (EO) polymer, which also fills the holes and slot in the PC.
In the referenced device, a cavity is formed at the centre of the PC. A silicon waveguide is etched with a central slot and a periodic array of holes. The hole radius and spacing are tapered towards the cavity, as illustrated in the figure below. The dashed line represents the symmetry line at y=0. The first five holes extending to the right form the taper. In the layout, +y is chosen as the direction of optical propagation. Holes with the regular periodicity and radius extend over the length of the active area (16um).
The PC design is parameterized by a set of properties associated with the "pc" structure group in the CHARGE layout. These properties can be modified to adjust the layout of the holes and slot within the waveguide.
In CHARGE, two simulations are run: one for the cavity and one for a single regular hole in the PC. While the entire component could be simulated at once, this approach is more efficient, relying on the assumption that the component exhibits mirror symmetry at the simulation region boundaries in the y-direction. The electric field information is recorded by the electric field monitors, and stored in a file that will be loaded in the optical simulation. The electric field monitors are also used to calculate the capacitance of the PC by evaluating the differential charge at each bias. A mesh override region has been added in the slot to further refine the mesh in that region, such that the electric field can be accurately sampled.
Optical Simulation
The layout for the optical simulation mirrors that of the electrical simulation. Open the nanobeam_pc_eomod.fsp file in FDTD. The electrical simulation calculated the electric field in response to the applied voltage. To represent a material whose index varies spatially in response to the distribution of that electric field, (n,k) import material objects are used. These objects are constructed using a structure group that imports the electric field data calculated with CHARGE and performs an interpolation to the rectilinear grid in FDTD. Two objects are used to represent the two (symmetric) sides of the taper, and the field profile for a regular hole is repeated several times to define the remainder of the PC.
Note: the electrical simulation must first be run in CHARGE to generate the data required to construct the waveguide with a spatially varying index in FDTD. Alternatively, the provided data files, nanobeam_pc_cavity_field.mat and nanobeam_pc_grating_field.mat, can be downloaded into your working directory.
The constructed "field dependent index material" object with spatially varying index is rectangular, and is defined for the extents of the waveguide. To correctly represent the EO polymer, a construction group (similar to the one used to define the PC in CHARGE) is used to define the surrounding silicon of the waveguide. This material is given a higher priority by setting its mesh order before that of the material with the spatially varying index. As a result, the silicon will cut away the excess material from the EO polymer, such that the EO polymer will only be defined for the volume within the holes and slot. A background dielectric with the same base refractive index is used to fill the remainder of the volume.
To excite modes in the PC, dipoles are added to the FDTD simulation volume. A time monitor is used to record the electric field at the centre of the cavity and determine the wavelength of the resonance as well as the quality factor. Note that mesh overrides are used to ensure the correct mesh period such that the uniform holes in the grating are discretized in the same way. For more information, users are encouraged to review the tutorial on the photonic crystal cavity for a discussion of simulation techniques required for accurate results.
Electrical Bandwidth
Load the nanobeam_pc_eomod.ldev project in CHARGE. The intrinsic bandwidth of the modulator is calculated by treating the component as a series RC circuit. This is illustrated in the schematic below.
The resistance is dominated by the low conductance of the thin silicon slab connecting the waveguide to the adjacent contacts. An important design tradeoff is made between modal confinement (thin silicon) and low loss (thicker silicon). The resistance of the slab can be simulated by adding a "test" contact that connects to the waveguide on the same side of the slot as one of the two contacts, such that a continuous circuit is formed through the silicon (the "anode_test" contact in the project is added for this purpose). Align the solver region with this cross-section, set the solver geometry to 2D y-normal, and run the simulation. The following script commands can be used to reposition the simulation region programmatically.
switchtolayout;
setnamed('CHARGE simulation region', 'dimension','2D Y-Normal');
setnamed('CHARGE simulation region', 'y', 8.5e-6);
A 2D simulation is run at two bias points (0V and 0.1V). The simulation results are normalized to a length of 1cm, and the conductance per unit length can be found as the ratio of the current to the voltage. The following script commands calculate the length-normalized resistance, then scale it to the overall device length (14.8um):
test_contact = getresult('CHARGE','anode_test');
I_test = pinch(test_contact.I);
R_test = 0.1/I_test(2); # ohm-cm
Rslab = R_test/14.8e-4; # ohm
The physical structure of the PC resonator is that of two doped semiconductor regions separated by a dielectric, forming a semiconductor-insulator-semiconductor (SIS) capacitor. The capacitance can be calculated as a function of voltage by determining the change in net charge stored on one side of the structure when the voltage is perturbed. By running two simulations at bias voltage V and V+ΔV, the capacitance can be estimated as
$$C_{n, p} \approx \frac{Q_{n, p}(V+\Delta V)-Q_{n, p}(V)}{\Delta V}$$
The script file nanobeam_pc_run_cv.lsf can be used to set up and run a DC sweep over the range from 0-0.5V with a perturbation of ΔV=25mV at each step. The net charge is calculated using an electric field monitor, which will integrate the electric flux through the surface (Gauss's law). Two sweeps are performed: one for the half-cavity, and one for a single hole in the grating. The script file will combine both results and perform the capacitance calculation. The capacitance is nearly constant as a function of applied bias, as expected for an SIS capacitor.
The bandwidth can then be estimated from the product of the net resistance and capacitance,
$$f_{3 d B} \approx \frac{1}{2 \pi R C}$$
The bandwidth from this calculation is estimated as approximately 120GHz.
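As a back-of-the-envelope cross-check of this estimate (not part of the original example), the series-RC expression can be evaluated directly; the resistance and capacitance below are illustrative placeholders chosen only to land near the quoted value, and the actual numbers should come from the CHARGE sweeps described above.

import math

R_slab = 100.0       # ohm, series slab resistance (placeholder, not the extracted value)
C_total = 13e-15     # F, total capacitance of the 14.8 um section (placeholder)

f_3dB = 1.0 / (2.0 * math.pi * R_slab * C_total)
print("f_3dB = %.0f GHz" % (f_3dB / 1e9))   # ~120 GHz for these placeholder values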
Electrostatic Field Distribution
To determine the perturbation to the refractive index of the polymer within the cavity and grating of the PC, the electric field is required. This quantity is calculated in a typical CHARGE simulation, and can be recorded for the waveguide region using an electric field monitor. Using the same project as for the bandwidth calculation, change the DC sweep type for the anode contact from "value" to "range." This will change the simulation to sweep over biases from 0 to 0.5V without perturbing the voltage at each bias point. The following script command will also change the contact mode.
select('CHARGE::boundary conditions::anode');
set('sweep type','range');
Set the solver region to span the tapered region of the PC by setting the properties y min = 0um and y max = 2.251um. Change the solver region geometry to 3D. Enable the monitors wg_cavity and wg_grating. Run the simulation for the taper and cavity first. The electric field monitor wg_cavity will automatically save the electric field data to file. When the simulation is complete, the electric field in the taper area can be visualized from the data recorded by the cavity_field_xsec monitor. The magnitude of E is shown in the plot below at V = 0.2V. Note that the electric field is concentrated in the narrow slot regions, but becomes much weaker in the wider holes.
Next, shift the simulation region to span one hole in the grating: set properties y min = 3.52um and y max = 3.943um. Run the simulation. In this simulation, the electric field monitor wg_grating will save the electric field data to a second file.
Electro-optic Modulator Response
Load the nanobeam_pc_eomod.fsp project in FDTD. In this layout, (n,k) import objects are constructed using the electric field data from the electrical simulation according to the formula presented in the first section. This object is restricted to the PC grating and cavity and is composed of three adjacent structures: one representing the taper and cavity, and two others that repeat the result for the single hole on either side of the taper to represent the grating. Outside of that region, an idealized index material is used to represent the background EO polymer, with an index of 1.6. An index monitor has been included to visualize the index profile. The selected bias point for simulation can be changed by setting the "index_V" property of the "field dependent index material" structure group, which contains the (n,k) import objects constructed from the electric field data. There are six bias points ranging from 0 to 0.5V. At 0.2V (index_V = 3), the index in the vicinity of the cavity is shown in the figure below. Note that the scale of the colorbar is restricted to values around the base index of the EO polymer (the index of the surrounding silicon is approximately 3.48).
In the optimizations and sweeps toolbox, a sweep named "voltage" has been included which will vary the bias point used to determine the field-dependent index of the EO polymer. Using a modified high Q analysis object from the object library, the Q factor and resonant peak of the cavity will be recorded. Running this sweep will simulate the response of the PC as a function of voltage, and generate the following figures. At 0V, the resonant frequency is approximately 186THz, with a Q factor of approximately 5300. Therefore, the FWHM is estimated as 35GHz, which requires a wavelength shift of 0.30nm. From the plot of the wavelength of resonance vs. applied voltage, a slope of 0.9nm/V is obtained, such that a bias of approximately 0.33V would be required to shift the spectrum by the FWHM. For a 14.8um device, this corresponds to a Vπ-L value of 0.5mV-cm. These results are comparable to the values obtained in the referenced publication [1].
Modulation response for the photonic crystal electro-optic modulator, showing variation in (a) the quality factor and (b) the wavelength of resonance as a function of the applied voltage.
Biao Qi, et al., "Analysis of Electrooptic Modulator With 1-D Slotted Photonic Crystal Nanobeam Cavity," IEEE Photonics Tech. Lett., 23, 992 (2011) [doi]
PN depletion phase shifter
Interleaved PN Junction Micro-ring Modulator
nanobeam_pc_run_cv.lsf
nanobeam_pc_grating_field.mat
nanobeam_pc_eomod.ldev
Photonic crystals - list of examples
Traveling wave Mach-Zehnder modulator
Nanobeam grating
Photonic integrated circuits - Actives - list of examples
Electro-absorption modulator
|
CommonCrawl
|
O'Neill/McKendree Looping River
Is there a way to design a river system in an O'Neill/McKendree-style cylindrical habitat to passively feed into itself in an endless loop, from one end of the habitat to the other and back again?
Reworded: is the Coriolis effect or other innate properties of a spinning habitat up to the task of circulating water, river-like, the length and breadth of the structure? (If so, I would expect uphill flow to be possible in antispinward channels.)
The river must flow as a river does – making water sit still isn't difficult to figure out – without use of pumps. Assume the primary courses/channels are artificially constructed and maintained, which allows for forking and variable depth/width/etc. The system can use underground channels (vertical, lateral, angled) to take advantage of differences in pressure between the inner surface and hull. Dams, reservoirs, lakes, etc, can all play a role.
science-based engineering water megastructures
rek
$\begingroup$ Can this question be reduced to 'can water orbit a central point'? $\endgroup$ – Twelfth Sep 28 '17 at 16:42
$\begingroup$ Orbit doesn't describe flow, and I'm not sure if simplifying the system to a 2d cross section wouldn't eliminate some of the options, or could translate to lengthwise (spin-neutral) motion. $\endgroup$ – rek Sep 28 '17 at 16:48
$\begingroup$ The water can flow like how we have ocean currents but will not flow like rivers. The effect of the rotation will create a drag on the atmosphere and water, and it will move. This loses energy from the system over time, though on Earth this loss is rather small relative to the planet's scale. In your system, keeping the habitat rotating will require energy. oceanservice.noaa.gov/education/tutorial_currents/… $\endgroup$ – A. C. A. C. Sep 28 '17 at 17:23
$\begingroup$ @A.C.A.C. Are you suggesting a wind- and/or temperature gradient-based system, i.e. wide flat sections driven by wind, deep narrow sections driven by convection currents? (It's a given that such a habitat will need energy to maintain spin.) $\endgroup$ – rek Sep 28 '17 at 21:20
$\begingroup$ I don't feel like I have the right knowledge to expand this into a full answer, but wouldn't a McKendree cylinder that wasn't tidally locked with the planet it is orbiting have tides? If you tuned the free variables in this situation, could the revolution of the tides give something resembling a 'loop river'? $\endgroup$ – Lex Sep 29 '17 at 7:58
I believe that there should be some way of achieving this although it might require multiple cylinders for it to work. Consider an arrangement of 4 rotating cylinders such as this:
The water in the rotating cylinder at the top of the diagram would flow downhill (from left to right). When it reaches the lowest point it is collected in the reservoir on the far right projecting "below" the first cylinder. Once per revolution, when the reservoir is directly over the centre of the adjacent cylinder, its bottom opens and centrifugal force pushes the water into that cylinder, and the process is repeated.
Although it may appear that I am suggesting perpetual motion, I am not. The energy required would ultimately come from slowing of rotation of the cylinders by a small amount.
There are many objections to this design on practical grounds such as transferring the water through a vacuum. However the basic principle stands and such issues could be greatly minimised by careful design improving on my basic proof of concept idea.
Edit: Mark 2
Edit: Mark 3. A counter-rotating end torus forces water outwards and back to the central axis of the main cylinder by centrifugal force.
Slarty
$\begingroup$ This is a brilliant solution though I want to point out that the internal appearance would cease to be a cylinder and look more conical. Plus you would need 3-4 of these suckers at least. It also doesn't really wrap the cylinder which I think was the OP's desire. $\endgroup$ – anon Sep 28 '17 at 22:43
$\begingroup$ @anon I see what you are saying, but let me clarify. Concerning the conical issue, I have exaggerated the slope for illustrative purposes. The height would probably not need to be more than a few percent of the length or even less. It would also not have to cover the entire inner surface nor would it have to run in a straight line. Think of a very shallow winding aqueduct disguised with hills and mountains. Additionally with suitable piping the water could flow from either end or even from one end to the other and back again. $\endgroup$ – Slarty Sep 28 '17 at 23:00
$\begingroup$ @anon I suspect the mark 2 could be refined further still the pipe could run the whole length of the main cyinder with a smaller "pump-transfer" cylinder at one end $\endgroup$ – Slarty Sep 28 '17 at 23:33
$\begingroup$ The Mk II version would work perfectly if you had two counter-rotating cylinders of equal size as well. $\endgroup$ – Joe Bloggs Sep 29 '17 at 8:23
$\begingroup$ I feel like the Mark 3 is just an overly complicated pump. $\endgroup$ – Lex Sep 29 '17 at 22:59
Water flow dissipates energy. On Earth that energy is supplied by gravitational field and from the sun.
If your system doesn't have a supply of energy to the flow, the water is going, sooner or later, to stand still.
The centrifugal force will only help distribute the water on the walls of the cylinder. To move it up some hill you cannot avoid using some pumping mechanism.
$\begingroup$ This is true, but OP never said that there is no source of energy. His habitat is spinning and he asks if that can make river flow. Energy of the spinning is an energy. How habitat is kept spinning is another matter. $\endgroup$ – Mołot Sep 28 '17 at 17:29
$\begingroup$ There is no such thing as centrifugal force in these systems; there is only angular momentum and centripetal force. Combined, the two give the illusion of centrifugal force and what is called the Coriolis effect. The question here is if the angular momentum can cause a river to flow. Note that if this works, the water would almost certainly flow the opposite direction from the spin. The water would be attempting to stay stationary while the ground revolved underneath it. $\endgroup$ – Brythan Sep 28 '17 at 18:21
$\begingroup$ @Brythan: that's just a matter of definition. Define your frame of reference as constantly rotating and there most certainly are centrifugal and Coriolis forces. $\endgroup$ – Joe Bloggs Sep 28 '17 at 19:10
$\begingroup$ As an O'Neill-style habitat there is energy in the form of spin, which produces spinward and outward (down, from the perspective of the inner surface) motion from anything not held to the surface. There's also heating in the form of a simulated day/night cycle, but I don't expect that to factor. $\endgroup$ – rek Sep 28 '17 at 20:53
$\begingroup$ @rek Yes and the spinning cannot cause a river to flow, unless it is uneven. The goal is to have an even spin that provides constant gravity so people don't bounce around the inside. If you have uniform gravity, then the river will not flow anywhere but down hill, and getting it to go back uphill will not work with just the spin. $\endgroup$ – Braydon Sep 29 '17 at 2:47
The Coriolis acceleration is $$\mathbf{a}_c=-2\mathbf{\Omega}\times\mathbf{v}$$ where $\mathbf{\Omega}$ is the angular velocity vector of the cylinder and $\mathbf{v}$ is the velocity vector of the river. $\mathbf{\Omega}$ is along the axis of rotation of the cylinder. Let's look at two cases:
$\mathbf{v}$ is parallel to $\mathbf{\Omega}$. Here, $\mathbf{a}_c=\mathbf{0}$, because the cross product of two parallel vectors is zero.
$\mathbf{v}$ is tangent to the circular cross-section of the cylinder. Here, $\mathbf{a}_c$ is pointed inwards, to the central axis. From the point of view of a person on the ground, this is a vertical force, not a horizontal force.
On the inside of the cylinder - not the caps - the Coriolis force won't have any "horizontal" effects on the flow of rivers.
Maybe you're not convinced. Consider the Coriolis acceleration on Earth's equator. There's no horizontal component to the acceleration, right? Well, on the cylinder, the edge of every cross-section is like the equator, at the same distance from the axis.
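A small numerical check of the two cases (added here for illustration, not from the original answer), written in Python with the rotation axis along z and a point on the inner surface lying on the x-axis; the magnitudes are arbitrary.

import numpy as np

Omega = np.array([0.0, 0.0, 0.5])          # habitat spin (rad/s), along the cylinder axis z

v_axial = np.array([0.0, 0.0, 10.0])       # water flowing parallel to the axis
v_tangential = np.array([0.0, 10.0, 0.0])  # water flowing along the circumference

a_axial = -2.0 * np.cross(Omega, v_axial)            # -> [0, 0, 0]: no Coriolis acceleration
a_tangential = -2.0 * np.cross(Omega, v_tangential)  # -> [10, 0, 0]: purely radial, i.e. "vertical"

print(a_axial, a_tangential)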
HDE 226868♦
I don't think it'll work without active control. Even then, your plan would borrow energy from the rotation which would be bad in the long run.
As has been pointed out, rivers require weather. You have to evaporate water from the low lying pools and release it as rain on higher ground.
Theoretically you could taper the inside of the cylinder near the ends (making the ends of the cylinder "high ground"). Then, with differential heating, make the center of the cylinder warmer and try to start an air circulation pattern running from the center to each end along the ground (with return through the axis). This would cause the air to release the moisture that it picked up at the center. This will form a cycle that will allow rivers to flow.
The problem that I see with that is that it relies on the rising air releasing its water. In the rotating cylinder, the gravity decreases dramatically as you approach the center. The lower gravity would allow the air to hold larger droplets of water before they are heavy enough to fall.
Would this be enough to prevent weather in the cylinder? I don't know.
If that is the case, this might still work if you have water condensers on the ends (or, maybe, on a spine running through the axis) and get your water out of the air that way.
ShadoCat
$\begingroup$ The question is not whether rivers (and weather systems) could or would form, but whether a looping river could be constructed to take advantage of the properties of the habitat. $\endgroup$ – rek Sep 28 '17 at 21:15
$\begingroup$ @rek, as I mentioned at the top, I don't think it would for reasons other people have stated. I proposed some possible work arounds. $\endgroup$ – ShadoCat Sep 28 '17 at 21:24
First, the hard facts: what makes a river work on Earth is gravity and differences in elevation, i.e. ultimately a difference in potential energy. In this system it is not possible per se to use a difference in elevation as the driver of the river because it must wrap around (water can't travel uphill at any point in the stream).
Fun Natural facts of the Water Cycle:
Mountains are a major contributor to the formation of rivers as water sources. When water vaporizes in the water cycle into the atmosphere it's looking for ways to cool and condense and return. Because of the elevation of mountains they can act as a condensation point collecting the water vapor as rain, mist, dew, etc. This water then flows down the mountain and feeds rivers.
How to make this work:
Strategically place mountains near your river and build collection channels so that they feed their water into the main river. Have all of the streams pointed in the same direction (e.g. to the right of the mountain into the river). Because all of the river's tributaries are flowing water in from the same direction, this will result in a current flowing in one direction. The river can then be wrapped back into itself.
One important detail:
Water must be consumed from the river or else it would just pool. But this could be used to water fields, civilization, or intentionally be vaporized to further fuel the cycle.
But this is essentially a river; it's basically the same concept as a lazy river at a water park.
Ah heck, I just drew you a picture instead; deal with its quality:
(I misspelled "feed" as "feen", oops.)
Regardless, you are using the energy of the water runoff from the mountains to drive the current of the river.
anon
$\begingroup$ Explaining how the water cycle works isn't necessary for answering this question, and suggesting a water cycle doesn't fulfill the stipulation that the river be a continuous loop. $\endgroup$ – rek Sep 28 '17 at 21:10
$\begingroup$ You have to read the whole answer. Im using the water run off from the mountains to give the river directional flow. In short its like those jets in a lazy river, they are all aimed in the same direction. $\endgroup$ – anon Sep 28 '17 at 21:14
$\begingroup$ If you have ever poured water into a pot such that it spins, this is mechanically similar to that. $\endgroup$ – anon Sep 28 '17 at 21:46
Only if you're willing to vary the rotational speed of the entire station.
Objective : a passively powered lazy river inside an O'Neill cylinder.
That it be an endless loop is actually a requirement. It will flow backwards until it's done contending with the first law of motion: the first part of which makes the water remain where it is, while the second part eventually accelerates it through the friction of the riverbed, at which point it no longer 'flows'. Once that happens you have to slow the station down and let it reset, and then you can repeat the process.
Or, enjoy it while it lasts. After a while once the station is up to speed, without pumps or a water cycle, the water will stagnate.
Mazura
If we imagine a river that circles the habitat around the axis of rotation I think such a river will have an apparent flow from the perspective of an observer standing on the banks. I will explain my reasoning below.
The first thing we need to address is L.Dutch's criticism that there must be an energy source to generate flow. As Molot pointed out in the comments that energy can come from the rotational spin of the habitat. In fact, any nonrigid body will lose speed due to friction and turbulence in the gases and liquids inside it. What this means is that the atmosphere and hydrosphere inside the habitat will slow in relation to the rigid surface of the cylinder. Friction between the rigid and nonrigid will speed up the water and air and slow down the ring. This is a constant process that will gradually slow the ring. This means that at equilibrium between the two opposing frictional forces we can expect that on average the nonrigid components of the system will have a slightly longer rotational period than the rigid components. That is to say, the air and water will tend to have an anti-spinward velocity from an observer standing on the inner surface of the ring.
This effect is minor but I think it will be exacerbated by two additional forces. The first is a part of the Coriolis effect. If we look at HDE226868's answer we see that the Coriolis effect on our river is a vertical one, rather than a horizontal one, but because our river has a vertical extent this will still affect the flow of the river.
On Earth, a train going around the equator to the East (spinwards) is lighter than a train going West (anti-spinwards). This is due to the vertical component of the Coriolis effect called the Eötvös effect. Essentially, the centrifugal force of the Earth's spin acts against the pull of gravity and tries to fling us into space. Spinning faster increases this force and makes us even lighter while spinning more slowly reduces this force and makes us heavier.
On Earth, this effect is slight and only important for rocket launches and long-range artillery bombardment, but on our relatively small spinning habitat, the magnitude would be much larger. Now how does this apply to our river? Because our spinning cylinder is "inside-out" compared to the Earth the forces are reversed. Water moving in a spinward direction faster than the water around it will be effectively heavier and water moving in an anti-spinward fashion will be lighter. This means that spinward currents will sink and anti-spinward currents will rise. This will result in the surface of the river having a larger anti-spinward velocity than the bottom of the river. In this way, the Eötvös effect will exacerbate the perceived flow of the river from an observer on the surface.
The second effect is that of wind. I anticipate that the wind at the surface of the habitat will be primarily anti-spinward and that this, as a result, will act to pull the river further anti-spinward. My reasoning is as follows. All of the aforementioned effects are acting on the air of the habitat just as they act on the water. This means the air will also have a net anti-spinward velocity relative to the ring, with higher altitudes having larger anti-spinward velocities. Additionally, the heat cycle will play a role here. Hot air on the surface of the ring heated by the artificial sun will rise due to decreased density just as it does on Earth. However, here the Coriolis effect will deflect the rising air spinwards. In turn, the cool air from above that sinks to take the warm air's place will be moving anti-spinward. In this way, convection currents on the rotating habitat will create strong anti-spinward winds on the surface of the ring. The surface of any water will, therefore, be pushed in an anti-spinward direction by the wind.
These forces, the various frictions and Coriolis effects, will act together to cause the surface of a circular river to flow anti-spinward in an endless cycle powered by the rotational kinetic energy of the system which will be gradually lost to heat.
Mike Nichols
|
CommonCrawl
|
Proving non-regularity of $\{a^p \mid p \in \text{Prime} \}$ without pumping lemma
I was wondering whether it is possible to prove $\{a^p \mid p \in \text{Prime} \}$ is a non-regular language without using the pumping lemma. I'm having trouble choosing an alphabet that completes the proof using Myhill-Nerode and figuring out other methods to use generally.
automata finite-automata pumping-lemma
xskxzr
molocule
Let $L=\{a^p \mid p \in \text{Prime} \}$.
Let $m,n\in\Bbb N$, $m<n$. Let $s$ be the smallest prime number that is bigger than $(3n)! + 3n$.
$a^na^{s-n}=a^s\in L$.
$a^ma^{s-n}=a^{s-(n-m)}$. Note that $(3n)! + 2n < s - (n-m) < s $.
Any number between $(3n)! + 2n $ and $(3n)!+ 3n$ inclusive is not a prime number.
Any number between $(3n)!+3n$ and $s$ exclusive is not a prime number.
So $s-(n-m)$ is not prime, i.e., $a^ma^{s-n}\not\in L$.
The above shows that $a^m$ and $a^n$ represent different Myhill-Nerode equivalence classes. That means each word represents a distinct Myhill-Nerode equivalence class. Since there are infinitely many of them, $L$ cannot be regular.
John L.
$\begingroup$ To justify this answer, we can say (the proof of) Myhill–Nerode theorem does not use the (proof of) pumping lemma. In fact, it is reasonable to say the finiteness of Myhill-Nerode classes is easier to prove than the pumping lemma. $\endgroup$ – John L. Apr 11 '19 at 14:57
Consider a DFA which has a unary input alphabet. Every state has exactly one successor state. Without loss of generality, the trace of an input long enough will be $q_0 q_1 \dotsm q_k q_i$, where $q_0, q_1, \dotsc, q_k$ are all distinct states and $i \in \{ 0, \dotsc, k\}$. Hence, $q_i \dotsm q_k q_i$ is a cycle.
If the DFA's language is infinite, there must be an accepting state $q_j$ with $j \in \{ i, \dotsc, k \}$. Note we may assume $j > 1$ since otherwise we immediately obtain that $\varepsilon$ or $a$ is in the DFA's language (and neither has prime length). But then $a^{j + m(k - i + 1)}$ is accepted for all $m \in \mathbb{N}_0$. Since $j + j(k - i + 1) > j$ is not prime, the DFA's language contains strings with composite length (unless it is finite).
dkaeae
$\begingroup$ You've basically just proved the pumping lemma for unary alphabets, so I don't think this counts. $\endgroup$ – David Richerby Mar 11 '19 at 17:17
$\begingroup$ Well, not really. It might be in the spirit of the pumping lemma (or, rather, of the pumping lemma's proof), but it does not explicitly rely on it. I'll let the OP decide what counts and what doesn't. $\endgroup$ – dkaeae Mar 11 '19 at 19:44
Parikh's theorem (whose proof in this case is trivial) implies that if your language were regular, then the set of primes would be eventually periodic: there exist $n_0 \geq 0$ and $m \geq 1$ such that for $n \geq n_0$, $n$ is prime iff $n+m$ is prime. In particular, since the set of primes is infinite, it would have positive density. However, it is well-known that the set of primes has vanishing density (this follows from, but is much easier than, the prime number theorem).
Yuval Filmus
$\begingroup$ "Parikh's theorem (whose proof in this case is trivial)". However, that trivial proof is sort of including the proof of the pumping lemma (in this case). So it is dubious this answer counts. $\endgroup$ – John L. Apr 12 '19 at 3:51
Another weird way to prove it is using busy beavers and the prime gaps theorem:
suppose that you have a $DFA$ $A$ that accepts the primes $\{ a^p\mid p \text{ prime} \}$
given a state $q$ of $A$, you can build a Turing machine $M_{\langle A,q \rangle}$ that sequentially simulates $A$ starting from state $q$ on inputs $a^1, a^2, a^3, a^4,...$ until $A$ accepts some $a^k$ (or never halts)
let $|M_{\langle A,q \rangle}| = n$ be the size of such a Turing machine, and $BB(n)$ the maximum number of steps achievable by a halting Turing machine of size $n$ (uncomputable)
by the prime gaps theorem, there exists a prime $p_i$ such that $p_{i+1} - p_i \gg BB(n)$
$A$ accepts $a^{p_{i+1}}$, so let $q_i$ be the state of $A$ on input $a^{p_{i+1}}$ after $p_{i}$ steps (i.e. it has scanned the first part $a^{p_i}...$ and the head is at the beginning of the remaining part of the input $...a^{p_{i+1} - p_{i}}$)
so there exists $M_{\langle A,q_i\rangle}$ of size $n$ that by construction will run for a number of steps greater than $p_{i+1} - p_i \gg BB(n)$ and halt, contradicting the hypothesis that $BB(n)$ is the maximum number of steps achievable by a halting TM of size $n$
|
CommonCrawl
|
Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research
Miguel Angel Luque-Fernandez1,
Aurélien Belot1,
Manuela Quaresma1,
Camille Maringe1,
Michel P. Coleman1 &
Bernard Rachet1
BMC Medical Research Methodology volume 16, Article number: 129 (2016) Cite this article
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates \(x_i\) are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion.
We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion including a quasi-likelihood, robust standard errors estimation, negative binomial regression and flexible piecewise modelling.
All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value <0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models).
We showed that there were no major differences between methods. However, using flexible piecewise regression modelling, with either a quasi-likelihood correction or robust standard errors, was the best approach as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
In population-based cancer research, the relative survival setting is used because the cause of death is often not available or considered to be unreliable [1]. Therefore, the survival and the mortality associated with cancer are estimated by incorporating the information of the expected mortality from the general population (i.e. background mortality) obtained from national or regional life tables [1, 2]. The main advantage of the relative survival setting is that it provides a measure of patients survival and mortality associated with cancer without the need for information on the specific cause of death [1]. These measures of survival and mortality are known as the net survival and the excess mortality respectively [2–4]. When multivariable adjustment is of interest, the excess mortality can be modelled using piecewise exponential regression models [3, 5]. Piecewise exponential regression excess mortality (PEREM) models derive adjusted excess mortality rates accounting for the expected mortality of the background population [5, 6].
It has been shown that PEREM models can be fitted in the Generalized Linear Modelling (GLM) framework [3]. Using the GLM framework it is relatively easy to extend the models to deal with clustering, through either a random-effects model or by utilizing sandwich-type estimators for the standard errors (SE) [6–8]. To fit PEREM models, follow-up time is split into k intervals (e.g., yearly, monthly) and the person-time of follow-up \(y_k\) is introduced as an offset in the model, assuming that the excess mortality rate is constant within each interval but can vary arbitrarily between the intervals. Moreover, the usual assumption that the number of deaths (\(d_k\)) observed in interval k can be described by a Poisson distribution with rate parameter \(\lambda_{k}\,=\,\frac{d_{k}}{y_{k}}\) has been adapted to the relative survival setting [3].
The rate parameter \(\lambda_k\) is adapted to include the expected mortality of the general population under the relative survival setting
$$ \lambda_{k}^{+}\,=\,\frac{d_{k}-d_{k}^{*}}{y_{k}}\,=\,\frac{d^{+}_{k}}{y_{k}}, $$
where \(d_k\) and \(d^{*}_{k}\) are, respectively, the observed number of deaths and the expected number of deaths from the general population, and \(d^{+}_{k}\) is the excess number of deaths.
Thus, the log-likelihood for the PEREM model is based on the updated rate parameter, and the model is written as:
$$ ln\left(\lambda_{k}^{+}\right) = ln\left(\lambda^{+}_{0k}\right)\,+\,\mathbf{x}^{\mathrm{T}}\boldsymbol{\beta}, $$
where \(ln(\lambda_{k}^{+})\) is the logarithm of the excess mortality rate, \(\mathbf{x}^{\mathrm{T}}\) denotes the transpose of the vector of covariates \(\mathbf{x}\), and \(\boldsymbol{\beta}\) represents the corresponding parameter estimates.
Using (1), we can rewrite the rate parameter defined in (2) as:
$$ ln\left(d_{k}-d_{k}^{*}\right) = ln(y_{k})\,+\,ln\left(\lambda^{+}_{0k}\right)\,+\,\mathbf{x}^{\mathrm{T}}\boldsymbol{\beta}, $$
where \(ln(y_k)\) is the logarithm of the person-time at risk for the kth interval, incorporated in the model as an offset, and \(ln(\lambda^{+}_{0k})\) is the log of the baseline excess mortality rate [3, 6].
Using (1), we can rewrite the PEREM model in (3):
$$\frac{d_{k}}{y_{k}}\,=\,\frac{d^{*}_{k}}{y_{k}}\,+\,\lambda^{+}_{0k}\,\exp\left(\mathbf{x}^{\mathrm{T}}\boldsymbol{\beta}\right). $$
The model in (3) assumes constant rates over intervals of time and it may lead to overdispersion due to extra variability in the rate parameter (i.e. the variance is greater than the mean). The assumption that the conditional mean and variance of the rate parameter given covariates x are equal is strong and may fail to account for inherent or genuine overdispersion. The variance exceeds the mean generally because of positive correlation between variables or excess variation between rates [9].
Overdispersion in PEREM models is typically due to extra variability in the rate parameter (genuine overdispersion). However, other forms of non-genuine overdispersion may appear when the model omits important explanatory predictors, the data contain outliers, or the model fails to include enough interaction terms or non-linear functional forms between predictors and outcome. By contrast, no external remedy can be applied in the case of inherent or genuine overdispersion [10].
Fitting an overdispersed PEREM model leads to underestimating standard errors (SE) and therefore to the inappropriate interpretation of the conditional estimates of the covariates introduced in the model (i.e. a variable may be wrongly considered as a significant predictor).
Using an empirical example, we aim to take advantage of the relationship between the GLM framework and the PEREM model to apply a simple method to test and correct for overdispersion that could be easily implemented and used by population-based cancer researchers.
The presence of overdispersion can be recognized when the ratio of the Pearson χ2 (or deviance) statistic to the degrees of freedom is larger than one. However, a more formal statistical approach is required to test for the presence of inherent overdispersion and then to correct for it [11].
Testing overdispersion in PEREM models
A regression-based score test enables us to evaluate whether the variance is equal to the mean (Var(λ+) = E(λ+)) or exceeds it by a term proportional to the squared mean [11]:
$$ Var\left(\lambda^{+}\right)\,=\,E\left(\lambda^{+}\right)\,+\,\alpha\,E\left(\lambda^{+}\right)^{2}, $$
We first calculate the score statistic (Z) to test H0: α=0 against H1: α > 0, using the fitted values of the excess mortality rate \(\widehat {\lambda ^{+}}\) [11–13]:
$$Z\,=\,\displaystyle\sum_{i=1}^{N}\displaystyle\sum_{k=1}^{M}\left(\frac{\left(\lambda^{+}_{ik}-\widehat{\lambda_{ik}^{+}}\right)^{2}\,-\,\lambda^{+}_{ik}}{\widehat{\lambda_{ik}^{+}}}\right), $$
where \(\lambda^{+}_{ik}\,=\,\frac{d^{+}_{ik}}{y_{ik}}\); substituting \(\lambda^{+}_{ik}\) and \(\widehat{\lambda_{ik}^{+}}\) gives:
$$ Z\,=\,\displaystyle\sum_{i=1}^{N}\displaystyle\sum_{k=1}^{M}\left(\frac{\left(\frac{d^{+}_{ik}}{y_{ik}}\,-\, \frac{\widehat{d^{+}_{ik}}}{y_{ik}}\right)^{2}\,-\,\frac{d^{+}_{ik}}{y_{ik}}}{\frac{\widehat{d^{+}_{ik}}}{y_{ik}}}\right). $$
The test is implemented by a linear regression of the generated dependent variable Z on \(\widehat{\lambda_{ik}^{+}}\) (independent variable), without including an intercept term. Hence the output can be interpreted as a t-test of whether the coefficient of \(\widehat{\lambda_{ik}^{+}}\) is zero, i.e. whether the variance of the rate parameter is equal to the mean [12].
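As an illustration of how this test could be implemented, the sketch below uses Python and statsmodels; the data frame df (with the excess number of deaths d_excess and the person-time y for each time-split record) and the design matrix X are assumed to exist, and, for simplicity, the excess deaths are modelled with an ordinary Poisson GLM and a log person-time offset rather than the exact relative-survival link of equation (3).

import numpy as np
import statsmodels.api as sm

# Fit a Poisson GLM to the excess number of deaths with a log(person-time) offset
# (a simplified stand-in for the full PEREM model; all names are illustrative)
model = sm.GLM(df["d_excess"], X,
               family=sm.families.Poisson(),
               offset=np.log(df["y"]))
fit = model.fit()

rate_obs = df["d_excess"] / df["y"]            # observed excess mortality rate
rate_hat = fit.fittedvalues / df["y"]          # fitted excess mortality rate

# Generated dependent variable of equation (5), one value per record
z = ((rate_obs - rate_hat) ** 2 - rate_obs) / rate_hat

# Regression of z on the fitted rate without an intercept: the t-test on the
# slope is the score test of H0: alpha = 0 (no overdispersion)
score_fit = sm.OLS(z, rate_hat).fit()
print(score_fit.params, score_fit.pvalues)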
Correcting for overdispersion
The most commonly used approaches to correct for inherent overdispersion are relatively straightforward to implement in common statistical software.
Quasi-likelihood approach
Inherent overdispersion in PEREM modeling may be due to extra variability in the parameter \(\lambda_{ik}^{+}\,=\,\frac{d_{ik}-d_{ik}^{*}}{y_{ik}}\). Including an extra parameter ϕ in the model allows the variance to vary freely from the mean [14]. There are several options to compute the extra parameter ϕ. The simplest is to take f(λ+,ϕ) = ϕ×λ+, which specifies a constant proportional overdispersion ϕ across all individuals. Using a PEREM modeling approach, we assume that the distribution of \(\lambda_{ik}^{+}\) is Poisson. Hence the Pearson Chi-squared statistic can be computed as a criterion of goodness of fit using the observed (O) and expected values (E) from the model:
$$\chi^{2}\,=\,\displaystyle\sum_{i=1}^{n}\frac{(O_{i}-E_{i})^{2}}{E_{i}}. $$
Substituting O and E by \(\lambda ^{+}_{ik}\) and \(\widehat {\lambda _{ik}^{+}}\) gives:
$$\chi^{2}\,=\,\displaystyle\sum_{i=1}^{N}\displaystyle\sum_{k=1}^{M}\left(\frac{\left(\frac{d^{+}_{ik}}{y_{ik}}\,-\, \frac{\widehat{d^{+}_{ik}}}{y_{ik}}\right)^{2}}{\frac{\widehat{d^{+}_{ik}}}{y_{ik}}}\right). $$
The ratio between the Pearson χ2 or deviance statistic and the degrees of freedom should be close to one, as we expect the variance of the model to equal that of the assumed Poisson distribution. We can estimate the overdispersion parameter ϕ by dividing the Pearson χ2 statistic by the degrees of freedom (df) of the model.
Scaling the SE with \(\sqrt{\hat\phi}\,=\,\sqrt{\chi^{2}/df}\) will correct the estimated SE of \(\hat{\beta}\), which was estimated under the model of constant overdispersion [15, 16]. The estimated \(\hat\phi\) is integrated as a scalar updating the variance-covariance matrix of the PEREM model estimated under the GLM framework, thus correcting for overdispersion [17]. Under the GLM framework, \(\hat{\beta}\) and the SE of \(\hat{\beta}\) are estimated via an iteratively reweighted least squares procedure [11, 17]. Therefore, scaling the SE of \(\widehat{\beta}\), in matrix notation, is given by [17]
$$\text{Variance}(\hat{\beta})\,=\,\hat{\phi}\,\left(\mathbf{X}^{T}\mathbf{WX}\right)^{-1}, $$
where X represents the n × p design matrix of the observed data and W is a diagonal n × n matrix with the values of \(\widehat{\lambda^{+}}=\exp(\mathbf{x}^{\mathrm{T}}\boldsymbol{\beta})\) on the diagonal. Thus, the variance is updated with the new values of the weighted matrix under the assumption of no specific probability distribution [14].
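A minimal sketch of this correction in Python/statsmodels, reusing the objects model and fit from the sketch in the previous section (names remain illustrative), is:

import numpy as np

phi_hat = fit.pearson_chi2 / fit.df_resid      # phi estimated as Pearson X^2 / df
se_scaled = fit.bse * np.sqrt(phi_hat)         # scaled standard errors of beta

# Equivalently, statsmodels can apply the Pearson-based scale when refitting:
# fit_quasi = model.fit(scale="X2")
print(phi_hat)
print(se_scaled)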
Robust standard errors of parameters estimates
In maximum likelihood estimation, the standard errors of the estimated parameters are derived from the Hessian (the matrix of second derivatives with respect to the parameters) of the likelihood. However, these standard errors are correct only if the likelihood is the true likelihood of the data [14]. In cases where we consider that overdispersion might be due to unobserved covariates, or where the link function or the probability distribution is misspecified, the assumption about the true likelihood of the data does not hold. Under these scenarios, we can still use robust estimates of the standard error, known as Huber, White, or sandwich variance estimates, to correct for overdispersion (Additional file 1) [18–20]:
$$\text{Variance}(\hat{\beta})\,=\,(\mathbf{X}^{T}\mathbf{WX})^{-1}(\mathbf{X}^{T}\mathbf{\Sigma X})(\mathbf{X}^{T}\mathbf{WX})^{-1}, $$
where Σ is an n × n matrix with the values of \((\lambda ^{+}\,-\,\widehat {\lambda ^{+}})^{2}\) on the diagonal.
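A minimal numpy sketch of this sandwich estimator, added for illustration (X, lam and lam_hat are assumed to be the design matrix, the observed excess rates, and the fitted rates from a previously fitted model):

```python
import numpy as np

def sandwich_variance(X, lam, lam_hat):
    """Huber/White sandwich covariance (X'WX)^-1 (X'SigmaX) (X'WX)^-1.

    X       : (n, p) design matrix
    lam     : observed excess rates  (lambda^+)
    lam_hat : fitted excess rates    (lambda^+ hat), used as the GLM weights
    """
    W = np.diag(lam_hat)                      # diagonal weight matrix exp(x' beta)
    Sigma = np.diag((lam - lam_hat) ** 2)     # squared raw residuals on the diagonal
    bread = np.linalg.inv(X.T @ W @ X)
    meat = X.T @ Sigma @ X
    return bread @ meat @ bread
```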
Negative binomial regression model
The presence of heterogeneity in subject-specific rates leads naturally to the question of whether we can model them within a random-effects framework. The simplest random-effects model assumes that person-to-person heterogeneity can be expressed by a model for the mean together with a log-Gamma distribution for the random intercept term. With a log-Gamma random intercept, the marginal distribution of the outcome follows a negative binomial distribution with two parameters, shape (E(λ+)) and scale (Var(λ+)); importantly, its variance and mean are related by (4), where the parameter α must be positive, allowing the variance of λ+ to be greater than the mean [11]. The Poisson distribution is the special case of the negative binomial distribution where α = 0 [11]. We can estimate α as the coefficient from a linear regression of Z (5) (dependent variable) on \(\widehat {\lambda ^{+}}\) (independent variable), without an intercept term, as described above [12].
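The sketch below illustrates this two-step idea on simulated overdispersed counts (an editorial example assuming statsmodels; the article's analysis was carried out in Stata, and the data here are synthetic):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
X = sm.add_constant(rng.normal(size=(300, 2)))
mu = np.exp(X @ np.array([0.5, 0.3, -0.2]))
y = rng.poisson(mu * rng.gamma(2.0, 0.5, size=300))        # overdispersed counts

pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu_hat = pois.fittedvalues

# Auxiliary regression (no intercept): the slope estimates alpha in Var = mu + alpha*mu^2
Zaux = ((y - mu_hat) ** 2 - y) / mu_hat
alpha_hat = max(float(sm.OLS(Zaux, mu_hat).fit().params[0]), 1e-6)   # alpha must be positive

nb_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=alpha_hat)).fit()
print(alpha_hat, nb_fit.params, nb_fit.bse)
```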
Flexible PEREM model
The piecewise exponential regression model under the GLM and relative survival frameworks can be extended by finely splitting the time scale and using a flexible function of time, such as splines [21, 22]. Flexible PEREM models allow modelling the baseline hazard and any time-dependent effects as smooth, continuous functions [6]. A time-dependent effect is easily modelled by including an interaction term between the smooth function of time and the covariate [23]. Cubic regression splines are a very popular choice for modelling flexible functions. In a truncated power basis, a cubic regression spline s(t) of time t, with K knots located at different places of the distribution of the smooth function of time, can be written as [5]:
$$s(t)\,=\,\sum_{j=0}^{3}\,\beta_{0j}t^{j}\,+\,\sum_{i=1}^{K}\beta_{i3}(t\,-\,k_{i})^{3}_{+}, $$
$$(\text{t} \,-\,k_{i})^{3}_{+}\,=\, \left\{\begin{array}{ll} (\text{t} \,-\, k_{i})^{3} & \quad \text{if t} > k_{i},\\ 0 & \quad \text{otherwise}.\\ \end{array}\right. $$
In order to deal with high variance at the outer range of the predictors, the splines may be forced (restricted) to be linear before the first knot and after the last knot, leading to a natural or restricted cubic spline [23]. The first and the last knots are known as boundary knots [24, 25]. If we define m interior knots, \(k_{1},\ldots ,k_{m}\), and two boundary knots, \(k_{min}\) and \(k_{max}\), we can now write s(t) as a function of parameters γ and some newly created variables \(z_{1},\ldots ,z_{m+1}\), giving [5]:
$$s(t)\,=\, \gamma_{0}\,+\,\gamma_{1}z_{1}\,+\,\gamma_{2}z_{2}\,+\,\ldots\,+\,\gamma_{m+1}z_{m+1}, $$
The basis functions \(z_{j}\) (j = 2,…,m+1) are derived as follows:
$$\begin{array}{*{20}l} z_{1}\,&=\,t,\\ z_{j}\,&=\,(t - k_{j})^{3}_{+} - \lambda_{j}(t - k_{min})^{3}_{+} - \left(1 - \lambda_{j}\right)(t - k_{max})^{3}_{+},\\ \lambda_{j}\,&=\,\frac{k_{max} - k_{j}}{k_{max} - k_{min}}. \end{array} $$
These functions can be easily implemented using various Stata commands (e.g., rcsgen) [5, 6]. The flexible PEREM approach using splines makes it easy to model non-proportional excess mortality rate ratios, including time-dependent effects of the covariates. Thus, we can achieve a better model specification, which should minimize non-genuine overdispersion [25]. However, we can still scale the SE estimates in case of inherent overdispersion previously detected using the suggested regression-based score test.
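A rough Python equivalent of such a basis-generating command is sketched below (an editorial illustration of the z_j formulas above, not a reimplementation of rcsgen; the knot placement and data are hypothetical):

```python
import numpy as np

def rcs_basis(t, knots):
    """Restricted (natural) cubic spline basis in truncated power form.

    t     : 1-D array of follow-up/event times
    knots : increasing knot locations; the first and last act as boundary knots
    """
    t = np.asarray(t, dtype=float)
    k_min, k_max = knots[0], knots[-1]
    cols = [t]                                            # z_1 = t
    for k_j in knots[1:-1]:                               # one basis term per interior knot
        lam_j = (k_max - k_j) / (k_max - k_min)
        z_j = (np.clip(t - k_j, 0, None) ** 3
               - lam_j * np.clip(t - k_min, 0, None) ** 3
               - (1 - lam_j) * np.clip(t - k_max, 0, None) ** 3)
        cols.append(z_j)
    return np.column_stack(cols)

# Five knots at the minimum, 25th/50th/75th centiles and maximum of the event times
times = np.random.default_rng(3).exponential(scale=2.0, size=500)
knots = np.percentile(times, [0, 25, 50, 75, 100])
B = rcs_basis(times, knots)          # columns z_1 ... z_{m+1} for the linear predictor
print(B.shape)
```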
Data were obtained from the Office for National Statistics (ONS), comprising 376,791 women diagnosed with breast cancer in England between 1997 and 2005, with follow-up to the end of 2012. The event of interest is death from any cause, with follow-up restricted to 7 years after diagnosis, although we estimated excess mortality only up to 5 years [21]. We built life tables for England to derive the expected mortality in the background population, by sex, single year of age, calendar year, and deprivation quintile. We aimed to estimate the excess mortality hazard rate by age and deprivation in the first five years after the diagnosis of breast cancer. Legal authority to hold the cancer data derives from a contract with the ONS to produce the official national statistics on cancer survival.
First, we split the times-to-event to merge the cancer data with the estimated expected number of deaths from all causes using life tables from England [26]. Then, we fitted four types of PEREM models: in model A we did not correct for overdispersion, in model B we scaled the SEs by \(\sqrt {\widehat {\phi }}\), in model C we used the sandwich estimates of the SEs, and in model D we fitted an NBR assuming a log-gamma distribution. All models were fitted within the GLM framework with the Poisson family and the modified link \(\left (\ln \left (d_{ik}\,-\,d^{*}_{ik}\right)\right)\). The modified log link was used to incorporate into the maximum likelihood estimation the expected number of deaths (d∗) from the background population [3, 5].
Deprivation was included in all PEREM models as a categorical variable, with Q1, the least deprived group, as the reference category. Age was included as a categorical variable with five levels (<50, 50-59, 60-69, 70-79, ≥80) using <50 as reference. Follow-up time was parameterized as a categorical variable in PEREM models and as a smooth function of time for the flexible PEREM models. We reported \(\widehat {\beta }\), \(\text {var}(\hat \beta)\), and the relative loss in efficiency (RLE) of \(\text {var}(\widehat \beta)\). To estimate RLE for each PEREM model corrected for overdispersion, the model not corrected for overdispersion was the reference [27]. The RLE was computed as the ratio between the variance of the estimates from the models adjusted for overdispersion and the variance from the uncorrected model
$$ \text{RLE}(\text{var}(\hat{\beta_{2}}),\text{var}(\hat{\beta_{1}}))\,=\,\frac{\text{var}(\hat{\beta}_{2})}{\text{var}(\hat{\beta}_{1})}, $$
where var(\(\hat {\beta _{2}}\)) refers to the variance estimate corrected for overdispersion (by scaling the SE or using the sandwich robust estimates) and var(\(\hat {\beta _{1}}\)) to the uncorrected one. The RLE was interpreted as the percentage of efficiency loss (% increase in the variance estimate) that the PEREM models incur when correcting for overdispersion in order to reduce bias.
Finally, we fitted a flexible PEREM model which included an interaction between deprivation quintiles and follow-up time to allow the effect of deprivation to vary over time. The baseline rate was defined as a restricted cubic spline, with one-month intervals and five knots placed at the minimum and maximum and at the 25th, 50th, and 75th centiles of the event times. For this flexible PEREM model, we plotted the excess mortality rate ratios and 95% CIs for each quintile of deprivation, with SEs corrected and uncorrected for overdispersion [6, 23, 28]. All analyses were performed using Stata v.14 (StataCorp LP, College Station, Texas, US); see Additional file 2.
The standardized Pearson χ2 residuals were non-normally distributed for the uncorrected PEREM model (Shapiro-Wilk test for normality, p-value = 0.01) [29], and the overdispersion parameter (ϕ) was 21.3, i.e., the variance was about 21 times larger than that implied by the Poisson assumption, suggesting the presence of overdispersion. The score test for overdispersion rejected H0 (p-value <0.001), indicating genuine inherent overdispersion in the rate parameter, and the scatter plot of the standardized Pearson χ2 residuals against the excess mortality rates suggested the presence of heteroscedasticity and hence potential overdispersion (Fig. 1).
Piecewise exponential regression excess mortality model: standardized Pearson χ2 residual analysis, n= 376,791 women diagnosed with breast cancer in England between 1997 and the end of 2005
Table 1 contrasts the exponentiated coefficients, SE, and RLE for the four different PEREM models, uncorrected (model A) or corrected for the presence of inherent overdispersion (models B, C with ϕ parameter = 21.3) and model D adjusted for overdispersion using the NBR approach.
Table 1 Piecewise exponential regression excess mortality models with and without correcting for overdispersion, n = 376,791 women diagnosed with breast cancer in England between 1997 and the end of 2005
Model A, with the least conservative SEs, showed a significant excess mortality rate ratio for each of the four deprivation quintiles (compared with the first quintile). The models corrected for overdispersion (B, C, and D) provided more conservative estimates of the SEs. After accounting for overdispersion, deprivation showed a significant excess mortality, compared to Q1, only for quintiles Q3-Q5 in models B and D, and for quintiles Q4 and Q5 in model C (Table 1). Compared with the unadjusted model A, all corrected models showed a non-significant effect for the age groups 50-59 and 60-69. Overall, the RLE ranged between 12 and 46 percent for the corrected models compared with the model uncorrected for overdispersion. The RLE was, however, larger for model C (robust SE). Model D (NBR) showed the smallest RLE compared with models B (scaled SE) and C. The loss of precision in the models corrected for overdispersion was reflected by the loss of statistical significance for the age groups 50-59 and 60-69 and the deprivation quintiles Q1 and Q2. However, scaling the SE to control for overdispersion (model B) showed better efficiency (a smaller RLE) than the robust SE estimation (model C) (Table 1).
Finally, the flexible PEREM model showed a smaller overdispersion parameter (ϕ = 3.2). The test for overdispersion still indicated significant inherent overdispersion (p-value <0.001). The flexible PEREM model thus substantially reduced the overdispersion parameter compared with the models without smooth functions of time (21.3 vs. 3.2). Allowing for a time-dependent effect of deprivation revealed a decreasing trend of the excess mortality over time during the first five years after the diagnosis of breast cancer. Furthermore, the interaction between the smooth function of time and deprivation showed a strong effect of deprivation over time, illustrated by excess mortality rate ratios from 8 down to 4 times higher for the most deprived group compared with the least deprived (Fig. 2).
Flexible piecewise exponential regression model: A (non-scaled SE) B (robust SE), n = 376,791 women diagnosed with breast cancer in England between 1997 and the end of 2005
We have shown that under the relative survival and GLM frameworks, the modified link to fit PEREM models, allows the inclusion in the maximum likelihood estimation of the information regarding the background mortality of the reference population [3]. However, data analysts may expect to find inherent overdispersion as a characteristic of this modelling approach [30].
We have shown that inappropriate imposition of the Poisson restriction may produce spuriously small SEs of the estimated coefficients \(\hat {\beta }\). Fitting an overdispersed PEREM model under the relative survival and GLM frameworks may lead to underestimated SEs and, therefore, to inappropriate statistical interpretation of the significance of the conditional estimates of the effects of the covariates included in the model (i.e., a variable, or the levels of a categorical variable, may appear to be a significant predictor of the outcome when in fact it is not).
We encourage epidemiologists and applied statisticians using PEREM models under the relative survival framework to consider testing the Poisson restriction and relaxing it, if appropriate, using the methodological approaches described in this article. In addition to cancer, the same advice applies to any other chronic disease or condition for which estimates of disease-specific population-based survival time controlling for competing risks are of interest. We have shown that, using a simple test for overdispersion, we can identify significant inherent overdispersion, and that by applying pseudo-likelihood estimation, fitting an NBR, or using a more advanced flexible PEREM modelling approach, we can correct for it. These simple approaches may allow applied researchers in population-based cancer registries to draw correct conclusions from the analysis of their data in the presence of significant overdispersion. Applied researchers will have to consider the trade-off between modelling complexity and model interpretation, as there may be no reason to apply a more advanced flexible PEREM model given a non-significant overdispersion test. This rarely happens, however, as under the relative survival framework we may expect the presence of overdispersion due to the variability of the rate parameter. Furthermore, in the case of a significant overdispersion test, applied researchers will have to consider the compromise in model efficiency (i.e., the precision of the SE) when deciding which method to use to deal with overdispersion. As suggested by our results, the flexible PEREM model showed the smallest loss in precision.
We suggest scaling the SE to correct for overdispersion due to the variability of the rate parameter when the overdispersion test is significant and the overdispersion parameter ϕ is small. However, it should be noted that our results regarding the RLE are based on a single empirical dataset. Hence, further investigation using simulations is warranted.
Maximum likelihood methods are based on strong distributional assumptions, while quasi-likelihood or maximum likelihood methods with robust SEs rely on weaker assumptions. Furthermore, using a flexible parametric approach including time-dependent effects allows for a better model specification, decreasing overdispersion. We suggest testing for the presence of inherent overdispersion in the data and correcting for it using any of the approaches presented in this article. Given that there are no major differences between the methods described above, the question is not which method to use (robust SE, scaled SE, or NBR) in the presence of inherently overdispersed data, but to use any of them to correct for overdispersion and to draw correct conclusions from the models. We have also shown the benefits of using the flexible PEREM modelling approach with either scaled SE, robust SE, or NBR, under the GLM and relative survival frameworks. The benefits of flexible PEREM modelling are twofold, as it deals with both model misspecification and overdispersion. The introduction of smooth functions of time and time-dependent effects in flexible PEREM models may improve the model specification, substantially reducing the overdispersion parameter.
In population-based cancer research, PEREM models are used to estimate the excess mortality rate from cancer under the relative survival framework. We have shown the impact of overdispersion on the excess mortality rate estimates by deprivation among women diagnosed with breast cancer in England between 1997 and the end of 2005. PEREM models are fitted under the assumption of a Poisson distribution, which can lead to overdispersion. We have shown that inappropriate imposition of the Poisson restriction may produce spuriously small estimated standard errors and, thus, wrong interpretation of the model estimates. Given the public health relevance of population-based data analyses for policy and decision making, it is desirable to test for overdispersion and to correct for it if appropriate.
GLM:
Generalized linear model
NBR:
Negative binomial regression
O:
Observed value
ONS:
Office for National Statistics
PEREM:
Piecewise exponential regression excess mortality model
RLE:
Relative loss of efficiency
Estève J, Benhamou E, Croasdale M, Raymond L. Relative survival and the estimation of net survival: elements for further discussion. Stat Med. 1990; 9(5):529–38.
Perme MP, Stare J, Estève J. On estimation in relative survival. Biometrics. 2012; 68(1):113–20. doi:10.1111/j.1541-0420.2011.01640.x.
Dickman PW, Sloggett A, Hills M, Hakulinen T. Regression models for relative survival. Stat Med. 2004; 23(1):51–64. doi:10.1002/sim.1597.
Mariotto AB, Noone AM, Howlader N, Cho H, Keel GE, Garshell J, Woloshin S, Schwartz LM. Cancer survival: an overview of measures, uses, and interpretation. J Natl Cancer Inst Monographs. 2014; 2014(49):145–86.
Dickman PW, Coviello E. Estimating and modelling relative survival. Stata J. 2015; 15(1):186–215.
Royston P, Lambert PC, et al. Flexible parametric survival analysis using stata: beyond the cox model. College Station, Texas: Stata Press Books; 2011, p. 347.
Hardin JW. The sandwich estimate of variance. Adv Econ. 2003; 17:45–74.
Dupont C, Bossard N, Remontet L, Belot A. Description of an approach based on maximum likelihood to adjust an excess hazard model with a random effect. Cancer Epidemiol. 2013; 37(4):449–56. doi:10.1016/j.canep.2013.04.001.
Hardin JW, et al. The robust variance estimator for two-stage models. Stata J. 2002; 2(3):253–66.
Rao CR, Miller JP, Rao DC, Vol. 27. Handbook of statistics: epidemiology and medical statistics. Amsterdam: North Holland; 2007, p. 870.
Hardin JW, Hilbe JM, Hilbe J. Generalized linear models and extensions. College Station, Texas: Stata Press Books; 2007, p. 387.
Cameron AC, Trivedi PK. Regression-based tests for overdispersion in the Poisson model. J Econ. 1990; 46(3):347–64.
Cameron AC, Trivedi PK. Econometric models based on count data. comparisons and applications of some estimators and tests. J Appl Econ. 1986; 1(1):29–53.
Rabe-Hesketh S, Everitt B. Handbook of statistical analyses using stata, Fourth edition. USA: Chapman and Hall/CRC; 2007, p. 342.
Aitkin M. A general maximum likelihood analysis of overdispersion in generalized linear models. Stat Comput. 1996; 6(3):251–62.
Guisan A, Edwards TC, Hastie T. Generalized linear and generalized additive models in studies of species distributions: setting the scene. Ecol Model. 2002; 157(2):89–100.
Faraway JJ. Extending the linear model with r: generalized linear, mixed effects and nonparametric regression models, Second edition. USA: Chapman and Hall/CRC; 2016, p. 399.
Rabe-Hesketh S, Skrondal A. Multilevel and longitudinal modeling using stata. College Station, Texas: Stata Press Books; 2008, p. 562.
Huber PJ. The behavior of maximum likelihood estimates under nonstandard conditions. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Statistics. Berkeley: University of California Press; 1967, pp. 221–33. http://projecteuclid.org/euclid.bsmsp/1200512988.
White H. A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica: J Econ Soc. 1980; 48(4):817–38.
Remontet L, Bossard N, Belot A, Esteve J. An overall strategy based on regression models to estimate relative survival and model the effects of prognostic factors in cancer survival studies. Stat Med. 2007; 26(10):2214–28.
Durrleman S, Simon R. Flexible regression models with cubic splines. Stat Med. 1989; 8(5):551–61.
Lambert PC, Royston P. Further development of flexible parametric models for survival analysis. Stata J. 2009; 9(2):265.
De Boor C. A practical guide to splines. Math Comput. 1978; 27:348.
James G, Witten D, Hastie T, Tibshirani R. An introduction to statistical learning. New York: Springer: 2013. p. 426.
Coleman MP, Rachet B, Woods LM, Mitry E, Riga M, Cooper N, Quinn MJ, Brenner H, Estève J. Trends and socioeconomic inequalities in cancer survival in England and Wales up to 2001. Br J Cancer. 2004; 90(7):1367–73. doi:10.1038/sj.bjc.6601696.
Rao CR. Efficient estimates and optimum inference procedures in large samples. J R Stat Soc Ser B Methodol. 1962; 24(1):46–72.
Royston P, Sauerbrei W. Multivariable modeling with cubic regression splines: a principled approach. Stata J. 2007; 7(1):45.
Royston P. A simple method for evaluating the Shapiro-Francia W' test for non-normality. Statistician. 1983; 32:297–300.
McCullagh P, Nelder JA, Vol. 37. Generalized linear models, Second edition. USA: Chapman and Hall/CRC. Monographs on Statistics and Applied Probability; 1989, p. 532.
The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication. We appreciate Dr. Francisco Rubio's comments and thorough review of the algebraic expressions.
This work was supported by Cancer Research UK grant number C7923/A18525. The findings and conclusions in this report are those of the authors and do not necessarily represent the views of Cancer Research UK.
Stata code is provided as a supplement of the article.
MALF developed the concept and design of the study, analyzed the data, and wrote the manuscript. All authors interpreted the data, drafted and revised the manuscript critically. All authors read and approved the final version of the manuscript. MALF is the guarantor of the paper.
Approval to analyse the data including the consent to participate was obtained from the ONS Medical Research Service (MR1101, Nov 20, 2007) and from the statutory Patient Information Advisory Group (PIAG; now the Ethics and Confidentiality Committee of the National Information Governance Board) under Section 61 of the Health and Social Care Act 2001 (PIAG 1-05(c)/2007,July 31, 2007).
Department of Non-Communicable Disease Epidemiology, Faculty of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, Cancer Survival Group, Keppel Street, London, WC1E 7HT, UK
Miguel Angel Luque-Fernandez, Aurélien Belot, Manuela Quaresma, Camille Maringe, Michel P. Coleman & Bernard Rachet
Miguel Angel Luque-Fernandez
Aurélien Belot
Manuela Quaresma
Camille Maringe
Michel P. Coleman
Bernard Rachet
Correspondence to Miguel Angel Luque-Fernandez.
Additional file 1
Robust standard error estimation for generalized linear models. (PDF 104 kb)
Stata do file with commented syntax. (PDF 40 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Luque-Fernandez, M.A., Belot, A., Quaresma, M. et al. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research. BMC Med Res Methodol 16, 129 (2016). https://doi.org/10.1186/s12874-016-0234-z
Epidemiologic methods
Proportional hazard models
Data analysis, statistics and modelling
n equations with n+1 unknowns!
I know: an equation with 2 unknowns has infinitely many solutions, which lie on a line. Two equations with three unknowns again have infinitely many solutions, which lie on a line (the intersection of 2 planes), and so on....
What about n equations with n+1 unknowns? I know they have infinitely many solutions which lie on a line again, but how can I write a general equation for that line? I want to find a point on the line, but I need to know the equation of that line first :)
linear-algebra
Harald Hanche-Olsen
Mahdieh
$\begingroup$ Hmmm...how many solutions do you think the following system (two equations, three variables, as you mention) has?: $$\begin{cases}x+y+z=0\\x+y+z=1\end{cases}$$ $\endgroup$ – DonAntonio Apr 16 '17 at 13:11
$\begingroup$ But this is different... these two planes do not intersect, so there is no solution. One of them has no intercept but the other has! I think I should have written homogeneous equations... $\endgroup$ – Mahdieh Apr 16 '17 at 13:30
$\begingroup$ Ah, homogeneous is a huge difference from what you wrote in your question... $\endgroup$ – DonAntonio Apr 16 '17 at 13:34
$\begingroup$ I am really sorry. Actually I am a chemistry student and not completely familiar with this... $\endgroup$ – Mahdieh Apr 16 '17 at 13:36
$\begingroup$ The solution to the following system of equations is not a line: $$\begin{cases}x+y+z=0\\2x+2y+2z=0\end{cases}$$ $\endgroup$ – Hurkyl Apr 16 '17 at 14:59
It's a bit difficult to write a good answer without including a complete introduction to linear algebra, but I'll try to outline the main ideas anyhow.
Generically, you're right that $n$ equations in $n+1$ unknowns determine a line. Consider the homogeneous equations $$ a_{i,0}x_0+a_{i,1}x_1+\cdots+a_{i,n}x_n=0,\qquad i=1,2,\ldots,n. $$ Take any non-zero solution $(X_0,X_1,\ldots,X_n)$ to this system. Then any multiple of this solution is again a solution, so the line parametrized by $(x_0,x_1,\ldots,x_n)=(tX_0,tX_1,\ldots,tX_n)$ consists of solutions to the system of equations. Generically (there's that word again), this line contains all solutions to the system.
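(Editorial note: a quick numerical way to obtain such a non-zero solution $(X_0,\ldots,X_n)$ is to take a null-space direction of the coefficient matrix, e.g. via the SVD; the matrix below is just an illustrative example, not taken from the question.)

```python
import numpy as np

# n = 2 homogeneous equations in n + 1 = 3 unknowns (a generic example)
A = np.array([[1.0, -1.0,  1.0],
              [0.0,  1.0, -1.0]])

# For a generic n x (n+1) matrix the null space is one-dimensional: a line
# through the origin.  The last right singular vector spans it.
_, _, Vt = np.linalg.svd(A)
direction = Vt[-1]

t = np.linspace(-2.0, 2.0, 5)
line_points = np.outer(t, direction)     # (t*X0, t*X1, t*X2) for several values of t

print("direction:", direction)
print("A @ direction:", A @ direction)   # ~ [0, 0], so every t*direction solves the system
```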
The exceptions to this are when the equations are linearly dependent, as in this example: $$ \begin{aligned} x-y+z&=0\\y-z+w&=0\\x+w&=0 \end{aligned} $$ Here, the last equation is the sum of the first two. Linear dependence is a bit more general than this, but the point is that, if the equations are linearly dependent, you get more solutions than just the line.
In a sense, linear dependence happens rarely. That is why I wrote generically above, twice. There are different ways to make this notion precise, but for applications to science, we may note that if the coefficients $a_{i,j}$ are drawn at random from a continuous probability distribution, there is zero probability that the resulting equations are linearly dependent. That does not mean that linear dependence doesn't happen in scientific applications! But when it does, it is a safe bet that this is because of some symmetry of the problem at hand, or a dependence of variables dictated by an underlying physical law. (Almost linear dependence is a different matter; not uncommonly, it creates trouble for numerical methods, with possibly catastrophic consequences for the accuracy of computed answers.)
Harald Hanche-Olsen
$\begingroup$ You may want to add the word "homogeneous" after "Generically, you're right that $\;n\;$ ..." , in the third line. Otherwise the comments already contain a counterexample to this claim. $\endgroup$ – DonAntonio Apr 16 '17 at 20:41
$\begingroup$ @DonAntonio With the reservation of genericity, I think my claim is just fine. After all, a generic linear map $\mathbb{R}^{n+1}\to\mathbb{R}^n$ is surely surjective? (Also, I don't want to bump this to the front page unless absolutely necessary.) $\endgroup$ – Harald Hanche-Olsen Apr 17 '17 at 8:07
$\begingroup$ Perhaps you're using the term "generic" in some way I just don't know. Anyway, lots and lots of linear maps $\;\Bbb R^{n+1}\to\Bbb R^n\;$ aren't surjective, of course, which in matrix language means that there are lots (=infinitely many) matrices of order $\;n\times (n+1)\;$ whose rank is less than $\;n\;$ ... $\endgroup$ – DonAntonio Apr 17 '17 at 8:14
$\begingroup$ @DonAntonio To be technical, I think of "generic" as meaning "true in an open, dense set". But there is also a measure theoretic version, summarized as "almost certainly true" – in this case, assuming a continuous probability distribution on the parameters of the problem. But please note my lengthy final paragraph, in which I warn against this lulling you into a false sense of believing that linear dependence never happens! (Possibly, this confuses the OP more than it helps. I don't know.) $\endgroup$ – Harald Hanche-Olsen Apr 17 '17 at 9:05
March 2016, 6(1): 1-25. doi: 10.3934/mcrf.2016.6.1
Local feedback stabilisation to a non-stationary solution for a damped non-linear wave equation
Kaïs Ammari 1, , Thomas Duyckaerts 2, and Armen Shirikyan 3,
Département de Mathématiques, Faculté des Sciences de Monastir, Université de Monastir, 5019 Monastir, Tunisia
LAGA (UMR 7539), Institut Galilée, Université Paris 13, 99, avenue Jean-Baptiste Clément, 93430 Villetaneuse
Département de Mathématiques, Université de Cergy-Pontoise, UMR CNRS 8088, 2 avenue Adolphe Chauvin, 95302 Cergy-Pontoise, France
Received January 2015 Revised October 2015 Published January 2016
We study a damped semi-linear wave equation in a bounded domain of $\mathbb{R}^3$ with smooth boundary. It is proved that any $H^2$-smooth solution can be stabilised locally by a finite-dimensional feedback control supported by a given open subset satisfying a geometric condition. The proof is based on an investigation of the linearised equation, for which we construct a stabilising control satisfying the required properties. We next prove that the same control stabilises locally the non-linear problem.
Keywords: truncated observability inequality, distributed control, feedback stabilisation, non-linear wave equation.
Mathematics Subject Classification: Primary: 35L71, 93B52; Secondary: 93B0.
Citation: Kaïs Ammari, Thomas Duyckaerts, Armen Shirikyan. Local feedback stabilisation to a non-stationary solution for a damped non-linear wave equation. Mathematical Control & Related Fields, 2016, 6 (1) : 1-25. doi: 10.3934/mcrf.2016.6.1
Formula for sufficiently lengthy encryption key?
As you add length to an encryption key, at some point the message becomes impossible to brute-force decrypt. This is because at that point, if you go through all the possible keys, you'll get many meaningful decryptions just by random chance and you won't be able to determine which was the original message.
As you add length to the message though, these meaningful decryptions become rarer until there is once again a small enough number of them left to figure out which is the right one (if you know what you're looking for, that is).
Has anybody figured out a way to estimate the required key length for this obfuscation by quantity to happen for more popular encryption algorithms?
cryptography encryption
$\begingroup$ What does "meaningful" mean? Suppose I encrypt twice; while this does not add security (and may even decrease it!), you certainly won't recognize the real message by looking at the result of decrypting once. $\endgroup$ – Raphael♦ Jul 2 '15 at 12:29
That's not how security works. Sometimes you want to use encryption in circumstances in which there are two possible messages, and you want encryption to be secure even in these cases. That's because encryption is used as a building block in more complicated cryptographic protocols. Also, attacks could be based on multiple related (or even unrelated) messages.
Instead, security is based on the fact that nobody can try and decrypt a message with respect to all possible keys since there are too many, and there is no better way of finding the key. The second requirement is security of the cryptosystem, and currently there is no way to prove it. Instead, we use standard systems which are being researched and so far have revealed no weaknesses.
Suppose that our system is cryptographically secure. How can we guarantee that there are too many keys to try all of them? The standard approach is to agree on a key length in advance, usually 128 bits (or more). Classical (rather than quantum!) computation cannot run $2^{128}$ steps even if you parallelize it across atoms of the universe, and wait a million years. That's considered secure enough.
$\begingroup$ two many => too many $\endgroup$ – Brian Tompsett - 汤莱恩 Jul 2 '15 at 13:57
I very much doubt that anyone has done such an analysis. Trying every key isn't a plausible attack so it isn't worth defending against or studying its effectiveness.
Even if you can try a billion keys a second (i.e., roughly one key per clock cycle on a commodity PC), a $64$-bit key is too long to brute-force decrypt: trying all $2^{64}\approx 2\times 10^{19}$ keys would take nearly $600$ years. Every extra bit makes it take twice as long to try all the keys. In reality, keys are much longer than $64$ bits and are often hundreds or thousands of bits long. It would take many, many orders of magnitude longer than the current age of the universe to try every key.
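(Editorial note: a quick back-of-the-envelope check of these figures, assuming $10^9$ key trials per second.)

```python
keys_per_second = 1e9                    # assumed brute-force rate
seconds_per_year = 3600 * 24 * 365

for bits in (64, 80, 128):
    years = 2 ** bits / keys_per_second / seconds_per_year
    print(f"{bits}-bit key: about {years:.3g} years to try every key")
```

For 64 bits this gives roughly 585 years, consistent with the "nearly 600 years" stated above; every extra bit doubles it.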
David Richerby
$\begingroup$ 128 bit keys are pretty standard. $\endgroup$ – Yuval Filmus Jul 2 '15 at 13:59
$\begingroup$ As to the length of the keys representations, you have to distinguish symmetric (private key, 128 bits) vs. asymmetric (public key, 2048 bits) encryption. $\endgroup$ – JimmyB Jul 2 '15 at 14:58
Not quite sure I fully understand your question, but let me share some thoughts:
Provably safe encryption requires a key of the same length k (in bits) as the message (n), with 2^n possible keys and decryptions of the same ciphertext.
If you take a key of length k=n-1 instead, you'll have half as many keys and possible decryptions, and have 1 extra bit of the message to check for plausibility. Best case, you can discard half of the possible decryptions as implausible.
If you take a key of length k=n-2 instead, ... and so on.
If you take a key of length k=1, you've got two possible decryptions, one of which is correct while the other is more or less recognizably incorrect.
The expected number of plausible messages for a given key length then roughly depends on two factors:
a. n-k, and
b. the total number of plausible messages of length n.
Notice that, to detect whether a decrypted message is plausible or not, there has to be redundancy in the plaintext in the first place. If there were no redundancy in the plaintext, it would be indistinguishable from pure random content. This redundancy may exist inside the message, as in any natural-language text for example, or it may exist outside of the message itself, which is the case in the more general known-plaintext attacks.
However, measuring the number of plausible messages may prove to be difficult, and definitely depends on the kind of message. The (typical) plaintext's entropy might give a rough estimate. This is a factor that's independent of the encryption algorithm or key.
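(Editorial note: the classical way to quantify this is Shannon's unicity distance, the key entropy divided by the plaintext redundancy per character; the figures for English below are commonly quoted rough estimates and are assumptions, not results from this thread.)

```python
import math

R = math.log2(26)     # bits per character if letters were uniformly random (~4.70)
r = 1.5               # assumed entropy of English text, bits per character (rough figure)
D = R - r             # redundancy per character (~3.2 bits)

for key_bits in (56, 128, 256):
    unicity = key_bits / D
    print(f"{key_bits}-bit key: roughly {unicity:.0f} ciphertext characters "
          f"before only one plausible decryption is expected to remain")
```

Turned around, a message of n characters keeps more than one plausible decryption only while the key carries more than roughly n·D bits of entropy, which is the "obfuscation by quantity" the question asks about.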
JimmyB
Yuan Xu1,4,
Jing Cao1,
Yuriy S. Shmaliy2 &
Yuan Zhuang ORCID: orcid.org/0000-0003-3377-96583
Satellite Navigation volume 2, Article number: 22 (2021) Cite this article
Colored Measurement Noise (CMN) has a great impact on the accuracy of human localization in indoor environments with Inertial Navigation System (INS) integrated with Ultra Wide Band (UWB). To mitigate its influence, a distributed Kalman Filter (dKF) is developed for Gauss–Markov CMN with switching Colouredness Factor Matrix (CFM). In the proposed scheme, a data fusion filter employs the difference between the INS- and UWB-based distance measurements. The main filter produces a final optimal estimate of the human position by fusing the estimates from local filters. The effect of CMN is overcome by using measurement differencing of noisy observations. The tests show that the proposed dKF developed for CMN with CFM can reduce the localization error compared to the original dKF, and thus effectively improve the localization accuracy.
With the improvement of people's living standards, the population aging problem has become increasingly serious in China. Consequently, health care for elderly people has gradually received due attention and become a new research area (Li et al., 2016; Xu et al., 2018, 2019; Zhuang et al., 2019c). As an important technology to assist indoor medical care, localization and tracking of target personnel appear in many works (Chen et al., 2020; Tian et al., 2020), and many approaches have been developed to provide the localization with sufficient accuracy.
The Global Positioning System (GPS) is the most widely used localization solution (El-Sheimy & Youssef, 2020; Li et al., 2020; Mosavi & Shafiee, 2016). For example, GPS signals are used in Sekaran et al. (2020) to navigate a robot car. A drawback of GPS is that its signals are not always available in indoor environments (El-Sheimy & Li, 2021). Accordingly, short-range communication technologies, such as Radio Frequency IDentification (RFID) (Tzitzis et al., 2019), bluetooth, Wireless Fidelity (WiFi), and Ultra Wide Band (UWB), have been developed for GPS-denied spaces. For example, an active RFID tag-based pedestrian navigation scheme was proposed in Fu and Retscher (2009). In Zhuang and El-Sheimy (2015), WiFi was used to assist micro-electromechanical systems sensors for indoor pedestrian navigation. An improved UWB localization structure was investigated in Yu et al. (2019) for a harsh indoor environment.
The localization techniques discussed above can provide indoor navigation with sufficient accuracy. Compared to RFID, bluetooth, and WiFi, UWB-based solutions can provide more accurate localization. Consequently, several UWB-based solutions have been proposed in the last decades. However, these short-range communication and localization techniques require pre-placed devices that cannot always be deployed properly in indoor spaces. To overcome this problem, several self-contained localization structures have been proposed, such as the indoor pedestrian navigation scheme (Li et al., 2016) and a foot-mounted pedestrian navigation system based on the Inertial Navigation System (INS) (Gu et al., 2015).
INS-based navigation can be organized as a self-contained system. However, its accuracy is acceptable only over a short time interval due to the accumulation of drift errors. This shortcoming can be circumvented by integrating the INS with short-range communication technologies, as shown in Zhuang et al. (2019a). The INS/UWB integrated scheme is a typical example, but many other approaches exist. For example, in Xu et al. (2021) and Zhang et al. (2020), a UWB/INS integrated pedestrian navigation algorithm was proposed, which employed the INS to assist the UWB to improve robustness. Another INS/UWB integrated scheme was designed for quadrotor localization in Xu et al. (2021). Seamless indoor pedestrian tracking using the fusion of INS and UWB data is discussed in Xu et al. (2020). The advantages of the integrated schemes are higher accuracy and robustness.
It is obvious that data fusion can improve the localization accuracy (Zhao & Huang, 2020). In hybrid navigation systems, the Kalman Filter (KF) is typically used as a linear fusion model (Norrdine et al., 2016; Zhao et al., 2016; Zhuang et al., 2019b). For nonlinear models, the fusion is often organized using the Extended Kalman Filter (EKF) (Hsu et al., 2017), the Iterated Extended Kalman Filter (IEKF) (Xu et al., 2013), and the Unscented Kalman Filter (UKF) (Chen et al., 2015). Note that the above-mentioned filters are centralized. Although such filters can fuse sensor data, their drawbacks, compared to distributed filters, are higher operational complexity and poorer fault tolerance. Moreover, the sensor data can be affected by Colored Measurement Noise (CMN). For example, Fig. 1 displays the UWB-derived distance with colored and white measurement noise. One can thus infer that CMN is an important error factor in sensor data. It is worth noticing that, although the KF-based algorithms solve the problem of multi-sensor data fusion in an integrated navigation system and improve the localization accuracy, they are not efficient under the CMN observed in UWB data.
The distance with the colored measurement noise and white measurement noise
To mitigate the effect of CMN on the navigation accuracy in INS/UWB integrated schemes in indoor environments, in this paper we modify the distributed KF (dKF) under Gauss–Markov CMN with the assumption that the Colouredness Factor Matrix (CFM) can switch at some points due to unstable operation conditions. A local filter employs the differences between the INS-measured and UWB-measured distances. The main filter produces the final estimates by fusing the estimates provided by the local filters. The effect of CMN is mitigated in the local filters using measurement differencing. The experiments show that the dKF modified for CMN with a switching CFM can reduce the localization Root Mean Square Error (RMSE) by \(26.85\%\) compared to the standard dKF.
The rest of this work is structured as follows. First, the INS/UWB integration for the human localization scenario operating under CMN is described. Second, a dKF is developed for CMN with a switching CFM. Third, the experiment is introduced. Fourth, comparisons are made in terms of the localization accuracy given by the INS, the UWB, the dKF, and the dKF modified for CMN with a switching CFM. Finally, conclusions are drawn.
INS/UWB integrated human navigation under CMN
The proposed INS/UWB integrated human localization scheme affected by CMN is shown in Fig. 2. In this structure, the INS and UWB subsystems work in parallel. The fusing filter is organized in such a way that one main filter works together with M sub-filters. The jth sub-filter, \(j \in [1,M]\), is employed to estimate the target human position by fusing the ranges \(r^{UWB}_j\) and \(r^{INS}_j\) from the target person to the jth UWB Reference Node (RN) under CMN at a discrete time index n. The main filter fuses the results of the sub-filters to produce an optimal estimate.
INS/UWB integrated human localization scheme for distributed localization under CMN
Design of the dKF for CMN with switching CFM
In this section, we modify the dKF under Gauss–Markov CMN. First, we consider the state-space model of the navigation problem. Then, the dKF is designed under CMN assuming a switching CFM. Finally, the main filter fuses the results of the sub-filters.
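The exact fusion rule used by the main filter is developed later in the paper; as a hedged illustration of the general idea only, the sketch below shows the standard information-weighted (federated) fusion of local estimates, which is one common choice in distributed filtering and is assumed here purely for demonstration:

```python
import numpy as np

def fuse_local_estimates(states, covariances):
    """Information-weighted fusion of local sub-filter estimates.

    states      : list of local state estimates x_j, each of shape (4,)
    covariances : list of local error covariances P_j, each of shape (4, 4)
    Returns the fused state and covariance, assuming independent local errors.
    """
    infos = [np.linalg.inv(P) for P in covariances]
    P_fused = np.linalg.inv(sum(infos))
    x_fused = P_fused @ sum(I @ x for I, x in zip(infos, states))
    return x_fused, P_fused

# Toy example with two sub-filters
x1, P1 = np.array([1.0, 0.1, 2.0, 0.0]), np.diag([0.4, 0.1, 0.4, 0.1])
x2, P2 = np.array([1.2, 0.0, 1.8, 0.1]), np.diag([0.2, 0.1, 0.2, 0.1])
x, P = fuse_local_estimates([x1, x2], [P1, P2])
print(x, np.diag(P))
```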
Sub-filters for CMN
The state equation representing the 2D human dynamics and related to the jth sub-filter is described by:
$$\begin{aligned} {\varvec{x}}_n^{(j)} &= {{\varvec{F}}^{(j)}}{\varvec{x}}_{n - 1}^{(j)} + {\varvec{w}}_n^{(j)}\\&= { \left[ {\begin{array}{*{20}{c}} 1&\quad {{T^{(j)}}}&\quad 0&\quad 0\\ 0&\quad 1&\quad 0&\quad 0\\ 0&\quad 0&\quad 1&\quad {{T^{(j)}}}\\ 0&\quad 0&\quad 0&\quad 1 \end{array}} \right] {\varvec{x}}_{n - 1}^{(j)} + {\varvec{w}}_n^{(j)}} \end{aligned}$$
where the state vector is defined as
$$\begin{aligned} {\varvec{x}}_n^{(j)} = \left[ {\begin{array}{*{20}c} {\delta {\mathrm{Pos}}_n^{E{(j)}} } &\quad {\delta {\mathrm{Vel}}_n^{E{(j)}} } & \quad {\delta {\mathrm{Pos}}_n^{N{(j)}} } &\quad {\delta {\mathrm{Vel}}_n^{N{(j)}} } \\ \end{array}} \right] ^{\mathrm{T}}, \end{aligned}$$
in which \(( {\delta {\mathrm{Pos}}_n^{E{(j)}} ,\delta {\mathrm{Pos}}_n^{N{(j)}} })\) and \(( {\delta {\mathrm{Vel}}_n^{E{(j)}} ,\delta {\mathrm{Vel}}_n^{N{(j)}} } )\) are the position and velocity errors in east and north directions, \(T^{(j)}\) is the sample time for the jth sub-filter, \({\varvec{w}}_n^{(j)} \sim {\mathcal {N}} ({\varvec{0}}, {\varvec{Q}}^{(j)})\) is noise in the jth sub-filter.
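A minimal sketch of this state model for one sub-filter is given below (an editorial addition; the process-noise covariance Q is not specified in this excerpt, so a white-acceleration form with an assumed intensity is used purely as an example):

```python
import numpy as np

def cv_error_model(T):
    """State matrix F of (1) for one sub-filter, plus an assumed
    white-acceleration process-noise covariance Q (illustrative only)."""
    F = np.array([[1.0, T,   0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, T],
                  [0.0, 0.0, 0.0, 1.0]])
    q = 0.1                                    # assumed acceleration-noise intensity
    Qblock = q * np.array([[T**3 / 3, T**2 / 2],
                           [T**2 / 2, T]])
    Q = np.block([[Qblock, np.zeros((2, 2))],
                  [np.zeros((2, 2)), Qblock]])
    return F, Q

F, Q = cv_error_model(T=0.1)
print(F)
```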
The observation equation corresponding to the data obtained by the jth sub-filter is written as
$$\begin{aligned} y_n^{(j)} &= \delta {\mathrm{r}}_n^{(j)} = {\mathrm{r}}_n^{I(j)}- {\mathrm{r}}_n^{U(j)}\\ & = \frac{{\delta _{{\mathrm{x}}n}^{(j)}}}{{\sqrt{\delta _{{\mathrm{x}}n}^{{{(j)}^2}} + \delta _{{\mathrm{y}}n}^{{{(j)}^2}}} }}\delta {\mathrm{Pos}}_n^{E(j)} + \frac{{\delta _{{\mathrm{y}}n}^{(j)}}}{{\sqrt{\delta _{{\mathrm{x}}n}^{{{(j)}^2}} + \delta _{{\mathrm{y}}n}^{{{(j)}^2}}} }}\delta {\mathrm{Pos}}_n^{N(j)} + {\varvec{v}}_n^{(j)}\\ & = {\left[ {\begin{array}{*{20}{c}} {\frac{{\delta _{{\mathrm{x}}n}^{(j)}}}{{\sqrt{\delta _{{\mathrm{x}}n}^{{{(j)}^2}} + \delta _{{\mathrm{y}}n}^{{{(j)}^2}}} }}}\\ 0\\ {\frac{{\delta _{{\mathrm{y}}n}^{(j)}}}{{\sqrt{\delta _{{\mathrm{x}}n}^{{{(j)}^2}} + \delta _{{\mathrm{y}}n}^{{{(j)}^2}}} }}}\\ 0 \end{array}} \right] ^{\mathrm{T}}}{\varvec{x}}_n^{(j)} + {\varvec{v}}_n^{(j)}\\ & = {\varvec{H}}_n^{(j)}{\varvec{x}}_n^{(j)} + {\varvec{v}}_n^{(j)} \end{aligned}$$
where \(j \in [1,M]\), \(\delta _{{\mathrm {x}}n}^{(j)} = {\mathrm{Pos}}_n^{E,I} - x^{(j)}\), \(\delta _{{\mathrm {y}}n}^{(j)} = {\mathrm{Pos}}_n^{N,I} - y^{(j)}\), \(\left( {{\mathrm{Pos}}_n^{E,I} ,{\mathrm{Pos}}_n^{N,I}} \right)\) denotes the INS positions in the east and north directions, \(\left( x^{(j)}, y^{(j)} \right)\) denotes the coordinates of the jth UWB RN, and \({\varvec{v}}_n^{(j)}\) is the Gauss–Markov CMN represented with
$$\begin{aligned} {\varvec{v}}_n^{\left( j \right) } = \alpha _n^{\left( j \right) } {\varvec{v}}_{n-1}^{\left( j \right) } + \gamma _n^{\left( j \right) } \end{aligned}$$
where \(\gamma _n^{\left( j \right) } \sim {\mathcal {N}} ({\varvec{0}},{\varvec{R}})\) is white Gaussian driving noise and \(\alpha _n^{\left( j \right) }\) is the CFM.
To address the effects of CMN and apply filtering algorithms, we use measurement differencing and write the new observation equation as
$$\begin{aligned} {\varvec{z}}_n^{\left( j \right) }&= {\varvec{y}}_n^{\left( j \right) } - \alpha _n^{\left( j \right) } {\varvec{y}}_{n-1}^{\left( j \right) } \nonumber \\&= {\varvec{H}}_n^{\left( j \right) } {\varvec{x}}_n^{\left( j \right) } + {\varvec{v}}_n^{\left( j \right) } - \alpha _n^{\left( j \right) } {\varvec{H}}_{n - 1}^{\left( j \right) } {\varvec{x}}_{n - 1}^{\left( j \right) } \nonumber \\&\quad - \alpha _n^{\left( j \right) }{\varvec{v}}_{n - 1}^{\left( j \right) } \end{aligned}$$
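As a quick numerical illustration of why (4) helps, the following sketch (an editorial addition using a scalar toy model with assumed values α = 0.8 and a driving-noise standard deviation of 0.1) simulates the Gauss–Markov CMN of (3) and checks that the differenced sequence is essentially uncorrelated in time:

```python
import numpy as np

rng = np.random.default_rng(0)
a, sigma_g, N = 0.8, 0.1, 5000           # assumed colouredness factor and driving-noise std

# Gauss-Markov colored measurement noise: v_n = a * v_{n-1} + g_n
v = np.zeros(N)
for n in range(1, N):
    v[n] = a * v[n - 1] + sigma_g * rng.standard_normal()

y = 3.0 + v                              # measurements around a constant "H x" term
z = y[1:] - a * y[:-1]                   # measurement differencing, as in (4)

def lag1_corr(u):
    u = u - u.mean()
    return float(u[1:] @ u[:-1] / (u @ u))

print("lag-1 autocorrelation of v:", round(lag1_corr(v), 3))   # close to a = 0.8
print("lag-1 autocorrelation of z:", round(lag1_corr(z), 3))   # close to 0 (whitened)
```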
From (1) and (3) we obtain
$$\begin{aligned} {\varvec{x}}_{n - 1}^{(j)}&= {{\varvec{F}}^{(j)^{-1}}} ( {{\varvec{x}}_n^{(j)} - {\varvec{w}}_n^{(j)}} ) \end{aligned}$$
$$\begin{aligned} {\varvec{v}} _{n - 1}^{(j)}&= {\alpha _n^{(j)^{-1}} } ( {{\varvec{v}} _n^{(j)} - \gamma _n^{(j)}} ) \end{aligned}$$
and substitute them into (4), giving
$$\begin{aligned} {\varvec{z}}_n^{(j)}&= ( {\varvec{H}}_n^{(j)} - {\varvec{T}}_n^{(j)} ) {\varvec{x}}_n^{(j)} + {\varvec{T}}_n^{(j)} {\varvec{w}}_n^{(j)} + \gamma _n^{(j)} \nonumber \\&= {\varvec{D}}_n^{(j)} {\varvec{x}}_n^{(j)} + {\bar{\gamma }}_n^{(j)} \end{aligned}$$
where \({\varvec{T}}_n^{(j)} = \alpha _n^{(j)} {\varvec{H}}_n^{(j)} {\varvec{F}}^{(j)^{-1}}\), \({\varvec{D}}_n^{(j)} = {\varvec{H}}_n^{(j)} - {\varvec{T}}_n^{(j)}\), \({{\bar{\gamma }}} _n^{(j)} ={\varvec{T}}_n^{(j)} {\varvec{w}}_n^{(j)} + \gamma _n^{(j)}\) and \({\bar{\gamma }}_n^{(j)} \sim {\mathcal {N}} ( {{\varvec{0}},{\bar{{\varvec{R}}}} } )\) is white Gaussian noise
$$\begin{aligned} {\bar{\gamma }}_n^{(j)} = {\varvec{T}}_n^{(j)} {\varvec{w}}_n^{(j)} + \gamma _n^{(j)} \end{aligned}$$
with the covariance
$$\begin{aligned} \overline{\varvec{R}} &= {\mathrm{E}}\{ {{\bar{\gamma }}} _{\mathrm{n}}^{({\mathrm{j}})}{{\bar{\gamma }}} _{\mathrm{n}}^{{{({\mathrm{j}})}^{\mathrm{T}}}}\} = {\varvec{T}}_{\mathrm{n}}^{({\mathrm{j}})}{{\varvec{Q}}^{({\mathrm{j}})}}{\varvec{T}}_{\mathrm{n}}^{{{({\mathrm{j}})}^{\mathrm{T}}}}+ {\varvec{R}}\\ & = {\varvec{T}}_n^{(j)}{\varvec{\Phi }}_n^{(j)} + {\varvec{R}} \end{aligned}$$
where \({\varvec{\Phi }}_n^{(j)} = {\varvec{Q}}^{(j)} {\varvec{T}}_n^{(j)^T}\). It follows from (8) that the observation noise \({\bar{\gamma }}_n^{(j)}\) is correlated with the system noise \({\varvec{w}}_n^{(j)}\), so the KF cannot be applied straightforwardly. To de-correlate the noise, we follow Shmaliy et al. (2020) and modify the state equation (1) as
$$\begin{aligned} {\varvec{x}}_n^{(j)} &= {{\varvec{F}}^{(j)}}{\varvec{x}}_{n - 1}^{(j)} + {\varvec{w}}_n^{(j)} + {\varvec{\beta }}_n^{(j)}[{\varvec{z}}_n^{(j)} - ({\varvec{D}}_n^{(j)}{\varvec{x}}_n^{(j)} + {\bar{{{\varvec{\gamma }}} }}_n^{(j)})]\\ & = ({\varvec{I}} - {\varvec{\beta }}_n^{(j)}{\varvec{D}}_n^{(j)}){{\varvec{F}}^{(j)}}{\varvec{x}}_{n - 1}^{(j)} + {\varvec{\beta }}_n^{(j)}{\varvec{z}}_n^{(j)}\\ &\quad + ({\varvec{I}} - {\varvec{\beta }}_n^{(j)}{\varvec{D}}_n^{(j)}){\varvec{w}}_n^{(j)} - {\varvec{\beta }}_n^{(j)}{\bar{{{\varvec{\gamma }}} }}_n^{(j)}\\ & = {\varvec{A}}_n^{(j)}{\varvec{x}}_{n - 1}^{(j)} + {\varvec{u}}_n^{(j)} + {\varvec{\eta }}_n^{(j)} \end{aligned}$$
where \({{\varvec{\eta }}_n^{(j)} } \sim {\mathcal {N}} ({\varvec{0}},{\varvec{\Theta }}_n^{(j)})\) has the covariance
$$\begin{aligned} {\varvec{\Theta }}_{\mathbf {n}}^{({\mathbf {j}})} &= {\mathbf {E}}\left\{ {[({\mathbf {I}} - {\varvec{\beta }}_{\mathbf {n}}^{({{j}})}{\mathbf {D}}_{\mathbf {n}}^{({{j}})}){\mathbf {w}}_{\mathbf {n}}^{({{j}})} - {\varvec{\beta }}_{\mathbf {n}}^{({{j}})}{ \bar{{{\boldsymbol{\gamma }}} }}_{\mathbf {n}}^{({{j}})}]} \right. \\ &\quad \left. {{{[({\mathbf {I}} - {\varvec{\beta }}_{\mathbf {n}}^{({{j}})}{\mathbf {D}}_{\mathbf {n}}^{({{j}})}){\mathbf {w}}_{\mathbf {n}}^{({{j}})} - {\varvec{\beta }}_{\mathbf {n}}^{({{j}})}{\bar {{{\boldsymbol{\gamma }}} }}_{\mathbf {n}}^{({{j}})}]}^{\mathrm{T}}}} \right\} \\ & = ({\mathbf {I}} - {\varvec{\beta }}_{\mathbf {n}}^{({{j}})}{\mathbf {H}}_{\mathbf {n}}^{({{j}})}){\mathbf {Q}}_{\mathbf {n}}^{({{j}})}{({\mathbf {I}} - {\varvec{\beta }}_{\mathbf {n}}^{({{j}})}{\mathbf {H}}_{\mathbf {n}}^{({{j}})})^{\mathrm{T}}}\\ &\quad + {\varvec{\beta }}_{\mathbf {n}}^{({{j}})}{\mathbf {R}}{({\varvec{\beta }}_{\mathbf {n}}^{({{j}})})^{\mathrm{T}}} \end{aligned}$$
The noise vectors \({{\varvec{\eta }}_n^{(j)} }\) and \({{{\bar{\gamma }}} _n^{(j)} }\) will be de-correlated if the condition \({\mathrm {E}} \{ {{\varvec{\eta }}_n^{(j)} ( {{{\bar{\gamma }}} _n^{(j)} } )^{\mathrm{T}} } \} = 0\) is satisfied, which can be achieved with
$$\begin{aligned} {\varvec{\beta }}_n^{(j)}&= \Phi _n^{(j)} ( {{\varvec{H}}_n^{(j)} \Phi _n^{(j)} + {\varvec{R}}} )^{ - 1}\,, \end{aligned}$$
$$\begin{aligned} {\varvec{\Theta} }_n^{(j)}&= ( {{\varvec{I}} - {\varvec{\beta }}_n^{(j)} {\varvec{H}}_n^{(j)} } ){\varvec{Q}}^{( j )} ( {{\varvec{I}} - {\varvec{\beta }}_n^{(j)} {\varvec{D}}_n^{(j)} } )^{\mathrm{T}} \end{aligned}$$
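As an illustration of these de-correlation quantities, the following sketch (our own, with made-up matrices) builds \({\varvec{T}}_n\), \({\varvec{D}}_n\), \({\varvec{\Phi }}_n\), the gain \({\varvec{\beta }}_n\) and the covariances \(\bar{{\varvec{R}}}\) and \({\varvec{\Theta }}_n\) for a single range channel; H is treated as a 1 × 4 row and R as a 1 × 1 covariance.

```python
import numpy as np

# Sketch of the de-correlation quantities above; F, Q, R, H and alpha are assumed values.

T_s, alpha = 0.45, 0.5
F = np.array([[1, T_s, 0, 0],
              [0, 1,   0, 0],
              [0, 0,   1, T_s],
              [0, 0,   0, 1]], dtype=float)
Q = 0.01 * np.eye(4)
R = np.array([[0.04]])
H = np.array([[0.8, 0.0, 0.6, 0.0]])                  # example direction cosines

T_n = alpha * H @ np.linalg.inv(F)                    # T_n = alpha_n H_n F^{-1}
D_n = H - T_n                                         # D_n = H_n - T_n
Phi_n = Q @ T_n.T                                     # Phi_n = Q T_n^T
R_bar = T_n @ Phi_n + R                               # covariance of the differenced noise
beta_n = Phi_n @ np.linalg.inv(H @ Phi_n + R)         # de-correlating gain
Theta_n = (np.eye(4) - beta_n @ H) @ Q @ (np.eye(4) - beta_n @ D_n).T  # noise covariance
```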
Given this de-correlation, the sub-KF algorithm operating under CMN is described in Algorithm 1. Unlike the standard KF, Algorithm 1 requires the CFM \(\alpha _n^{(j)}\) at each n. To set \(\alpha _n^{(j)}\) properly, we account for possible time variations in \(\alpha _n^{(j)}\) and perform CFM switching by the following steps:
Set several possible values of \(\alpha _n^{(j),i}, i\in [1,q]\).
Run q sub-KFs with \(\alpha _n^{(j),i}, i\in [1,q]\) in parallel.
Compute the Mahalanobis distance (Mahalanobis, 1936)
$$\begin{aligned} L_n^{\alpha _n^{(j),i} } = ( {{\varvec{z}}_n^{(j)} - {\varvec{D}}_n^{(j)} {{\hat{\varvec{x}}}}_n^{(j)-} } )^T {{\varvec{R}}^{(j)^{-1}} } ( {{\varvec{z}}_n^{(j)} - {\varvec{D}}_n^{(j)} {{\hat{\varvec{x}}}}_n^{(j)-} } ) \end{aligned}$$
Find \(\alpha _{n,{\mathrm {opt}}}^{(j)}\) by solving the minimization problem
$$\begin{aligned} \alpha _{n,{\mathrm {opt}}}^{(j)} = \mathop {\arg \min }\limits _{\alpha _n^{(j),i} } {L_n^{\alpha _n^{(j),i} } } \end{aligned}$$
A pseudo code of the sub-KF for CMN with switch CFM is listed as Algorithm 2 and the structure of this filter is shown in Fig. 3. Having selected a proper range for CFM, this algorithm determines \(\alpha _{n,{\mathrm {opt}}}^{(j)}\), which is further used in the main filter.
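A compact sketch of the switching logic in steps 1–4 is shown below (helper names are ours): each of the q candidate sub-KFs supplies its de-correlated observation matrix, prior estimate and measurement covariance for the current epoch, and the candidate with the smallest Mahalanobis distance is retained.

```python
import numpy as np

# Sketch of the CFM switching described above; inputs are assumed to come from
# the q parallel sub-KFs run with different candidate alpha values.

def mahalanobis(z, D, x_prior, R):
    """L = (z - D x^-)^T R^{-1} (z - D x^-) for one candidate CFM."""
    innov = z - D @ x_prior
    return float(innov.T @ np.linalg.inv(R) @ innov)

def switch_cfm(z, candidates):
    """candidates: list of dicts with keys 'alpha', 'D', 'x_prior', 'R'."""
    L = [mahalanobis(z, c['D'], c['x_prior'], c['R']) for c in candidates]
    best = int(np.argmin(L))
    return candidates[best]['alpha'], best
```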
Structure of the sub-KF for CMN with switch CFM
Distributed KF for CMN with switch CFM
The dKF algorithm used in the proposed navigation system is responsible for fusing data collected from local sub-filters and estimating the object position \(\hat{\varvec{x}}_n\) and localization error covariance \({\varvec{P}}_n\) as
$$\begin{aligned} \hat{\varvec{x}}_n&= {\varvec{P}}_n ( {\varvec{P}}_n^{(1)^{-1}} \hat{\varvec{x}}_n^{(1)} + {\varvec{P}}_n^{(2)^{-1}} \hat{\varvec{x}}_n^{(2)} \nonumber \\&\quad + \dots + {\varvec{P}}_n^{(M)^{-1}} \hat{\varvec{x}}_n^{(M)}) \,, \end{aligned}$$
$$\begin{aligned} {\varvec{P}}_n^{-1}&= {\varvec{P}}_n^{(1)^{-1}} + {\varvec{P}}_n^{(2)^{-1}} + \cdots + {\varvec{P}}_n^{(M)^{-1}} \end{aligned}$$
A pseudo code of the main dKF for CMN with switch CFM is listed as Algorithm 3.
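The fusion step itself reduces to an information-weighted average of the local estimates, as the two equations above show. A minimal sketch with two hypothetical sub-filter outputs (made-up numbers, not real estimates):

```python
import numpy as np

# Sketch of the information-weighted fusion of M local estimates x_j with covariances P_j.

def fuse(estimates, covariances):
    """Fuse M local estimates x_j (4,) with covariances P_j (4, 4)."""
    P_inv = sum(np.linalg.inv(P) for P in covariances)   # P^{-1} = sum_j P_j^{-1}
    P = np.linalg.inv(P_inv)
    x = P @ sum(np.linalg.inv(Pj) @ xj for xj, Pj in zip(estimates, covariances))
    return x, P

x1, P1 = np.array([0.10, 0.00, -0.20, 0.00]), 0.5 * np.eye(4)
x2, P2 = np.array([0.30, 0.10, -0.10, 0.00]), 1.0 * np.eye(4)
x_fused, P_fused = fuse([x1, x2], [P1, P2])
```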
Experimental setup
To test the dKF designed under CMN with switch CFM, we used the INS/UWB human localization system deployed in the No. 14 building of the University of Jinan, Jinan, China, as shown in Fig. 4. The target person, equipped with the experimental devices, is pictured in Fig. 5. We conducted two tests using the INS and UWB localization systems described in Xu et al. (2019). The testbed worked as follows. UWB RNs were placed in the indoor space at designed positions, and a Blind Node (BN) was installed on the moving target. The target person wore an Inertial Measurement Unit (IMU) to obtain the INS results, and an encoder was used to measure the distance travelled from the start point. In each experiment, the target travelled along a planned trajectory that was complicated by obstacles. The method for obtaining the ground-truth coordinates can be found in Xu et al. (2019). It has two phases: (1) establishing a mapping between the distance walked along the planned path from the start point and the ground-truth coordinates, and (2) using the encoder to measure the walking distance and compute the ground-truth coordinates from the constructed mapping.
Testing experimental setup
The target human
Localization errors
To test this filter, we specify the state space model for \(q=5\) and \(T^{(j)} = 0.45\, {\mathrm {s}}, j\in [1,4]\), with \(\alpha _n^{(j),1}=0.1\), \(\alpha _n^{(j),2}=0.3\), \(\alpha _n^{(j),3}=0.5\), \(\alpha _n^{(j),4}=0.7\), \(\alpha _n^{(j),5}=0.9\).
Error comparison of KF and dKF
Since the models (1)–(7) are linear, linear data fusion is used in this paper. The localization error \({\mathrm{Pos}}\_{\mathrm{error}}\) produced by the INS, UWB, KF, and dKF in Test 1 is computed as
$$\begin{aligned} {\mathrm{Pos}}\_{\mathrm{error}} = \sqrt{\left({\mathrm{Pos}}^{\mathrm{E}} - {{\mathrm{Pos}}^{\mathrm{E,R}}}\right) ^2 + \left( {\mathrm{Pos}}^{\mathrm{N}} - {\mathrm {Pos}}^{\mathrm{N,R}} \right) ^2 } \end{aligned}$$
where \(\left( {\mathrm {Pos}}^{\mathrm{E,R}},{\mathrm { Pos}}^{\mathrm{N,R}}\right)\) denote the reference coordinates. The cumulative distribution functions (CDFs) are sketched in Fig. 6. From this figure, one can see that the KF and dKF can reduce the localization error over the INS and UWB. Also, the dKF has a better performance than the KF. The INS, UWB, KF, and dKF results in Test 1 are given in Table 1, which suggests that the dKF gives the smallest localization error.
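For reference, the localization error defined above and its empirical CDF (as plotted in Fig. 6) can be computed as in the short sketch below, where `est` and `ref` are illustrative (N, 2) arrays of east/north positions, not the study data:

```python
import numpy as np

# Horizontal localization error and its empirical CDF.

def pos_error(est, ref):
    return np.sqrt((est[:, 0] - ref[:, 0]) ** 2 + (est[:, 1] - ref[:, 1]) ** 2)

def empirical_cdf(errors):
    e = np.sort(errors)
    p = np.arange(1, e.size + 1) / e.size
    return e, p            # plot p against e to obtain the CDF curve
```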
Table 1 Position RMSEs produced by the INS, UWB, KF, and dKF in Test 1
The CDFs of \({\mathrm {Pos}}\_{\mathrm{error}}\) produced by the KF and dKF in Test 1
The planned trajectory and the trajectories estimated by the INS, UWB, dKF for CMN, and dKF for CMN with switch CFM in Test 1 are shown in Fig. 7. As can be seen, the errors in the INS outputs accumulate, but at a lower rate due to the implementation of the Zero-velocity Update (ZUPT). Figures 8 and 9 show the estimates in the east and north directions produced by the INS, UWB, dKF, and dKF for CMN with switch CFM in Test 1. Compared with the INS, the UWB trajectory is close to the planned path. It is also noticeable that the estimate produced by the dKF is not as accurate as that of the dKF for CMN with switch CFM, whose outputs are closer to the planned path. Figure 10 sketches the CDFs of the localization errors \({\mathrm{Pos}}\_{\mathrm{error}}\) produced by the different estimators in Test 1. It is clear that the proposed dKF for CMN with switch CFM produces the smallest errors compared with the INS, UWB, and dKF. Table 2 shows the localization results, indicating that the dKF for CMN with switch CFM has the smallest errors.
Planned trajectory and trajectories given by the INS, UWB, dKF for CMN, and dKF for CMN with switch CFM in Test 1
Estimates in the east direction produced by the INS, UWB, dKF, and dKF for CMN with switch CFM in test 1
Estimates in the north direction produced by the INS, UWB, dKF, and dKF for CMN with switch CFM in test 1
The CDFs of \({\varvec{Pos}}\_{\varvec{error}}\) produced by the INS, UWB, dKF, and dKF for CMN with switch CFM in test 1
Table 2 Position RMSEs produced by the INS, UWB, dKF for CMN, and dKF for CMN with switch CFM in test 1
We conducted another test to verify the performance of the proposed method. The planned trajectory and trajectories given by the INS, UWB, dKF, and dKF for CMN with switch CFM in Test 2 are shown in Fig. 11. One can see that the UWB trajectory is close to the planned path, unlike that of the INS. It is also noticeable that the estimate produced by the dKF is not as accurate as that of the dKF for CMN with switch CFM, whose output is much closer to the planned path. The CDFs of \({\mathrm{Pos}}\_{\mathrm{error}}\) produced by the INS, UWB, dKF, and dKF for CMN with switch CFM in Test 2 are shown in Fig. 12. We conclude that the proposed dKF for CMN with switch CFM gives the smallest error of 0.9, a reduction of about 27.4% relative to the dKF (Table 3).
Planned trajectory and trajectories given by the INS, UWB, dKF, and dKF for CMN with switch CFM in Test 2
The CDFs of \({\mathrm {Pos}}\_{\mathrm{error}}\) produced by the INS, UWB, dKF, and dKF for CMN with switch CFM in test 2
Performance of the dKF for CMN with switch CFM and with constant CFM
We also compare the performance of the dKF for CMN with switch CFM with that of the dKF for CMN with a constant CFM. The CDFs of \({\mathrm{Pos}}\_{\mathrm{error}}\) produced by the dKF for CMN with switch CFM and with constant CFM in Test 1 and Test 2 are shown in Figs. 13 and 14, respectively. For this comparison, we set \(\alpha ^{(j)}_{\mathrm{opt}}=0.15\), \(j\in {[}1,4{]}\), in the constant-CFM filter for all data. From these figures, we infer that the performances of the two filters are similar in both tests. It should be emphasized, however, that the constant CFM was set offline in this work, using all of the test data; such a procedure is not suitable for real-time applications. In contrast, the proposed dKF for CMN with switch CFM obtains the CFM adaptively while achieving a similar performance. Figures 13 and 14 thus show that the dKF for CMN with switch CFM can operate online with results very close to the optimal ones.
The CDFs of \({\mathrm {Pos}}\_{\mathrm{error}}\) produced by the dKF for CMN with switch CFM and constant CFM in test 1
The dKF designed in this paper under CMN with switch CFM has demonstrated the ability to improve, online, the performance of the INS/UWB integrated human localization system in indoor environments. The effect was achieved by determining the optimal CFM through the minimization problem and by modifying the KF-based fusion filter, so that the effect of the CMN is essentially mitigated in the output of the main dKF. The tests demonstrated a better performance of the dKF for CMN with switch CFM over the dKF for CMN, and show that the accuracy of INS/UWB integrated human localization can be improved compared with the standard dKF.
The raw data were provided by University of Jinan, Jinan, China. The raw data used in this study are available from the corresponding author upon request.
CDF:
Cumulative distribution function
CFM:
Colouredness factor matrix
CMN:
Colored measurement noise
dKF:
Distributed Kalman filter
EKF:
Extended Kalman filter
IEKF:
Iterated extended Kalman filter
IMU:
Inertial measurement unit
INS:
Inertial navigation system
KF:
Kalman filter
RFID:
Radio frequency identification
UKF:
Unscented Kalman filter
UWB:
Ultra wide band
UWB RN:
UWB reference node
UWB BN:
UWB blind node
ZUPT:
Zero-velocity update
Chen, G., Meng, X., Wang, Y., Zhang, Y., Peng, T., & Yang, H. (2015). Integrated WiFi/PDR/Smartphone using an unscented Kalman filter algorithm for 3D indoor localization. Sensors, 15(9), 24595–24614.
Chen, N., Li, M., Yuan, H., Su, X., & Li, Y. (2020). Survey of pedestrian detection with occlusion. Complex and Intelligent Systems, 7, 1–11.
El-Sheimy, N., & Li, Y. (2021). Indoor navigation: State of the art and future trends. Satellite Navigation, 2, 7.
El-Sheimy, N., & Youssef, A. (2020). Inertial sensors technologies for navigation applications: State of the art and future trends. Satellite Navigation, 1, 2.
Fu, Q., & Retscher, G. (2009). Active RFID trilateration and location fingerprinting based on RSSI for pedestrian navigation. Journal of Navigation, 62(2), 323–340.
Gu, Y., Song, Q., Li, Y., & Ma, M. (2015). Foot-mounted pedestrian navigation based on particle filter with an adaptive weight updating strategy. Journal of Navigation, 68(01), 23–38.
Hsu, Y. L., Wang, J. S., & Chang, C. W. (2017). A wearable inertial pedestrian navigation system with quaternion-based extended Kalman filter for pedestrian localization. IEEE Sensors Journal, 17(10), 3193–3206.
Li, R., Zheng, S., Wang, E., Chen, J., Feng, S., Wang, D., & Dai, L. (2020). Inertial sensors technologies for navigation applications: State of the art and future trends. Satellite Navigation, 1, 12.
Li, Y., Zhuang, Y., Lan, H., Zhang, P., Niu, X., & El-Sheimy, N. (2016). Self-contained indoor pedestrian navigation using smartphone sensors and magnetic features. IEEE Sensors Journal, 16(19), 7173–7182.
Mahalanobis, P. C. (1936). On the generalised distance in statistics. Proceedings of the National Institute of Sciences, 2, 49–55.
Mosavi, M. R., & Shafiee, F. (2016). Narrowband interference suppression for GPS navigation using neural networks. GPS Solutions, 20(3), 341–351.
Norrdine, A., Kasmi, Z., & Blankenbach, J. (2016). Step detection for ZUPT-aided inertial pedestrian navigation system using foot-mounted permanent magnet. IEEE Sensors Journal, 16(17), 6766–6773.
Sekaran, J. F., Kaluvan, H., & Irudhayaraj, L. (2020). Modeling and analysis of GPS-GLONASS navigation for car like mobile robot. Journal of Electrical Engineering and Technology, 15(5), 927–935.
Shmaliy, Y. S., Zhao, S., & Ahn, C. K. (2020). Kalman and UFIR state estimation with coloured measurement noise using backward Euler method. IET Signal Process, 14(2), 64–71.
Tian, Q., Wang, I. K., & Salcic, Z. (2020). An INS and UWB fusion approach with adaptive ranging error mitigation for pedestrian tracking. IEEE Sensors Journal, 20(8), 4372–4381.
Tzitzis, A., Megalou, S., Siachalou, S., Emmanouil, T. G., Kehagias, A., Yioultsis, T. V., & Dimitriou, A. G. (2019). Localization of RFID tags by a moving robot, via phase unwrapping and non-linear optimization. IEEE Journal of Radio Frequency Identification, 3(4), 216–226.
Xu, Y., Ahn, C. K., Shmaliy, Y. S., Chen, X., & Bu, L. (2019). Indoor INS/UWB-based human localization with missing data utilizing predictive UFIR filtering. IEEE/CAA Journal of Automatica Sinica, 6(4), 952–960.
Xu, Y., Chen, X., & Li, Q. (2013). Autonomous integrated navigation for indoor robots utilizing on-line iterated extended Rauch–Tung–Striebel smoothing. Sensors, 13(12), 15937–15953.
Xu, Y., Li, Y., Ahn, C. K., & Chen, X. (2020). Seamless indoor pedestrian tracking by fusing INS and UWB measurements via LS-SVM assisted UFIR filter. Neurocomputing, 388, 301–308.
Xu, Y., Shmaliy, Y. S., Ahn, C. K., Shen, T., & Zhuang, Y. (2021). Tightly-coupled integration of INS and UWB using fixed-lag extended UFIR smoothing for quadrotor localization. IEEE Internet of Things Journal, 8(3), 1716–1727.
Xu, Y., Shmaliy, Y. S., Ahn, C. K., Tian, G., & Chen, X. (2018). Robust and accurate UWB-based indoor robot localisation using integrated EKF/EFIR filtering. IET Radar Sonar and Navigation, 12(7), 750–756.
Yu, K., Wen, K., Li, Y., Zhang, S., & Zhang, K. (2019). A novel NLOS mitigation algorithm for UWB localization in harsh indoor environments. IEEE Transactions on Vehicular Technology, 68(1), 686–699.
Zhang, Y., Tan, X., & Changsheng, Z. (2020). UWB/INS integrated pedestrian positioning for robust indoor environments. IEEE Sensors Journal, 20(23), 14401–14409.
Zhao, S., & Huang, B. (2020). Trial-and-error or avoiding a guess? Initialization of the Kalman filter. Automatica, 121, 109184.
Zhao, S., Shmaliy, Y. S., & Liu, F. (2016). Fast Kalman-like optimal unbiased FIR filtering with applications. IEEE Transactions on Signal Processing, 64(9), 2284–2297.
Zhuang, Y., & El-Sheimy, N. (2015). Tightly-coupled integration of WiFi and mems sensors on handheld devices for indoor pedestrian navigation. IEEE Sensors Journal, 16(1), 224–234.
Zhuang, Y., Hua, L., Wang, Q., Cao, Y., & Thompson, J. S. (2019a). Visible light positioning and navigation using noise measurement and mitigation. IEEE Transactions on Vehicular Technology, 68(11), 11094–11106.
Zhuang, Y., Wang, Q., Shi, M., Cao, P., Qi, L., & Yang, J. (2019b). Low-power centimeter-level localization for indoor mobile robots based on ensemble Kalman smoother using received signal strength. IEEE Internet of Things Journal, 6(4), 6513–6522.
Zhuang, Y., Yang, J., Qi, L., Li, Y., Cao, Y., & El-Sheimy, N. (2019c). A pervasive integration platform of low-cost MEMS sensors and wireless signals for indoor localization. IEEE Internet of Things Journal, 5(6), 4616–4631.
Yuan Xu hopes his wife Chen Fu will soon get well again.
This work was supported by NSFC Grant 61803175, the Shandong Key R&D Program 2019JZZY021005, and the Mexican Consejo Nacional de Ciencia y Tecnología Project A1-S-10287, Grant CB2017-2018.
School of Electrical Engineering, University of Jinan, Jinan, China
Yuan Xu & Jing Cao
Department of Electronics Engineering, Universidad de Guanajuato, Salamanca, Mexico
Yuriy S. Shmaliy
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China
Yuan Zhuang
Shandong Beiming Medical Technology Co., Ltd, Jinan, China
Yuan Xu
Jing Cao
We all conceived the idea and contributed to the writing of the paper. All authors read and approved the final manuscript.
Correspondence to Yuan Zhuang.
This article does not contain any studies with human participants performed by any of the authors.
Xu, Y., Cao, J., Shmaliy, Y.S. et al. Distributed Kalman filter for UWB/INS integrated pedestrian localization under colored measurement noise. Satell Navig 2, 22 (2021). https://doi.org/10.1186/s43020-021-00053-z
Distributed filtering
Human localization
Ubiquitous Positioning, Indoor Navigation and Location-Based Services
11.3: Explain the Time Value of Money and Calculate Present and Future Values of Lump Sums and Annuities
Time Value of Money Fundamentals
Lump Sums and Annuities
Future Value of \(\$1\)
Future Value of an Ordinary Annuity
Present Value
Present Value of \(\$1\)
Annuity Table
Your mother gives you \(\$100\) cash for a birthday present, and says, "Spend it wisely." You want to purchase the latest cellular telephone on the market but wonder if this is really the best use of your money. You have a choice: You can spend the money now or spend it in the future. What should you do? Is there a benefit to spending it now as opposed to saving for later use? Does time have an impact on the value of your money in the future? Businesses are confronted with these questions and more when deciding how to allocate investment money. A major factor that affects their investment decisions is the concept of the time value of money.
The concept of the time value of money asserts that the value of a dollar today is worth more than the value of a dollar in the future. This is typically because a dollar today can be used now to earn more money in the future. There is also, typically, the possibility of future inflation, which decreases the value of a dollar over time and could lead to a reduction in economic buying power.
At this point, potential effects of inflation can probably best be demonstrated by a couple of examples. The first example is the Ford Mustang. The first Ford Mustang sold in 1964 for \(\$2,368\). Today's cheapest Mustang starts at a list price of \(\$25,680\). While a significant portion of this increase is due to additional features on newer models, much of the increase is due to the inflation that occurred between 1964 and 2019.
Similar inflation characteristics can be demonstrated with housing prices. After World War II, a typical small home often sold for between \(\$16,000\) and \(\$30,000\). Many of these same homes today are selling for hundreds of thousands of dollars. Much of the increase is due to the location of the property, but a significant part is also attributed to inflation. The annual inflation rate for the Mustang between 1964 and 2019 was approximately \(4.5\%\). If we assume that the home sold for \(\$16,500\) in 1948 and the price of the home in 2019 was about \(\$500,000\), that's an annual appreciation rate of almost \(5\%\).
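The annualized rates quoted above follow from the usual compound-growth relation, final = start × (1 + r)^years; a quick check (ours, using the figures from the text):

```python
# Quick check of the annualized growth rates quoted above.

def annual_rate(start, final, years):
    return (final / start) ** (1 / years) - 1

print(round(annual_rate(2_368, 25_680, 2019 - 1964) * 100, 1))    # ~4.4 (% per year, Mustang)
print(round(annual_rate(16_500, 500_000, 2019 - 1948) * 100, 1))  # ~4.9 (% per year, home)
```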
Today's dollar is also more valuable because there is less risk than if the dollar was in a long-term investment, which may or may not yield the expected results. On the other hand, delaying payment from an investment may be beneficial if there is an opportunity to earn interest. The longer payment is delayed, the more available earning potential there is. This can be enticing to businesses and may persuade them to take on the risk of deferment.
Businesses consider the time value of money before making an investment decision. They need to know what the future value is of their investment compared to today's present value and what potential earnings they could see because of delayed payment. These considerations include present and future values.
Before you learn about present and future values, it is important to examine two types of cash flows: lump sums and annuities.
A lump sum is a one-time payment or repayment of funds at a particular point in time. A lump sum can be either a present value or future value. For a lump sum, the present value is the value of a given amount today. For example, if you deposited \(\$5,000\) into a savings account today at a given rate of interest, say \(6\%\), with the goal of taking it out in exactly three years, the \(\$5,000\) today would be a present value-lump sum. Assume for simplicity's sake that the account pays \(6\%\) at the end of each year, and it also compounds interest on the interest earned in any earlier years.
In our current example, interest is calculated once a year. However, interest can also be calculated in numerous ways. Some of the most common interest calculations are daily, monthly, quarterly, or annually. One concept important to understand in interest calculations is that of compounding. Compounding is the process of earning interest on previous interest earned, along with the interest earned on the original investment.
Returning to our example, if \(\$5,000\) is deposited into a savings account for three years earning 6% interest compounded annually, the amount the \(\$5,000\) investment would be worth at the end of three years is \(\$5,955.08\) (\(\$5,000 × 1.06 = \$5,300\); \(\$5,300 × 1.06 = \$5,618\); \(\$5,618 × 1.06 = \$5,955.08\)). The \(\$5,955.08\) is the future value of \(\$5,000\) invested for three years at \(6\%\). More formally, future value is the amount to which either a single investment or a series of investments will grow over a specified time at a given interest rate or rates. The initial \(\$5,000\) investment is the present value. Again, more formally, present value is the current value of a single future investment or a series of investments for a specified time at a given interest rate or rates. Another way to phrase this is to say the \(\$5,000\) is the present value of \(\$5,955.08\) when the initial amount is invested at \(6\%\) for three years. The interest earned over the three-year period would be \(\$955.08\), and the remaining \(\$5,000\) would be the original deposit.
As shown in the example the future value of a lump sum is the value of the given investment at some point in the future. It is also possible to have a series of payments that constitute a series of lump sums. Assume that a business receives the following four cash flows. They constitute a series of lump sums because they are not all the same amount.
The company would be receiving a stream of four cash flows that are all lump sums. In some situations, the cash flows that occur each time period are the same amount; in other words, the cash flows are even each period. These types of even cash flows occurring at even intervals, such as once a year, are known as an annuity. The following figure shows an annuity that consists of four payments of \(\$12,000\) made at the end of each of four years.
The nature of cash flows—single sum cash flows, even series of cash flows, or uneven series of cash flows—have different effects on compounding.
Compounding can be applied in many types of financial transactions, such as funding a retirement account or college savings account. Assume that an individual invests \(\$10,000\) in a four-year certificate of deposit account that pays \(10\%\) interest at the end of each year (in this case 12/31). Any interest earned during the year will be retained until the end of the four-year period and will also earn \(10\%\) interest annually.
Figure \(\PageIndex{1}\): Compounding interest
Through the effects of compounding—earning interest on interest—the investor earned \(\$4,641\) in interest from the four-year investment. If the investor had removed the interest earned instead of reinvesting it in the account, the investor would have earned \(\$1,000\) a year for four years, or \(\$4,000\) interest (\(\$10,000 × 10\% = \$1,000\) per year \(× 4\) years \(= \$4,000\) total interest). Compounding is a concept that is used to determine future value (more detailed calculations of future value will be covered later in this section). But what about present value? Does compounding play a role in determining present value? The term applied to finding present value is called discounting.
Discounting is the procedure used to calculate the present value of an individual payment or a series of payments that will be received in the future based on an assumed interest rate or return on investment. Let's look at a simple example to explain the concept of discounting.
Assume that you want to accumulate sufficient funds to buy a new car and that you will need \(\$5,000\) in three years. Also, assume that your invested funds will earn \(8\%\) a year for the three years, and you reinvest any interest earned during the three-year period. If you wanted to take out adequate funds from your savings account to fund the three-year investment, you would need to invest \(\$3,969.16\) today and invest it in the account earning \(8\%\) for three years. After three years, the \(\$3,969.16\) would earn \(\$1,030.84\) and grow to exactly the \(\$5,000\) that you will need. This is an example of discounting. Discounting is the method by which we take a future value and determine its current, or present, value. An understanding of future value applications and calculations will aid in the understanding of present value uses and calculations.
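Both numeric examples above can be verified with a few lines of arithmetic; the sketch below (not part of the original text) reproduces the compound interest on the \(\$10,000\) deposit and the discounted deposit needed to reach \(\$5,000\):

```python
# Quick check: compounding $10,000 at 10% for four years, and discounting the
# $5,000 needed in three years at 8%.

principal, rate, years = 10_000, 0.10, 4
balance = principal
for _ in range(years):
    balance += balance * rate              # interest is retained and itself earns interest
print(round(balance - principal, 2))       # 4641.0 of compound interest (vs. 4000.0 simple)

target, rate, years = 5_000, 0.08, 3
deposit = target / (1 + rate) ** years     # discounting: today's deposit that grows to 5,000
print(round(deposit, 2))                   # 3969.16
print(round(target - deposit, 2))          # 1030.84 of interest earned over the three years
```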
There are benefits to investing money now in hopes of a larger return in the future. These future earnings are possible because of interest payments received as an incentive for tying up money long-term. Knowing what these future earnings will be can help a business decide if the current investment is worth the long-term potential. Recall, the future value (FV) as the value of an investment after a certain period of time. Future value considers the initial amount invested, the time period of earnings, and the earnings interest rate in the calculation. For example, a bank would consider the future value of a loan based on whether a long-time client meets a certain interest rate return when determining whether to approve the loan.
To determine future value, the bank would need some means to determine the future value of the loan. The bank could use formulas, future value tables, a financial calculator, or a spreadsheet application. The same is true for present value calculations. Due to the variety of calculators and spreadsheet applications, we will present the determination of both present and future values using tables. In many college courses today, these tables are used primarily because they are relatively simple to understand while demonstrating the material. For those who prefer formulas, the different formulas used to create each table are printed at the top of the corresponding table. In many finance classes, you will learn how to utilize the formulas. Regarding the use of a financial calculator, while all are similar, the user manual or a quick internet search will provide specific directions for each financial calculator. As for a spreadsheet application such as Microsoft Excel, there are some common formulas, shown in Table \(\PageIndex{1}\). In addition, Appendix 14.3 provides links to videos and tutorials on using specific aspects of Excel, such as future and present value techniques.
Table \(\PageIndex{1}\): Excel Formulas
Time Value Component
Excel Formula Shorthand
Excel Formula Detailed
Present Value Single Sum =PV =PV(Rate, N, Payment, FV)
Future Value Single Sum =FV =FV(Rate, N, Payment, PV)
Present Value Annuity =PV =PV(Rate, N, Payment, FV, Type)
Future Value Annuity =FV =FV(Rate, N, Payment, PV, Type)
Net Present Value =NPV =NPV(Rate, CF2, CF3, CF4) + CF1
Internal Rate of Return =IRR =IRR(Invest, CF1, CF2, CF3)
Rate = annual interest rate
N = number of periods
Payment = annual payment amount, entered as a negative number, use 0 when calculating both present value of a single sum and future value of a single sum
FV = future value
PV = current or present value
Type = 0 for regular annuity, 1 for annuity due
CF = cash flow for a period, thus CF1 – cash flow period 1, CF2 – cash flow period 2, etc.
Invest = initial investment entered as a negative number
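For readers who prefer code to spreadsheets, the sketch below mirrors these Excel formulas in plain Python using the standard compound-interest relations; the function names and sign conventions are ours, and payments are assumed to occur at the end of each period (an ordinary annuity, Type = 0):

```python
# Plain-Python mirror of the time-value formulas listed in Table 1 (our own sketch).

def fv_single(pv, rate, n):
    """Future value of a single sum: FV = PV * (1 + rate)^n."""
    return pv * (1 + rate) ** n

def pv_single(fv, rate, n):
    """Present value of a single sum: PV = FV / (1 + rate)^n."""
    return fv / (1 + rate) ** n

def fv_annuity(payment, rate, n):
    """Future value of an ordinary annuity."""
    return payment * ((1 + rate) ** n - 1) / rate

def pv_annuity(payment, rate, n):
    """Present value of an ordinary annuity."""
    return payment * (1 - (1 + rate) ** -n) / rate

def npv(rate, cf1, future_cfs):
    """Net present value: initial flow plus discounted later flows
    (the text's =NPV(Rate, CF2, CF3, CF4) + CF1 pattern)."""
    return cf1 + sum(cf / (1 + rate) ** t for t, cf in enumerate(future_cfs, start=1))
```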
Since we will be using the tables in the examples in the body of the chapter, it is important to know there are four possible tables, each used under specific conditions (Table \(\PageIndex{2}\)).
Table \(\PageIndex{2}\): Time Value of Money Tables
Table Heading
Future Value – Lump Sum Future Value of $1
Future Value – Annuity (even payment stream) Future Value of an Annuity
Present Value – Lump Sum Present Value of $1
Present Value – Annuity (even payment stream) Present Value of an Annuity
In the prior situation, the bank would use either the Future Value of \(\$1\) table or Future Value of an Ordinary Annuity table, samples of which are provided in Appendix 14.2. To use the correct table, the bank needs to determine whether the customer will pay them back at the end of the loan term or periodically throughout the term of the loan. The Future Value of \(\$1\) table is used if the customer will pay back at the end of the period; if the payments will be made periodically throughout the term of the loan, they will use the Future Value of an Annuity table. Choosing the correct table to use is critical for accurate determination of the future value. The application in other business matters is the same: a business needs to also consider if they are making an investment with a repayment in one lump sum or in an annuity structure before choosing a table and making the calculation. In the tables, the columns show interest rates (\(i\)) and the rows show periods (\(n\)). The interest columns represent the anticipated interest rate payout for that investment. Interest rates can be based on experience, industry standards, federal fiscal policy expectations, and risk investment. Periods represent the number of years until payment is received. The intersection of the expected payout years and the interest rate is a number called a future value factor. The future value factor is multiplied by the initial investment cost to produce the future value of the expected cash flows (or investment return).
A lump sum payment is the present value of an investment when the return will occur at the end of the period in one installment. To determine this return, the Future Value of \(\$1\) table is used.
For example, you are saving for a vacation you plan to take in \(6\) years and want to know how much your initial savings will yield in the future. You decide to place \(\$4,500\) in an investment account now that yields an anticipated annual return of \(8\%\). Looking at the FV table, \(n = 6\) years, and \(i = 8\%\), which return a future value factor of \(1.587\). Multiplying this factor by the initial investment amount of \(\$4,500\) produces \(\$7,141.50\). This means your initial savings of \(\$4,500\) will be worth approximately \(\$7,141.50\) in \(6\) years.
Figure \(\PageIndex{2}\): Sample Future Value
An ordinary annuity is one in which the payments are made at the end of each period in equal installments. A future value ordinary annuity looks at the value of the current investment in the future, if periodic payments were made throughout the life of the series.
For example, you are saving for retirement and expect to contribute \(\$10,000\) per year for the next \(15\) years to a 401(k) retirement plan. The plan anticipates a periodic interest yield of \(12\%\). How much would your investment be worth in the future meeting these criteria? In this case, you would use the Future Value of an Ordinary Annuity table. The relevant factor where \(n = 15\) and \(i = 12\%\) is \(37.280\). Multiplying the factor by the amount of the cash flow yields a future value of these installment savings of (\(37.280 × \$10,000\)) \(\$372,800\). Therefore, you could expect your investment to be worth \(\$372,800\) at the end of \(15\) years, given the parameters.
Figure \(\PageIndex{3}\): Future value of an ordinary annuity
Let's now examine how present value differs from future value in use and computation.
Example \(\PageIndex{1}\): Determining Future Value
Determine the future value for each of the following situations. Use the future value tables provided in Appendix 14.2 when needed, and round answers to the nearest cent where required.
You are saving for a car and you put away \(\$5,000\) in a savings account. You want to know how much your initial savings will be worth in \(7\) years if you have an anticipated annual interest rate of \(5\%\).
You are saving for retirement and make contributions of \(\$11,500\) per year for the next \(14\) years to your 403(b) retirement plan. The interest rate yield is \(8\%\).
Use FV of \(\$1\) table. Future value factor where \(n = 7\) and \(i = 5\) is \(1.407. 1.407 × 5,000 = \$7,035\).
Use FV of an ordinary annuity table. Future value factor where \(n = 14\) and \(i = 8\) is \(24.215. 24.215 × 11,500 = \$278,472.50\).
It is impossible to compare the value or potential purchasing power of the future dollar to today's dollar; they exist in different times and have different values. Present value (PV) considers the future value of an investment expressed in today's value. This allows a company to see if the investment's initial cost is more or less than the future return. For example, a bank might consider the present value of giving a customer a loan before extending funds to ensure that the risk and the interest earned are worth the initial outlay of cash.
Similar to the Future Value tables, the columns show interest rates (\(i\)) and the rows show periods (\(n\)) in the Present Value tables. Periods represent how often interest is compounded (paid); that is, periods could represent days, weeks, months, quarters, years, or any interest time period. For our examples and assessments, the period (\(n\)) will almost always be in years. The intersection of the expected payout years (\(n\)) and the interest rate (\(i\)) is a number called a present value factor. The present value factor is multiplied by the initial investment cost to produce the present value of the expected cash flows (or investment return).
\[\text { Present Value }=\text { Present Value Factor } \times \text { Initial Investment cost }\]
The two tables provided in Appendix 14.2 for present value are the Present Value of \(\$1\) and the Present Value of an Ordinary Annuity. As with the future value tables, choosing the correct table to use is critical for accurate determination of the present value.
When referring to present value, the lump sum return occurs at the end of a period. A business must determine if this delayed repayment, with interest, is worth the same as, more than, or less than the initial investment cost. If the deferred payment is more than the initial investment, the company would consider an investment.
To calculate present value of a lump sum, we should use the Present Value of \(\$1\) table. For example, you are interested in saving money for college and want to calculate how much you would need put in the bank today to return a sum of \(\$40,000\) in \(10\) years. The bank returns an interest rate of \(3\%\) per year during these \(10\) years. Looking at the PV table, \(n = 10\) years and \(i = 3\%\) returns a present value factor of \(0.744\). Multiplying this factor by the return amount of \(\$40,000\) produces \(\$29,760\). This means you would need to put in the bank now approximately \(\$29,760\) to have \(\$40,000\) in \(10\) years.
Figure \(\PageIndex{4}\): Present value sample table
As mentioned, to determine the present value or future value of cash flows, a financial calculator, a program such as Excel, knowledge of the appropriate formulas, or a set of tables must be used. Though we illustrate examples in the text using tables, we recognize the value of these other calculation instruments and have included chapter assessments that use multiple approaches to determining present and future value. Knowledge of different approaches to determining present and future value is useful as there are situations, such as having fractional interest rates, \(8.45\%\) for example, in which a financial calculator or a program such as Excel would be needed to accurately determine present or future value.
As discussed previously, annuities are a series of equal payments made over time, and ordinary annuities pay the equal installment at the end of each payment period within the series. This can help a business understand how their periodic returns translate into today's value.
For example, assume that Sam needs to borrow money for college and anticipates that she will be able to repay the loan in \(\$1,200\) annual payments for each of \(5\) years. If the lender charges \(5\%\) per year for similar loans, how much cash would the bank be willing to lend Sam today? In this case, she would use the Present Value of an Ordinary Annuity table in Appendix 14.2, where \(n = 5\) and \(i = 5\%\). This yields a present value factor of \(4.329\). The current value of the cash flow each period is calculated as \(4.329 × \$1,200 = \$5,194.80\). Therefore, Sam could borrow \(\$5,194.80\) now given the repayment parameters.
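The table factors used in the worked examples above are simply the standard formulas evaluated at the given rate and number of periods; the short check below (ours, not from the text) reproduces them, with small rounding differences due to the three-decimal factors printed in the tables:

```python
# Reproduce the table factors used in the worked examples.

fv_factor  = lambda i, n: (1 + i) ** n               # Future Value of $1
fva_factor = lambda i, n: ((1 + i) ** n - 1) / i     # Future Value of an ordinary annuity
pv_factor  = lambda i, n: (1 + i) ** -n              # Present Value of $1
pva_factor = lambda i, n: (1 - (1 + i) ** -n) / i    # Present Value of an ordinary annuity

print(round(fv_factor(0.08, 6), 3))     # 1.587  -> 4,500 * 1.587   = 7,141.50
print(round(fva_factor(0.12, 15), 3))   # 37.28  -> 10,000 * 37.280 = 372,800
print(round(pv_factor(0.03, 10), 3))    # 0.744  -> 40,000 * 0.744  = 29,760
print(round(pva_factor(0.05, 5), 3))    # 4.329  -> 1,200 * 4.329   = 5,194.80
```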
Figure \(\PageIndex{4}\): Present value of an ordinary annuity
Our focus has been on examples of ordinary annuities (annuities due and other more complicated annuity examples are addressed in advanced accounting courses). With annuities due, the cash flow occurs at the start of the period. For example, if you wanted to deposit a lump sum of money into an account and make monthly rent payments starting today, the first payment would be made the same day that you made the deposit into the funding account. Because of this timing difference in the withdrawals from the annuity due, the process of calculating annuity due is somewhat different from the methods that you've covered for ordinary annuities.
Example \(\PageIndex{2}\): Determining Present Value
Determine the present value for each of the following situations. Use the present value tables provided in Appendix 14.2 when needed, and round answers to the nearest cent where required.
You are saving for college and you want to return a sum of \(\$100,000\) in \(12\) years. The bank returns an interest rate of \(5\%\) after these \(12\) years.
You need to borrow money for college and can afford a yearly payment to the lending institution of \(\$1,000\) per year for the next \(8\) years. The interest rate charged by the lending institution is \(3\%\) per year.
Use PV of \(\$1\) table. Present value factor where \(n = 12\) and \(i = 5\) is \(0.557. 0.557 × \$100,000 = \$55,700\).
Use PV of an ordinary annuity table. Present value factor where \(n = 8\) and \(i = 3\) is \(7.020. 7.020 × \$1,000 = \$7,020\).
For a lucky few, winning the lottery can be a dream come true and the option to take a one-time payout or receive payments over several years does not seem to matter at the time. This lottery payout calculator shows how time value of money may affect your take-home winnings.
Correlating mechanical work with energy consumption during gait throughout pregnancy
Zarko Krkeljas1 &
Sarah Johanna Moss1
Measures of mechanical work may be useful in evaluating efficiency of walking during pregnancy. Various adaptations in the body during pregnancy lead to altered gait, consequently contributing to the total energy cost of walking. Measures of metabolic energy expenditure may not be reliable for measuring energetic cost of gait during pregnancy as pregnancy results in numerous metabolic changes resulting from foetal development. Therefore, the aim of this study is to determine if mechanical work prediction equations correlate with the metabolic energy cost of gait during pregnancy.
Thirty-five (35) women (27.5 ± 6.1 years) at different weeks of gestation gave informed consent for participation in the study. Gas exchange and gait data were recorded while walking at a fixed, self-selected walking speed. External work (Wext) was estimated assuming no energy transfer between segments, while internal work (Wint) assumed energy transfer between segments. The total energy of the body (Wtot) was then calculated from the segmental changes relative to the surroundings and relative to the centre of mass of the whole body. Equations for mechanical work were correlated with net and gross O2 rate and O2 cost.
External, internal and total mechanical energy showed a significant positive relationship with gross O2 rate (r = 0.48, r = 0.35 and r = 0.49, respectively) and gross O2 cost (r = 0.42, r = 0.70 and r = 0.62, respectively). In contrast, external, internal and total mechanical energy had no significant relationship with net O2 rate (r = 0.19, r = 0.24 and r = 0.24, respectively). Net O2 cost was significantly related to Wext (r = 0.49), Wint (r = 0.66) and Wtot (r = 0.62). Energy recovery improved with an increase in gait speed.
Measures of mechanical work, when adjusted for resting energy expenditure and walking speed, may be useful in comparing metabolic energy consumption between women during pregnancy, or in assessing gait changes of the same individual throughout pregnancy.
During pregnancy energy required for walking may increase significantly due to an increase in weight [1–3]. Hence walking, as a common activity of daily living, may contribute to an increase in total energy expenditure during pregnancy. Previous studies indicate high variability in gait during pregnancy due to pregnancy-related physical and physiological changes [4], which may result in an increase in mechanical work. The ability to predict changes in mechanical energy based on pregnancy-related adaptations to gait may provide a better understanding of energy balance during pregnancy.
Current research in pregnancy is largely focused on increased energy demands stemming from foetal development, hormonal changes, and changes in physical activity [2, 5–7]. Any increase in energy expenditure during pregnancy has been attributed largely to an increase in resting metabolic rate (RMR) [2], while any increase in the active energy expenditure (AEE) has been attributed primarily to the mass gain during pregnancy [1–3]. This indicates that no other factors have currently been identified that contribute to the energy cost, and that relative to mass, AEE remains unchanged throughout pregnancy. In addition, difficulties in metabolic analysis of gait during pregnancy stem mostly from the large inter-subject variability in physiological changes resulting from pregnancy [4]. These include changes in dietary-induced thermogenesis, pre-pregnancy malnutrition, or energy cost to synthesize new placental tissue.
However, studies examining gait biomechanics during pregnancy report that walking speed decreases significantly throughout pregnancy [1, 4], a behavioural change indicating that women during pregnancy are more comfortable at lower velocities. At lower velocities, vertical excursions of the centre of gravity (COG) decrease, which contributes to the increase in energy expenditure [8–11]. In addition, step width tends to increase with pregnancy, a change consistent with the need for increased balance and an emphasis on safety, and characterized by a change in the path of the COG [12–17]. This suggests a trade-off mechanism for gait during pregnancy, where women choose to walk slower with wider steps, which may increase walking energy expenditure and thereby contribute to AEE. This notion contradicts the inherent energy-sparing nature of pregnancy [1, 18, 19]. Hence, measures of mechanical work would permit a simplified evaluation of changes in walking patterns during pregnancy that may reduce the energy expenditure of walking. However, measures of mechanical work have inherent weaknesses stemming from assumptions about energy transfers, the relative metabolic cost of positive and negative work, stored elastic energy, co-activation of antagonist muscles, and isometric work [20]. As a result, the use of mechanical power output in clinical populations tends to be equivocal. Furthermore, the application of mechanical power output to self-paced walking during pregnancy is also lacking.
Therefore, the purpose of this study was to examine the ability of total body mechanical power to explain the variability in the metabolic energy cost of self-paced walking during pregnancy in a South African cohort of women.
This study is ancillary to a larger Habitual Activity Patterns during Pregnancy (HAPPY)-study that investigated the influence of objectively determined physical activity patterns on various pregnancy parameters. Thirty-five (35) pregnant South African women at different stages of pregnancy, mean age 27 years (S.D. = 6.1) were recruited, by advertisements in the local press, the consulting rooms of local gynaecologists, and a local health clinic in Potchefstroom, North West Province, South Africa. For participation in the study, women had to be healthy, between ages 18 and 40 years, without mental or physical disability, and able to complete the test protocol. Participants were excluded from the study, if they presented with physical limitations that may prevent movement, the inability to complete test procedures, or were considered high-risk pregnancy according to ACSM guidelines [21]. The study was approved by the Human Research Ethics committee of the North-West University (NWU-0044-10-A1). Participants gave written consent for participation in the study before data collection. A translator was available in the case of language barriers. Participants were informed that they are free to withdraw from the study at any point. In addition, at the day of motion analyses testing participants were free to withdraw from this specific protocol. An opportunity to ask questions was also given.
Participants' RMR was assessed using the fraction of oxygen in expired gases (Cosmed Fitmate, Italy), applying standard metabolic formulas, while energy expenditure was calculated using a fixed respiratory quotient (RQ) of 0.85 [22]. Before the Fitmate was attached, participants lay still for 5 min on their left side to ensure a resting state. Prior to gas collection, participants were connected to the Fitmate for no longer than two (2) minutes so that all dead space and residual gases were flushed before data collection. Following this initial 2-min preparation period, RMR gas exchange was collected for 16 min, per the Fitmate RMR protocol. Participants were instructed not to perform any exercise for 24 h prior to testing and to fast for at least 10 h prior to the measurement. The Fitmate was calibrated before each participant's measurement.
Walking energy expenditure was measured using the portable K4b2 (Cosmed, Italy) system. Prior to each measure, the system was calibrated for O2 concentration, gas volume and delay, according to the manufacturer's recommendations. Participants walked at a self-selected pace along a 30-m oval track in the laboratory until steady state was reached. Steady state was defined as an HR variation of no more than ±3 bpm and less than 5% variation in RQ [23], during which an RQ ≤ 0.99 had to be maintained (indicative of reliance on the oxidative system rather than approaching fatigue) [24]. Walking metabolic rate was averaged over the full minute after steady state was reached. Although participants were instructed to rest at any point if they felt tired before reaching steady state, no participant exercised this option. All the parameters were collected at 2-s intervals. The following parameters were extracted: walking volume of oxygen (VO2), respiratory quotient (RQ), resting metabolic rate (RMR) (kcal/day), heart rate (HR) (bpm), and gross energy expenditure during walking per minute (GEEw).
Three dimensional (3-D) gait analysis was collected using eight Oqus 300+ cameras from Qualisys Motion Analysis System (Qualisys, Sweden) and filmed at 220 Hz. Before each gait data analysis, a 90-s wand (750 mm) calibration was completed, with a long arm of a L-shaped reference structure. Participants were dressed in appropriate clothing, cycling shorts and a tank top which would allow marker placement on the skin and minimize any artefacts from clothing movement. A full-body CAST/IK/HH gait model was used, requiring 12-mm self-adhesive reflective markers to be applied on the following anatomical landmarks (right and left): heel at the insertion of the Achilles tendon, head of the first, second and fifth metatarsal, medial and lateral malleoli, lower leg (shank) cluster consisting of 4 markers, lateral and medial knee epicondyles, thigh cluster consisting of 4 markers, greater trochanter, anterior and posterior superior iliac spine, inferior angle of scapula, thoracic vertebrae (T10), cervical vertebrae (C7), radial and ulnar styloid processes, humeral lateral epicondyle, humerus, acromion. This full marker set was used for a static trial only, which required the participant to stand still for 5 s while filmed in the centre of the calibrated area. Static trials were used to create a dynamic model for gait analysis. Once the static trial was completed, only dynamic markers were left, hence the markers on the medial and lateral malleoli, knee epicondyles, and the trochanter were removed.
During the dynamic trials, each participant was instructed to walk in a straight line at a self-selected pace along a 15-m laboratory walkway embedded with 4 AMTI BP400600 force plates (AMTI, MA, USA). Video and ground reaction force data were collected simultaneously for five seconds in the middle portion of the runway. Only trials in which the participant's foot landed entirely on a force plate for three consecutive steps (i.e. full stride), were considered for inclusion in the data set. The participants continued walking until 3 trials of full strides were completed. The participants were instructed to stop and rest as long as necessary, should they feel tired at any stage of the gait analyses. None of the participants requested a rest period during gait measures.
During walking trials, the data were inspected for gaps in marker trajectories. The default gap-fill function was applied for gaps of no more than 10 frames using NURB spline interpolation. No walking data trials analysed had gaps larger than 10 frames. Once the walking trials were trimmed to include completed strides, the data were exported to Visual 3D-motion analysis software for processing, through which segmental and whole-body kinetic data and gait kinematic data were calculated. The kinetic and kinematic parameters were low-pass filtered with a bidirectional Butterworth with a 10-Hz cut-off frequency to remove noise from the differentiation process with zero-phase distortion [13, 25].
Metabolic energy expenditure is generally reported in terms of O2 consumption (O2 rate), the millilitres of O2 consumed per kilogram body mass per minute (ml/kg/min). However, to express the physiological work (O2 cost) for a given task, the O2 rate is also normalized for speed to give the physiological work per unit distance (ml/kg/m), which is used to depict energy efficiency [10]. Therefore, the total O2 cost of steady-state walking might be affected by an increase in O2 rate or by a change in walking speed, in which case the participant would not experience any physical differences. To reduce the impact of changes in resting metabolic rate (RMR), a non-dimensional parameter (NN) was used that subtracts the resting energy expenditure from the gross (total) energy expenditure during walking, leaving only the energy cost required for walking, with non-dimensional scaling to account for stature [26]. The net O2 cost may also be less sensitive to changes in walking speed [11]. While this method theoretically accounts for pregnancy-induced changes, there are no articles addressing this normalization for gait in pregnancy.
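As a simple illustration of these normalizations (values are invented, not participant data), the gross O2 rate can be converted to a gross O2 cost by dividing by speed, and to net values by first subtracting the resting rate; the additional non-dimensional scaling for stature described in [26] is omitted here:

```python
# Illustration of the O2 rate / O2 cost normalizations; all numbers are assumed.

vo2_gross = 12.0        # gross O2 rate during walking [ml/kg/min]
vo2_rest = 3.5          # resting O2 rate [ml/kg/min]
speed = 1.1 * 60        # self-selected walking speed, 1.1 m/s expressed in m/min

o2_cost_gross = vo2_gross / speed     # gross O2 cost [ml/kg/m]
vo2_net = vo2_gross - vo2_rest        # net O2 rate  [ml/kg/min]
o2_cost_net = vo2_net / speed         # net O2 cost  [ml/kg/m]
```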
Mechanical work and total energy expenditure are explained in detail in Bennett et al. [16] and Willems et al. [27]; hence, only a brief summary is provided here. External (Wext) and internal (Wint) work were calculated from the COM excursion, considering energy exchange as
$$ {\mathrm{W}}_{\mathrm{ext}}={\displaystyle \sum_{i=1}^N\ \left|\varDelta \mathrm{E}\mathrm{p}+\varDelta \mathrm{E}\mathrm{k}\right|}, $$
and with no energy exchange as
$$ {\mathrm{W}}_{\mathrm{int}}={\displaystyle \sum_{i=1}^N\ \left(\left|\varDelta \mathrm{E}\mathrm{p}\right|+\left|\varDelta \mathrm{E}\mathrm{k}\right|\right)}, $$
where ∆Ep and ∆Ek are changes in potential and kinetic energy, respectively [16]. Then, the energy recovery factor (R) representing the percentage of mechanical energy recovered via exchange between kinetic and potential energy in the COM movement is computed as [16]:
$$ \mathrm{R}=100 \times \frac{\left({\mathrm{W}}_{\mathrm{int}}-{\mathrm{W}}_{\mathrm{ext}}\right)}{{\mathrm{W}}_{\mathrm{int}}} $$
Further, the total energy of the whole body was calculated from the segmental movement relative to the whole-body COM (COMwb) at any instant in time as
$$ {\mathrm{E}}_{\mathrm{tot},\mathrm{w}\mathrm{b}}=\mathrm{M}\mathrm{g}\mathrm{H}+\frac{1}{2}\mathrm{M}{{\mathrm{V}}_{\mathrm{cg}}}^2+{\displaystyle \sum_{i=1}^N\ \left(\frac{1}{2}{\mathrm{m}}_{\mathrm{i}}{{\mathrm{v}}_{\mathrm{i}}}^2+\frac{1}{2}{\mathrm{m}}_{\mathrm{i}}{{\mathrm{K}}_{\mathrm{i}}}^2{\upomega_{\mathrm{i}}}^2\right)}, $$
where M is the total body mass; g the acceleration due to gravity; H is height of the COM; Vcg the velocity of the COG; mi and vi are mass and velocity of the ith segment relative to the surrounding; ωi and Ki are the angular velocity and the radius of gyration of the ith segment around its centre of mass [27].
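A minimal sketch of how these work and recovery quantities can be computed from a COM energy time series is given below; the increment-summing convention follows the equations above, and the Ep/Ek traces are purely hypothetical.

```python
import numpy as np

def mechanical_work_and_recovery(Ep, Ek):
    """Compute Wext, Wint, and the energy recovery factor (%) from time series
    of COM potential (Ep) and kinetic (Ek) energy, in joules.
    """
    dEp = np.diff(Ep)
    dEk = np.diff(Ek)
    W_ext = np.sum(np.abs(dEp + dEk))            # energy exchange allowed
    W_int = np.sum(np.abs(dEp) + np.abs(dEk))    # no energy exchange
    R = 100.0 * (W_int - W_ext) / W_int          # recovery factor, %
    return W_ext, W_int, R

# Hypothetical, roughly out-of-phase Ep / Ek traces over one stride
t  = np.linspace(0, 1, 221)
Ep = 600 + 5 * np.cos(2 * np.pi * 2 * t)   # potential energy (J)
Ek = 55  + 5 * np.sin(2 * np.pi * 2 * t)   # kinetic energy (J)
print(mechanical_work_and_recovery(Ep, Ek))
```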
A one-way ANOVA was used to assess differences between trimesters in the participants' descriptive parameters. Simple linear regression was used to assess the relationship between metabolic energy and mechanical work. All analyses were performed using SPSS v.21.0 (IBM Corp., Armonk, NY), with significance set at p < 0.05.
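The study's analyses were run in SPSS; as an equivalent illustration of the simple linear regression step, the snippet below fits metabolic cost against mechanical work using SciPy, with hypothetical values standing in for the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired observations (one row per walking trial)
net_o2_cost = np.array([0.10, 0.12, 0.11, 0.14, 0.13, 0.15])   # ml/kg/m
w_tot       = np.array([0.55, 0.62, 0.58, 0.70, 0.66, 0.74])   # J/kg/m

res = stats.linregress(w_tot, net_o2_cost)
print(f"r = {res.rvalue:.2f}, p = {res.pvalue:.3f}, slope = {res.slope:.3f}")
```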
The participant demographics by trimester are given in Table 1. Some of the participants completed the analysis in multiple trimesters, totalling 44 measurements over the course of pregnancy. There were no differences in age or height between participants in each trimester, although body mass increased significantly throughout pregnancy as the foetus grew.
Table 1 Descriptive statistics of the participants by trimesters
Table 2 Regression equation for metabolic energy expenditure and mechanical work
Prediction equations for external work (Wext), internal work (Wint) and total work (Wtot) showed no significant relationship with net O2 expenditure of walking during pregnancy (p = 0.19, p = 0.10 and p = 0.11, respectively; Fig. 1). However, once normalized for speed to express energy efficiency (net O2 cost) (Fig. 2), the prediction equations (Table 2) showed a moderate but significant relationship for external work (Wext) (r = 0.49, p ≤ 0.01), internal work (Wint) (r = 0.66, p < 0.01), and total work (Wtot) (r = 0.63, p ≤ 0.01). Although net O2 expenditure did not demonstrate a significant relationship with mechanical work, adding REE resulted in a significant relationship with mechanical work. Gross O2 expenditure during walking showed a weak to moderate but significant relationship with Wint (r = 0.60, p ≤ 0.01), a moderate relationship with Wext (r = 0.42, p ≤ 0.01), as well as with Wtot (r = 0.71, p ≤ 0.01). When REE was considered in the walking energy cost, the relationship with mechanical work, as shown in Fig. 2, did not change significantly. Body mass was also the most significant (r = 0.79, p ≤ 0.01) contributor to the variance in gross O2 consumption (VO2), explaining 33.3 %, followed by walking speed (10.0 %, p ≤ 0.05). However, once normalized for REE, walking speed explained 26.1 % of the net energy expenditure (p ≤ 0.01). The energy recovery factor (R) for the pregnant population in this study was 58.1 ± 3.2 %. The exchange of potential and kinetic energies, a determining factor for energy recovery, was largely affected by walking speed. Figure 3 demonstrates a significant relationship between speed and energy recovery (r = 0.61, p ≤ 0.01), indicating that during pregnancy the energy recovered may improve with an increase in speed. However, in this study the pregnant women decreased their walking speed later in pregnancy. The changes observed indicate an increase in walking economy with an increase in walking speed.
Fig. 1 Relationship between metabolic rate and mechanical work of walking during pregnancy
Fig. 2 Relationship between metabolic cost and mechanical work of walking during pregnancy
Fig. 3 Relationship between mechanical energy recovery and walking speed throughout pregnancy
This study addressed the relationship between metabolic energy expenditure and mechanical work measures of gait throughout pregnancy. Given the significant findings of studies on gait in pregnancy, as well as on energy expenditure during pregnancy, researchers may be able to determine whether changes in gait during pregnancy contribute to the overall energy expenditure, and whether changes in gait may be used as an energy-sparing strategy during pregnancy.
The study found significant relationships of Wint, Wext, and Wtot with net and gross O2 cost and with gross O2 rate, but not with net O2 rate. The strength of these relationships, however, differed in magnitude and depended on the resting energy expenditure and on the normalization for walking speed. The changes in the metabolic system due to foetal development affect the resting energy expenditure, and consequently the total energy expenditure during pregnancy. This change cannot be accounted for by mechanical work measures, and metabolic changes in resting energy expenditure are also generally based on estimates. In this study, normalizing gross O2 cost for resting energy expenditure resulted in a stronger correlation, which suggests that normalization for resting energy expenditure does not decrease the internal consistency of the data [23] and reduces the variability arising from the metabolic changes of foetal development.
Measuring metabolic energy consumption relative to distance or time travelled may also affect the strength of the correlation [10, 26, 28]. Burdett et al. [28] found that the correlation of mechanical work with metabolic energy consumption per meter walked (ml/kg/m) was weaker than when metabolic consumption was measured per unit time (ml/kg/s). However, Schwartz et al. [26] and Waters and Mulroy [10] demonstrated that normalizing for walking speed gives a better indication of walking efficiency. In this study, once the metabolic energy consumption was normalized for speed, the correlation improved by 62.5 % for Wtot and by 4.3 % for Wext, and more than doubled for Wint. This increase may be rooted in the relationship between energy recovery (i.e. energy transfer) and walking speed.
Optimized energy transfer during walking would be inversely related to metabolic cost [29], and the results of this study are consistent with this notion. There was a significant positive relationship between the percentage of energy recovered and walking speed (Fig. 3). In addition, women in this study walked at speeds significantly lower than the reported optimum for walking efficiency, at which energy recovery is highest [10, 11, 30]. Furthermore, energy transfer demonstrated a significant inverse correlation with gross O2 cost (r = −0.44, p = 0.002) and net O2 cost (r = −0.33, p = 0.023). These results are in agreement with the previous findings of Willems et al. [27] and Olney et al. [25].
Therefore, measures of mechanical work, when adjusted for resting energy expenditure and walking speed, may be useful for comparing the metabolic energy consumption of gait between women during pregnancy, or for longitudinal assessment of the same individual throughout pregnancy. Although mechanical work may not account for the variability in metabolic cost stemming from foetal development, normalizing for REE and walking speed may allow mechanical work to predict the changes in gait during pregnancy.
Byrne NM, Groves AM, McIntyre HD, Callaway LK. Changes in resting and walking energy expenditure and walking speed during pregnancy in obese women. Am J Clin Nutr. 2011;94(6):819–30.
Melzer K, Schutz Y, Boulvain M, Kayser B. Pregnancy-related changes in activity energy expenditure and resting metabolic rate in Switzerland. Eur J Clin Nutr. 2009;63(10):1185–91.
Van Raaij JM, Schonk C, Vermaat-Miedema SH, Peek ME, Hautvast JG. Energy cost of walking at a fixed pace before, during, and after pregnancy. Am J Clin Nutr. 1990;51:158–61.
Wu WH, Meijer OG, Jutte CP, Uegaki K, Lamoth CJC, de Wolf GS, et al. Gait coordination in pregnancy: transverse pelvic and thoracic rotations and their relative phase. Clin Biomech. 2002;19:480–8.
Butte NF, Wong WW, Treuth MS, Ellis KJ, Smith EOB. Energy requirements during pregnancy based on total energy expenditure and energy deposition. Am J Clin Nutr. 2004;79(1):1078–87.
Melzer K, Schutz Y, Kayser B. Normalization of basal metabolic rate for differences in body weight in pregnant women. Eur J Obst Gynecol Reprod Biol. 2011;159(2):480–1.
Löf M. Physical activity pattern and activity energy expenditure in healthy pregnant and non-pregnant Swedish women. Eur J Clin Nutr. 2011;65(12):1295–301.
Gordon KE, Ferris DP, Kuo AD. Metabolic and mechanical energy costs of reducing vertical center of mass movement during gait. Arch Phys Med Rehabil. 2009;90(1):136–44.
Ortega JD, Farley CT. Minimizing center of mass vertical movement increases metabolic cost in walking. J Appl Physiol. 2005;99(6):2099–107.
Waters RL, Mulroy S. The energy expenditure of normal and pathologic gait. Gait Posture. 1999;9:207–31.
Baker R, Hausch A, Mcdowell B. Reducing the variability of oxygen consumption measurements. Gait Posture. 2001;13:202–9.
Dumas GA, Reid JG, Wolfe LA, Griffin MP. Exercise, posture, and back pain during pregnancy. Part 2. Exercise and back pain. Clin Biomech. 1995;10(2):104–9.
Wu G, Siegler S, Allard P, Kirtley C, Leardini A, Rosenbaum D, et al. ISB recommendation on definitions of joint coordinate system of various joints for the reporting of human joint motion - part I: ankle, hip, and spine. J Biomech. 2002;35:543–8.
Whittlesey SN, van Emmerik RE, Hamill J. The swing phase of human walking is not a passive movement. Mot Control. 2000;4(3):273–92.
Foti T, Davids JR, Bagley A. A biomechanical analysis of gait during pregnancy. J Bone Jt Surg. 2000;82-A(5):625–32.
Bennett BC, Abel MF, Wolovick A, Franklin T, Allaire PE, Kerrigan DC. Center of mass movement and energy transfer during walking in children with cerebral palsy. Arch Phys Med Rehabil. 2005;86(11):2189–94.
Kerrigan DC, Della CU, Marciello M, Riley PO. A refined view of the determinants of gait: significance of heel rise. Arch Phys Med Rehabil. 2000;81(8):1077–80.
Poppitt SD, Prentice AM, Jequier E, Schutz Y, Whitehead RG. Evidence of energy sparing in Gambian women during pregnancy: a longitudinal study using whole-body calorimetry. Am J Clin Nutr. 1993;57:353–64.
Prentice AM, Goldberg GR, Davies HL, Murgatroyd PR, Scott W. Energy-sparing adaptations in human pregnancy assessed by whole-body calorimetry. Br J Nutr. 1989;62:5–22.
Unnithan VB, Dowling J, Frost G, Bar-or O. Role of mechanical power estimates in the O2 cost of walking in children with cerebral palsy. Med Sci Sports Exerc. 1999;31(12):1703–8.
ACSM (American College of Sports Medicine). ACSM's guidelines for exercise testing and prescription. 9th ed. Philadelphia: Lippincott, Williams and Wilkins; 2013.
Nieman DC, Austin MD, Benezra L, Pearce S, McInnis T, Unick J, et al. Validation of Cosmed's FitMate in measuring oxygen consumption and estimating resting metabolic rate. Res Sports Med. 2006;14(2):89–96.
Thomas SS, Buckon CE, Schwartz MH, Sussman MD, Aiona MD. Walking energy expenditure in able-bodied individuals: a comparison of common measures of energy efficiency. Gait Posture. 2009;29(4):592–6.
Schutz Y. Dietary fat, lipogenesis and energy balance. Physiol Behav. 2004;83(4):557–64.
Olney SJ, Macphail HEA, Hedden DM, Boyce WF. Work and power in hemiplegic cerebral palsy gait. J Phys Ther. 1990;70(7):431–8.
Schwartz MH, Koop SE, Bourke JL, Baker R. A nondimensional normalization scheme for oxygen utilization data. Gait Posture. 2006;24(1):14–22.
Willems PA, Cavagna GA, Heglund NC. External, internal and total work in human locomotion. J Exp Biol. 1995;198:379–93.
Burdett RG, Skrinar GS, Simon SR. Comparison of mechanical work and metabolic energy consumption during normal gait. J Orthop Res. 1983;1(1):63–72.
Winter DA. Human balance and posture control during standing and walking. Gait Posture. 1995;3:193–214.
Abe D, Muraki S, Yasukouchi A. Ergonomic effects of load carriage on the upper and lower back on metabolic energy cost of walking. Appl Ergon. 2008;39(3):392–8.
We would like to acknowledge our co-worker, Abie van Oort, for assisting with the recruitment of the participants. In addition, we would like to extend our gratitude to the participants of the HAPPY-study and the clinic staff for assisting and supporting the recruitment of participants and translation when needed. We also convey our gratitude to The South African Sugar Association and National Research Foundation's Swiss South African Joint Research Programme (UID: 78606) of the HAPPY project. Funding was specific for the collection of physical activity data and resting metabolic rate.
Physical Activity, Sport and Recreation Research Focus Area, Private Bag x6001, Internal Box 481, North-West University, Potchefstroom, 2520, South Africa
Zarko Krkeljas & Sarah Johanna Moss
Zarko Krkeljas
Sarah Johanna Moss
Correspondence to Zarko Krkeljas.
ZK carried out the walking energy expenditure study, designed the protocol, collected data, and drafted the manuscript. SJM is the principal investigator of the larger conceptual project, the HAPPY-study, participated in critically revising the manuscript, and gave the final approval for the version to be published. All authors read and approved the final manuscript.
Krkeljas, Z., Moss, S.J. Correlating mechanical work with energy consumption during gait throughout pregnancy. BMC Pregnancy Childbirth 15, 303 (2015). https://doi.org/10.1186/s12884-015-0744-4
Metabolic energy
Mechanical work
Commentary to: a cross-validation-based approach for delimiting reliable home range estimates
Eric R. Dougherty1,
Perry de Valpine1,
Colin J. Carlson2,3,
Jason K. Blackburn4,5 &
Wayne M. Getz1,6
Movement Ecology volume 6, Article number: 10 (2018) Cite this article
The Original Article was published on 06 September 2017
Continued exploration of the performance of the recently proposed cross-validation-based approach for delimiting home ranges using the Time Local Convex Hull (T-LoCoH) method has revealed a number of issues with the original formulation.
Here we replace the ad hoc cross-validation score with a new formulation based on the total log probability of out-of-sample predictions. To obtain these probabilities, we interpret the normalized LoCoH hulls as a probability density. The application of the approach described here results in optimal parameter sets that differ dramatically from those selected using the original formulation. The derived metrics of home range size, mean revisitation rate, and mean duration of visit are also altered using the corrected formulation.
Despite these differences, we encourage the use of the cross-validation-based approach, as it provides a unifying framework governed by the statistical properties of the home ranges rather than subjective selections by the user.
Continued exploration of the cross-validation-based approach proposed in [1] has revealed a number of issues with the original formulation of the optimization equation. This original formulation was ad hoc in its combination of two statistical approaches (cross-validation and information criteria), and the result was a metric without a clear basis in statistical theory. As such, we strongly recommend that users rely upon the method described here as opposed to the one set forth in the original publication. In particular, the shortcomings can be summarized as follows:
Both cross-validation and information criterion approaches aim to avoid over-fitting. In the case of cross-validation, one attempts to estimate out-of-sample prediction error, so the score used should be a measure of prediction errors of the held-out points. If the model uses k too small or s too large, it is likely to overfit the training data and will predict the testing data poorly. On the other hand, if the model uses k too large or s too small, it will underfit the training data by missing the real variations in space use. Thus, cross-validation naturally penalizes model complexity because excessive complexity (small k) results in poor predictions. Information criteria approaches include a penalty term that increases with model complexity as measured by larger numbers of parameters. Using such an information criterion as a cross-validation score is not necessary since cross-validation should naturally penalize excessive model complexity.
The formulation of the information criterion score did not follow the rules of probability because probabilities of out-of-sample predictions were not properly normalized, and multiple probabilities were combined by summation. In this sense, it lacked a firm connection to the statistical theory underlying information criteria approaches.
Here we propose an alternative formulation in which we interpret a normalized version of LoCoH hulls as an estimated probability surface and recast the cross-validation score as the total log probability of out-of-sample predictions, a common choice in cross-validation schemes. The approach, explained in detail below, results in more appropriate behavior, but also has the effect of significantly altering the optimal parameter values selected by the algorithm. Thus, in addition to presenting the new cross-validation equation, we include tables and figures with the newly selected parameter values and newly calculated derived metric values (home range area, mean duration, and mean visitation rates). Finally, we offer an alternative R script that searches a much broader parameter space in a more efficient manner (Additional file 1).
Updated Cross-Validation Approach
Using the training/testing split as described in the original presentation of the algorithm, a grid-based exploration of parameter space was conducted (Fig. 1), whereby each of the training/testing datasets (i={1,...,n}) was analyzed at every combination of k and s values on the grid. This analysis entailed the creation of local convex hulls with k nearest neighbors and a scaling factor of s. In all subsequent analyses, we assume that the scaling of time follows a linear formulation; however, when movement patterns more closely exemplify diffusion dynamics, an alternative equation for the TSD may be more appropriate [2]. The test points (j={1,...,m}) were then laid upon the resulting hulls.
Conceptual Figure of Grid-based Search. A cross-validation surface is generated as the algorithm searches over a grid of alternative s and k values for each individual movement path. The increments of the grid can be chosen by the user. The peak in the surface indicates that the home range associated with the particular parameter set offers the highest probability for the test points. Here, the white boxes denote the maximum probability value, and thereby, the optimal parameter set
We formulate the probabilities for out-of-sample points by normalizing the LoCoH surface so that the probability of an observation occurring at a particular location can be calculated. This value is obtained by dividing the number of training hulls that contain the test point location (\(g_{i,j}\)) by the summed area of all training hulls (\(A_{i}\)). Then, the log probability was calculated for each point per training hullset. To avoid log probability values of \(-\infty\), test points that were not contained within any hulls were assigned a probability value equal to the inverse of \(A_{i}^{2}\), resulting in a substantially lower log probability than that of a test point contained in a single hull. Finally, a single value (\(P_{k,s}\)) was assigned to each combination of k and s values by summing across all of the test points in all of the training/testing datasets:
$${P_{k,s}} = \sum_{i=1}^{n} \sum_{j=1}^{m} \log\left(\frac{g_{i,j}}{A_{i}}\right) $$
Because the probability of each test point is normalized based on the total area contained within all of the training hulls, there exists a natural penalty for high k values. For example, a k value equal to the number of training points (kmax; regardless of the s value) will result in all hulls being identical and each test point overlapping all of the hulls. However, the large total area of the hullset when k=kmax will result in relatively small probability values for each test point (i.e., independent probability values equal to the inverse of the area of one of the hulls), effectively penalizing the parameter set containing kmax. The underlying cross-validation procedure could very easily be extended for the optimization of the adaptive parameter in the a-method (as opposed to the k-method) because of its scaling with the total area of the hullset.
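To make the score concrete, the sketch below computes P_{k,s} for a single (k, s) cell from per-test-point hull-overlap counts and total hull areas. It is a simplified stand-in written in Python (the released implementation is an R script), and the inputs shown are hypothetical.

```python
import numpy as np

def cv_score(overlap_counts, hull_areas):
    """Total log probability P_{k,s} for one (k, s) parameter pair.

    overlap_counts : list of length n; element i is an array g_{i,j} giving, for
                     each test point j of split i, how many training hulls contain it.
    hull_areas     : list of length n; element i is A_i, the summed area of all
                     training hulls of split i.
    """
    total = 0.0
    for g_i, A_i in zip(overlap_counts, hull_areas):
        g_i = np.asarray(g_i, dtype=float)
        p = g_i / A_i
        # test points covered by no hull get probability 1 / A_i^2
        p[g_i == 0] = 1.0 / (A_i ** 2)
        total += np.sum(np.log(p))
    return total

# Hypothetical example: two training/testing splits, a few test points each
print(cv_score(overlap_counts=[[3, 0, 7], [1, 2, 0, 5]], hull_areas=[120.5, 98.2]))
```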
The optimal parameter values selected using the corrected cross-validation method are substantially different from those selected in the original publication (Table 1). However, because the original formulation was not supported by cohesive statistical theory, we will discuss these new results only in reference to the guideline-based parameter values rather than comparing them to the results emerging from the published algorithm. The mean s value selected using the algorithm for springbok was 0.02 (SE = 0.008) and for zebra was 0.0012 (SE = 0.0005). The mean s value selected using the guidelines for springbok was 0.005 (SE = 0.002) and 0.017 (SE = 0.002) for zebra. Thus, the s values selected by the algorithm and the guidelines were not significantly different for springbok (p=0.10), but were for zebra (p<0.001). In the case of the k values, the optimal values selected using the algorithm were significantly higher than those resulting from the guidelines. The mean k value selected using the algorithm for springbok was 225.5 (SE = 66.83) whereas the mean using the guidelines was 22.5 (SE = 1.71; p=0.003). The same trend was observed in zebra where the mean k value based on the algorithm was 347.2 (SE = 54.36), whereas the mean from the guidelines was 20 (SE = 1.58; p=0.004).
Table 1 Parameter values for analysis
The significantly higher k values emerging from the algorithm gave rise to significantly larger home ranges in both species (Table 2). In springbok, the mean home range size was 265.41 km2 (SE = 76.23 km2) using the high end of the guideline based range, and 401.64 km2 (SE = 127.56 km2) using the algorithm (p=0.05). In zebra, the mean home range was 694.43 km2 (SE = 80.81 km2) using the guidelines and 1081.29 km2 (SE = 162.17 km2) when the algorithm was applied (p=0.01). When the derived metrics were considered, however, the substantial differences in k values did not always result in significantly different duration (Table 3) and visitation rates (Table 4). Though the duration rates in zebra derived from the algorithm were, indeed, significantly higher than those derived using the high value from the range based on the guidelines (p=0.05), this was not the case for springbok (p=0.08). Similarly, the visitation rates emerging from the parameter sets selected by the algorithm were not significantly different from those derived based on the guidelines in either species (p=0.33 in springbok and p=0.15 in zebra).
Table 2 Home range areas (in square kilometers)
Table 3 Mean duration (MNLV) values. The derived metrics obtained using the parameter sets recommended by the algorithm and by the guidelines set forth in the T-LoCoH documentation
Table 4 Mean visitation (NSV) values
The results presented here indicate that the effect of selecting parameters using the algorithm rather than the guidelines will be highly contingent upon the focus of the research question. Where home range delineation is the goal, the results are likely to differ significantly (Fig. 2). In the case of epidemiological questions, however, the effects will be somewhat less predictable, and in certain cases, similar conclusions might be drawn irrespective of the approach used for selecting optimal parameters. If an element of the analysis involves comparisons across individuals or species, however, the cross-validation-based approach provides a unifying framework governed by statistical properties of the home ranges rather than subjective selections by the user.
Comparison of Resulting Home Ranges. An illustration of two sets of home ranges that result from the parameter sets chosen by the algorithm (red), the low range of the guide (blue), and the high range of the guide (black). The home range set on the left is based on the sample points from the springbok AG207, and the largest home range covers 429.81 km2. The home range set on the right is based on the GPS fixes from zebra AG256, and the largest home range covers 1363.21 km2
High Resolution Cross-Validation Surface. A high resolution depiction of a portion of the optimal parameter space traversed during the final stage of the efficient search algorithm. All parameter sets with log probability values above -10090 are shown, with darker shading indicating higher probability. In this particular application, the search is performed over smaller intervals of s (0.0001 rather than 0.001), and the optimal parameter set (k=171 and s=0.0133) is similar to the parameter set selected at the coarser scale
Dougherty ER, Carlson CJ, Blackburn JK, Getz WM. A cross-validation-based approach for delimiting reliable home range estimates. Mov Ecol. 2017; 5(1):19.
Lyons AJ, Turner WC, Getz WM. Home range plus: a space-time characterization of movement over real landscapes. Mov Ecol. 2013; 1(1):2.
The authors would also like to acknowledge Andy Lyons for creating, maintaining, and improving the T-LoCoH package.
The case study presented here used GPS movement data from zebra and springbok from Etosha National Park, Namibia, which were collected under a grant obtained by WMG (NIH GM083863). In addition, partial funding for this study was provided by NIH 1R01GM117617-01 to JKB and WMG. The funders had no role in study design, data collection and analysis, nor manuscript writing.
Please contact Wayne M. Getz ([email protected]) for data requests.
Environmental Science, Policy & Management, University of California, Berkeley, USA
Eric R. Dougherty, Perry de Valpine & Wayne M. Getz
National Socio-Environmental Synthesis Center, University of Maryland, Annapolis, USA
Colin J. Carlson
Department of Biology, Georgetown University, Washington, DC, USA
Spatial Epidemiology and Ecology Research Laboratory, Department of Geography, University of Florida, Gainesville, USA
Jason K. Blackburn
Emerging Pathogens Institute, University of Florida, Gainesville, USA
School of Mathematical Sciences, University of KwaZulu-Natal, Durban, South Africa
Wayne M. Getz
Eric R. Dougherty
Perry de Valpine
PDV and ERD developed cross-validation approach. ERD ran analyses on empirical movement paths. All authors contributed to writing and editing the manuscript.
Correspondence to Eric R. Dougherty.
All movement data were collected according to the animal handling protocol AUP R217-0509B (University of California, Berkeley).
A new R script for a more efficient grid-based search (Fig. 3) can be found at: https://github.com/doughertyeric/Updated_T-LoCoH_Algorithm. As currently parameterized, the grid-based search algorithm covers s values from 0 to 0.05 and k values between 4 and 800. The algorithm searches across the broadest set of k values in intervals of 20 and s values in intervals of 0.01. Upon identifying a peak in the probability surface, the algorithm selects a range of 40 k values around the peak and refines the search there in k value increments of 5. Finally, another range of 10 possible k values is selected and the finest scale grid-search is conducted in intervals of 1 and s value intervals of 0.001 before selecting the optimal parameter set. (R 11 kb)
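A schematic version of that coarse-to-fine refinement is sketched below in Python; `evaluate(k, s)` stands in for constructing T-LoCoH hulls and computing the cross-validation score, and the interval choices mirror the description above rather than the released R code.

```python
import numpy as np

def refine_search(evaluate, k_range=(4, 800), s_range=(0.0, 0.05)):
    """Coarse-to-fine grid search; `evaluate(k, s)` returns the CV log probability."""
    def best_on_grid(k_vals, s_vals):
        scores = {(k, s): evaluate(k, s) for k in k_vals for s in s_vals}
        return max(scores, key=scores.get)

    s_vals = np.round(np.arange(s_range[0], s_range[1] + 1e-9, 0.01), 4)

    # Stage 1: broad sweep, k in steps of 20, s in steps of 0.01
    k, s = best_on_grid(range(k_range[0], k_range[1] + 1, 20), s_vals)
    # Stage 2: ~40 k values around the peak, in steps of 5
    k, s = best_on_grid(range(max(k_range[0], k - 20), k + 21, 5), s_vals)
    # Stage 3: ~10 k values around the peak, steps of 1, finer s steps of 0.001
    s_fine = np.round(np.arange(max(0.0, s - 0.01), s + 0.01 + 1e-9, 0.001), 4)
    return best_on_grid(range(max(k_range[0], k - 5), k + 6, 1), s_fine)

# Usage with a toy surrogate score (stands in for hull construction + scoring)
toy = lambda k, s: -((k - 171) ** 2) / 500.0 - ((s - 0.013) ** 2) * 1e5
print(refine_search(toy))   # converges near (171, 0.013)
```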
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Dougherty, E.R., de Valpine, P., Carlson, C.J. et al. Commentary to: a cross-validation-based approach for delimiting reliable home range estimates. Mov Ecol 6, 10 (2018). https://doi.org/10.1186/s40462-018-0128-2
Time local convex hulls
T-LoCoH
Home range
Cross-validation
Quantile regression: Loss function
I am trying to understand the quantile regression, but one thing that makes me suffer is the choice of the loss function.
$\rho_\tau(u) = u(\tau-1_{\{u<0\}})$
I know that the minimizer of the expectation of $\rho_\tau(y-u)$ is the $\tau\%$-quantile, but what is the intuitive reason to start off with this function? I don't see the relation between minimizing this function and the quantile. Can somebody explain it to me?
quantiles loss-functions quantile-regression
Richard Hardy
CDO
I understand this question as asking for insight into how one could come up with any loss function that produces a given quantile as a loss minimizer no matter what the underlying distribution might be. It would be unsatisfactory, then, just to repeat the analysis in Wikipedia or elsewhere that shows this particular loss function works.
Let's begin with something familiar and simple.
What you're talking about is finding a "location" $x^{*}$ relative to a distribution or set of data $F$. It is well known, for instance, that the mean $\bar x$ minimizes the expected squared residual; that is, it is a value for which
$$\mathcal{L}_F(\bar x)=\int_{\mathbb{R}} (x - \bar x)^2 dF(x)$$
is as small as possible. I have used this notation to remind us that $\mathcal{L}$ is derived from a loss, that it is determined by $F$, but most importantly it depends on the number $\bar x$.
The standard way to show that $x^{*}$ minimizes any function begins by demonstrating the function's value does not decrease when $x^{*}$ is changed by a little bit. Such a value is called a critical point of the function.
What kind of loss function $\Lambda$ would result in a percentile $F^{-1}(\alpha)$ being a critical point? The loss for that value would be
$$\mathcal{L}_F(F^{-1}(\alpha)) = \int_{\mathbb{R}} \Lambda(x-F^{-1}(\alpha))dF(x)=\int_0^1\Lambda\left(F^{-1}(u)-F^{-1}(\alpha)\right)du.$$
For this to be a critical point, its derivative must be zero. Since we're just trying to find some solution, we won't pause to see whether the manipulations are legitimate: we'll plan to check technical details (such as whether we really can differentiate $\Lambda$, etc.) at the end. Thus
$$\eqalign{0 &=\mathcal{L}_F^\prime(x^{*})= \mathcal{L}_F^\prime(F^{-1}(\alpha))= -\int_0^1 \Lambda^\prime\left(F^{-1}(u)-F^{-1}(\alpha)\right)du \\ &= -\int_0^{\alpha} \Lambda^\prime\left(F^{-1}(u)-F^{-1}(\alpha)\right)du -\int_{\alpha}^1 \Lambda^\prime\left(F^{-1}(u)-F^{-1}(\alpha)\right)du.\tag{1} }$$
On the left hand side, the argument of $\Lambda$ is negative, whereas on the right hand side it is positive. Other than that, we have little control over the values of these integrals because $F$ could be any distribution function. Consequently our only hope is to make $\Lambda^\prime$ depend only on the sign of its argument, and otherwise it must be constant.
This implies $\Lambda$ will be piecewise linear, potentially with different slopes to the left and right of zero. Clearly it should be decreasing as zero is approached--it is, after all, a loss and not a gain. Moreover, rescaling $\Lambda$ by a constant will not change its properties, so we may feel free to set the left hand slope to $-1$. Let $\tau \gt 0$ be the right hand slope. Then $(1)$ simplifies to
$$0 = \alpha - \tau (1 - \alpha),$$
whence the unique solution is, up to a positive multiple,
$$\Lambda(x) = \cases{-x, \ x \le 0 \\ \frac{\alpha}{1-\alpha}x, \ x \ge 0.}$$
Multiplying this (natural) solution by $1-\alpha$, to clear the denominator, produces the loss function presented in the question.
Clearly all our manipulations are mathematically legitimate when $\Lambda$ has this form.
whuber♦
The way this loss function is expressed is nice and compact, but I think it's easier to understand by rewriting it as $$\rho_\tau(X-m) = (X-m)(\tau-1_{(X-m<0)}) = \begin{cases} \tau \, |X-m| & \text{if } X-m \ge 0 \\ (1 - \tau) \, |X-m| & \text{if } X-m < 0 \end{cases}$$
If you want to get an intuitive sense of why minimizing this loss function yields the $\tau$th quantile, it's helpful to consider a simple example. Let $X$ be a uniform random variable between 0 and 1. Let's also choose a concrete value for $\tau$, say, $0.25$.
So now the question is why would this loss function be minimized at $m=0.25$? Obviously, there's three times as much mass in the uniform distribution to the right of $m$ than there is to the left. And the loss function weights the values larger than this number at only a third of the weight given to values less than it. Thus, it's sort of intuitive that the scales are balanced when the $\tau$th quantile is used as the inflection point for the loss function.
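A quick numerical check makes this concrete: minimizing the empirical pinball loss over a sample recovers the sample $\tau$-quantile. The snippet below uses the same Uniform(0, 1) example with $\tau = 0.25$; the sample size and search grid are arbitrary choices.

```python
import numpy as np

def pinball_loss(u, tau):
    """rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=10_000)   # X ~ Uniform(0, 1)
tau = 0.25

grid = np.linspace(0.0, 1.0, 401)
risk = [pinball_loss(x - m, tau).mean() for m in grid]
m_star = grid[int(np.argmin(risk))]

print(m_star, np.quantile(x, tau))   # both are close to 0.25
```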
jjet
$\begingroup$ Shouldn't it be the other way? Under-guessing will cost three times as much? $\endgroup$ – Edi Bice Apr 1 '19 at 15:01
$\begingroup$ Thanks for catching that. The formula is right but I initially worded it incorrectly in my explanation. $\endgroup$ – jjet Aug 17 '19 at 21:03
January 2012, 17(1): 1-31. doi: 10.3934/dcdsb.2012.17.1
Homogenization of a one-dimensional spectral problem for a singularly perturbed elliptic operator with Neumann boundary conditions
Grégoire Allaire 1, Yves Capdeboscq 2, and Marjolaine Puel 3
CMAP, CNRS UMR 7641, École Polytechnique, Route de Saclay, Palaiseau F91128, France
Mathematical Institute, 24-29 St Giles', OXFORD OX1 3LB, United Kingdom
Institut de Mathématiques, Université de Toulouse and CNRS, Université Paul Sabatier, 31062 Toulouse Cedex 9, France
Received April 2011 Revised July 2011 Published October 2011
We study the asymptotic behavior of the first eigenvalue and eigenfunction of a one-dimensional periodic elliptic operator with Neumann boundary conditions. The second order elliptic equation is not self-adjoint and is singularly perturbed since, denoting by $\epsilon$ the period, each derivative is scaled by an $\epsilon$ factor. The main difficulty is that the domain size is not an integer multiple of the period. More precisely, for a domain of size $1$ and a given fractional part $0\leq\delta<1$, we consider a sequence of periods $\epsilon_n=1/(n+\delta)$ with $n\in \mathbb{N}$. In other words, the domain contains $n$ entire periodic cells and a fraction $\delta$ of a cell cut by the domain boundary. According to the value of the fractional part $\delta$, different asymptotic behaviors are possible: in some cases an homogenized limit is obtained, while in other cases the first eigenfunction is exponentially localized at one of the extreme points of the domain.
Keywords: Homogenization, localization, spectral problem.
Mathematics Subject Classification: Primary: 35B27; Secondary: 74Q0.
Citation: Grégoire Allaire, Yves Capdeboscq, Marjolaine Puel. Homogenization of a one-dimensional spectral problem for a singularly perturbed elliptic operator with Neumann boundary conditions. Discrete & Continuous Dynamical Systems - B, 2012, 17 (1) : 1-31. doi: 10.3934/dcdsb.2012.17.1
Spatial degrees-of-freedom in large-array full-duplex: the impact of backscattering
Evan Everett1 &
Ashutosh Sabharwal1
EURASIP Journal on Wireless Communications and Networking volume 2016, Article number: 286 (2016) Cite this article
The key challenge for in-band full-duplex wireless communication is managing self-interference. Many designs have employed spatial isolation mechanisms, such as shielding or multi-antenna beamforming, to isolate the self-interference waveform from the receiver. Because such spatial isolation methods confine the transmit and receive signals to a subset of the available space, the full spatial resources of the channel may be under-utilized, expending a cost that may nullify the net benefit of operating in full-duplex mode. In this paper, we leverage an antenna-theory-based channel model to analyze the spatial degrees of freedom available to a full-duplex capable base station. We observe that whether or not spatial isolation out-performs time-division (i.e., half-duplex) depends heavily on the geometric distribution of scatterers. Unless the angular spread of the objects that scatter to the intended users is overlapped by the spread of objects that backscatter to the base station, then spatial isolation outperforms time division, otherwise time division may be optimal.
Currently deployed wireless communications equipment operates in half-duplex mode, meaning that transmission and reception are orthogonalized either in time (time-division-duplex) or frequency (frequency-division-duplex). Research in recent years [1–12] has investigated the possibility of wireless equipment operating in full-duplex mode, meaning that the transceiver will both transmit and receive at the same time and in the same spectrum. A potential benefit of full-duplex is illustrated in Fig. 1. User 1 wishes to transmit uplink data to a base station, and User 2 wishes to receive downlink data from the same base station. If the base station is half-duplex, then it must either service the users in orthogonal time slots or in orthogonal frequency bands. However, if the base station can operate in full-duplex mode, then it can enhance spectral efficiency by servicing both users simultaneously. The challenge to full-duplex communication, however, is that the base station transmitter generates high-powered self-interference which potentially swamps its own receiver, precluding the detection of the uplink message.1
For full-duplex to be feasible, the self-interference must be suppressed. The two main approaches to self-interference suppression are cancellation and spatial isolation, and we now define each. Self-interference cancellation is any technique which exploits the foreknowledge of the transmit signal by subtracting an estimate of the self-interference from the received signal. Cancellation can be applied at digital baseband, at analog baseband, at RF, or, as is most common, applied at a combination of these three domains [4–7, 11, 13, 14]. Spatial isolation is any technique to spatially orthogonalize the self-interference and the signal-of-interest. Some spatial isolation techniques studied in the literature are multi-antenna beamforming [1, 15–19], directional antennas [20], shielding via absorptive materials [21], and cross-polarization of transmit and receive antennas [10, 21]. The key differentiator between cancellation and spatial isolation is that cancellation requires and exploits knowledge of the self-interference, while spatial isolation does not. To our knowledge, all full-duplex designs to date have required both cancellation and spatial isolation in order for full-duplex to be feasible even at very short ranges (i.e., <10 m). For example, see designs such as [5, 6, 10, 11], each of which leverages cancellation techniques as well as at least one spatial isolation technique. Moreover, because cancellation performance is limited by transceiver impairments such as phase noise [22], spatial isolation often accounts for an outsized portion of the overall self-interference suppression.
For example, in the full-duplex design of [21] which demonstrated full-duplex feasibility at WiFi ranges, of the 95 dB of self-interference suppression achieved, 70 dB is due to spatial isolation, while only 25 dB is due to cancellation. Therefore, if full-duplex feasibility is to be extended from WiFi-typical ranges to the ranges typical of femtocells or even larger cells, then excellent spatial isolation performance will be required, hence our focus is on spatial isolation in this paper.
In a previous work [21], we studied three passive techniques for spatial isolation: directional antennas, absorptive shielding, and cross-polarization, and measured their performance in a prototype base station both in an anechoic chamber that mimics free space, and in a reflective room. As expected, the techniques suppressed the self-interference quite well (more than 70 dB) in an anechoic chamber, but in scattering environments the suppression was much less (no more than 45 dB), due to the fact that passive techniques operate primarily on the direct path between the transmit and receive antennas, and do little to suppress paths that include an external backscatterer. The direct-path limitation of passive spatial isolation mechanisms raises the question of whether or not spatial isolation can be useful in a backscattering environment. Another class of spatial isolation techniques called "active" or "channel aware" spatial isolation [23] can indeed suppress both direct and backscattered self-interference. In particular, if multiple antennas are used and if the self-interference channel response can be estimated, then the radiation pattern can be shaped adaptively to mitigate both direct-path and backscattered self-interference. However, this pattern shaping (i.e., beamforming) will consume spatial degrees-of-freedom that could have otherwise been leveraged for spatial multiplexing. Thus, there is an important tradeoff between spatial self-interference isolation and achievable degrees of freedom.
To appreciate the tradeoff, consider the example in Fig. 1. The direct path from the base station transmitter, \(T_2\), to its receiver \(R_1\), can be passively suppressed by shielding the receiver from the transmitter as shown in [21], but there will also be backscattered self-interference due to objects near the base station (depicted by gray blocks in Fig. 1). The self-interference caused by scatterer \(S_0\), for example, in Fig. 1, could be avoided by creating a null in the direction of \(S_0\). However, losing access to the scatterer could create a less-rich scattering environment, diminishing the spatial degrees-of-freedom of the uplink or downlink. Moreover, creating the null consumes spatial degrees-of-freedom that could otherwise be used for spatial multiplexing to the downlink user, diminishing the achievable degrees-of-freedom of the downlink. This example leads us to pose the following question.
Three-node full-duplex model
Question: Under what scattering conditions can spatial isolation be leveraged in full-duplex operation to provide a degree-of-freedom gain over half-duplex? More specifically, given a constraint on the size of the antenna arrays at the base station and at the user devices, and given a characterization of the spatial distribution of the scatterers in the environment, what is the uplink/downlink degree-of-freedom region when the only self-interference mitigation strategy is spatial isolation?
Modeling approach: To answer the above question, we leverage the antenna-theory-based channel model developed by Poon, Brodersen, and Tse in [24–26], which we will label the "PBT" model. In the PBT model, instead of constraining the number of antennas, the size of the array is constrained. Furthermore, instead of considering a channel matrix drawn from a probability distribution, a channel transfer function which depends on the geometric position of the scatterers relative to the arrays is considered (Fig. 2).
Clustered scattering. Only one cluster for each transmit receive pair is shown to prevent clutter
Contribution: We extend the PBT model to the three-node full-duplex topology of Fig. 1, and derive the degree-of-freedom region, \(\mathcal {D}_{\mathsf {FD}}\): the set of all achievable uplink/downlink degree-of-freedom tuples. By comparing \(\mathcal {D}_{\mathsf {FD}}\) to \(\mathcal {D}_{\mathsf {HD}}\), the degree-of-freedom region achieved by time-division half-duplex, we observe that full-duplex outperforms half-duplex, i.e., \(\mathcal {D}_{\mathsf {HD}}\subset \mathcal {D}_{\mathsf {FD}}\), in the following two scenarios.
When the base station arrays are larger than the corresponding user arrays, the base station has a larger signal space than is needed for spatial multiplexing and can leverage the extra signal dimensions to form beams that avoid self-interference (i.e., self-interference zero-forcing).
More interestingly, when the forward scattering intervals and the backscattering intervals are not completely overlapped, the base station can avoid self-interference by signaling in the directions that scatter to the intended receiver, but do not backscatter to the base-station receiver. Moreover, the base station can also signal in directions that do cause self-interference, but ensure that the generated self-interference is incident on the base-station receiver only in directions in which uplink signal is not incident on the base-station receiver, i.e., signal such that the self-interference and uplink signal are spatially orthogonal.
In [27], an experimental evaluation of a transmit-beamforming-based method for full-duplex operation called "SoftNull" is presented. Inspired by the achievability proof in Section 3.1, the SoftNull algorithm presented in [27] seeks to maximally suppress self-interference for a given required number of downlink degrees-of-freedom. This paper presents an information theoretic analysis of the performance limits of beamforming-based full-duplex systems, whereas [27] presents an experimental evaluation of a specific design. We refer interested readers to [27] for an illustration of how the theoretical intuitions from this paper can guide the design and implementation of a beamforming-based full-duplex system.
Organization of the paper: Section 2 specifies the system model: we begin with an overview of the PBT model in Section 2.1 and then in Section 2.2 apply the model to the scenario of a full-duplex base station with uplink and downlink flows. Section 3 gives the main analysis of the paper, the derivation of the degrees-of-freedom region. We start Section 3 by stating the theorem which characterizes the degrees-of-freedom region and then give the achievability and converse arguments in Sections 3.1 and 3.2, respectively. In Section 4, we assess the impact of the degrees-of-freedom result on the design and deployment of full-duplex base stations, and include an application example, that shows how the results of this paper are used to guide the design of a full-duplex base station in [27]. We give concluding remarks in Section 5.
We now give a brief overview of the PBT channel model presented in [24]. We then extend the PBT model to the case of the three-node full-duplex topology of Fig. 1, and define the required mathematical formalism that will ease the degrees-of-freedom analysis in the sequel.
Overview of the PBT model
As illustrated in Fig. 3, the PBT channel model considers a wireless communication link between a transmitter equipped with a unipolarized continuous linear array of length \(2L_T\) and a receiver with a similar array of length \(2L_R\). The authors observe that there are two key domains: the array domain, which describes the current distribution on the arrays, and the wavevector domain, which describes the radiated and received field patterns. Channel measurement campaigns show that the angles of departure and the angles of arrival of the physical paths from a transmitter to a receiver tend to be concentrated within a handful of angular clusters [28–31]. Thus the authors of the PBT model [24] focus on the union of the clusters of angles-of-departure from the transmit array, denoted \(\Theta_T\), and the union of the clusters of angles-of-arrival to the receive array, \(\Theta_R\). Because a linear array aligned to the z-axis can only resolve the z-component, the intervals of interest are \(\Psi_T = \{\cos\theta : \theta \in \Theta_T\}\) and \(\Psi_R = \{\cos\theta : \theta \in \Theta_R\}\). In [24], it is shown from the first principles of Maxwell's equations that an array of length \(2L_T\) has a resolution of \(1/(2L_T)\) over the interval \(\Psi_T\), so that the dimension of the transmit signal space of radiated field patterns is \(2L_T|\Psi_T|\). Likewise, the dimension of the receive signal space is \(2L_R|\Psi_R|\), so that the spatial degrees of freedom of a point-to-point communication link, \(d_{\mathrm{P2P}}\), is
$$ d_{\mathrm{P2P}} = \text{min}\left\{2L_{T}|\Psi_{T}|, 2L_{R}|\Psi_{R}| \right\}. $$
The PBT Channel model for a point-to-point scenario
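As a simple numerical illustration of the point-to-point degrees-of-freedom expression above, the snippet below evaluates d_P2P for hypothetical array lengths and angular spreads; recall that array lengths are normalized by wavelength and that the Ψ intervals are subsets of [−1, 1], so |Ψ| ≤ 2.

```python
def dof_p2p(L_T, psi_T, L_R, psi_R):
    """Spatial degrees of freedom of a point-to-point link.

    L_T, L_R       : array half-lengths, normalized by wavelength (arrays span 2L)
    psi_T, psi_R   : total lengths |Psi| of the cos(theta) scattering intervals (<= 2)
    """
    return min(2 * L_T * psi_T, 2 * L_R * psi_R)

# Hypothetical example: base-station array of 4 wavelengths (L = 2),
# user array of 1 wavelength (L = 0.5), moderately rich scattering.
print(dof_p2p(L_T=2.0, psi_T=0.8, L_R=0.5, psi_R=1.2))   # -> min(3.2, 1.2) = 1.2
```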
Extension of PBT model to three-node full-duplex
Now we extend the PBT channel model in [24], which considers a point-to-point topology, to the three-node full-duplex topology of Fig. 1. The antenna-theory-based PBT channel model is built upon far-field assumptions, i.e., that the propagation path is much larger than a wavelength. We acknowledge that direct-path self-interference may not obey far-field behavior. However, the backscattered self-interference, which will travel several wavelengths to reach an external scatterer and then return to the base station, is indeed a far-field signal. As discussed in the introduction, the intent of this paper is to understand the impact of backscattered self-interference as a function of the size of antenna arrays and the geometric distribution of the scatterers. Since the backscattering is indeed a far-field phenomenon, the PBT model is a quite well-suited model for our study.
As in [24], we consider continuous linear arrays of infinitely many infinitesimally small unipolarized antenna elements.2 Each of the two transmitters T j , j=1,2, is equipped with a linear array of length \(2{L}_{T_{j}}\), and each receiver, R i , i=1,2, is equipped with a linear array of length \(2{L}_{R_{i}}\). The lengths \({L}_{T_{j}}\) and \({L}_{R_{i}}\) are normalized by the wavelength of the carrier, and thus are unitless quantities. For each array, define a local coordinate system with origin at the midpoint of the array and z-axis aligned along the length of the array. Let \(\theta _{T_{j}} \in [0,\pi)\) denote the elevation angle relative to the T j array, and let \(\theta _{R_{i}}\) denote the elevation angle relative to the R i array. Denote the current distribution on the T j array as x j (p j ), where \(p_{j}\in [-{L}_{T_{j}},{L}_{T_{j}}]\phantom {\dot {i}\!}\) is the position along the lengths of the array, and \(\phantom {\dot {i}\!}x_{j}:[-{L}_{T_{j}},{L}_{T_{j}}] \rightarrow \mathbb {C}\) gives the magnitude and phase of the current. The current distribution, x j (p j ), is the transmit signal controlled by T j , which we constrain to be square integrable. Likewise, we denote the received current distribution on the R i array as \(\phantom {\dot {i}\!}y_{i}(q_{i}),\ q_{i}\in [-{L}_{R_{i}},{L}_{R_{i}}]\).
The signal received by the base station receiver, R 1, at a point \(\phantom {\dot {i}\!}q_{1} \in [-{L}_{R_{1}},{L}_{R_{1}}]\) along its array is given by
$$\begin{array}{*{20}l} {y}_{1}(q_{1}) &=\underbrace{\int_{-{L}_{T_{1}}}^{{L}_{T_{1}}} {C}_{11}({q_{1}}, {p}_{1}){x}_{1}({p}_{1})d{p}_{1}}_{\text{desired uplink signal}}\\& \quad+\underbrace{\int_{-{L}_{T_{2}}}^{{L}_{T_{2}}}{C}_{12}({q}_{1},{p}_{2}){x}_{2}({p}_{2})d{p}_{2} }_{\text{self-interference}} + \underbrace{{z}_{1}({q}_{1}), }_{\text{noise}} \end{array} $$
where \(\phantom {\dot {i}\!}{z}_{1}({q}_{1}),\ q_{1}\in [-{L}_{R_{1}},{L}_{R_{1}}]\) is the noise along the R 1 array. The channel response integral kernel, C ij (q i ,p j ), gives the current excited at a point q i on the receive array due to a current at the point p j on the transmit array. Note that the first term in (2) gives the received uplink signal-of-interest, while the second term gives the self-interference generated by the base station's own transmission. We assume that the mobile users are out of range of each other, such that there is no channel from T 1 to R 2. Thus, R 2's received signal at a point \(\phantom {\dot {i}\!}q_{2}\in [-{L}_{R_{2}},{L}_{R_{2}}]\) is
$$\begin{array}{*{20}l} {y}_{2}({q}_{2}) = \int_{-{L}_{T_{2}}}^{{L}_{T_{2}}}{C}_{22}({q}_{2},{p}_{2}){x}_{2}({p}_{2})d{p}_{2} + {z}_{2}({q}_{2}). \end{array} $$
The channel response kernel, C ij (·,·) is composed of a transmit array response, \(\phantom {\dot {i}\!}\boldsymbol {A}_{T_{j}}(\cdot, \cdot)\), a scattering response, H ij (·,·), and a receive array response, \(\phantom {\dot {i}\!}\boldsymbol {A}_{R_{i}}(\cdot, \cdot)\) [24]. The channel response kernel is given by
$$ \begin{aligned} {C}_{ij}({q},{p}) &=\!\! \int \!\!\!\! \int \!\!\!\! \underbrace{{A}_{R_{i}}({q},\boldsymbol{\hat{\kappa}})}_{\text{Rx array response}} \!\!\!\! \overbrace{H_{ij}(\boldsymbol{\hat{\kappa}}, \boldsymbol{\hat{k}})}^{\text{scattering response}}\\&\quad\times\underbrace{{A}_{T_{j}}(\boldsymbol{\hat{k}}, {p})}_{\text{Tx array response}} \!\! d\boldsymbol{\hat{k}} d\boldsymbol{\hat{\kappa}}, \end{aligned} $$
where \(\boldsymbol {\hat {k}}\) is a unit vector that gives the direction of departure from the transmitter array, and \(\boldsymbol {\hat {\kappa }}\) is a unit vector that gives the direction of arrival to the receiver array. The transmit array response kernel, \(\phantom {\dot {i}\!}\boldsymbol {A}_{T_{j}}(\boldsymbol {\hat {k}}, {p})\), maps the current distribution along the T j array (a function of p) to the field pattern radiated from T j (a function of direction of departure, \(\boldsymbol {\hat {k}}\)). The scattering response kernel, \(H_{ij}(\boldsymbol {\hat {\kappa }}, \boldsymbol {\hat {k}})\), maps the fields radiated from T j in direction \(\boldsymbol {\hat {k}}\) to the fields incident on R i at direction \(\boldsymbol {\hat {\kappa }}\). The receive array response, \(\phantom {\dot {i}\!}{A}_{R_{i}}({q},\boldsymbol {\hat {\kappa }})\), maps the field pattern incident on R i (a function of direction of arrival, \(\boldsymbol {\hat {\kappa }}\)) to the current distribution excited on the R i array (a function of position q), which is the received signal.
Array responses
In [24], the transmit array response for a linear array is derived from the first principles of Maxwell's equations and shown to be \( {A}_{T_{j}}(\boldsymbol {\hat {k}}, {p}) = {A}_{T_{j}}(\cos \theta _{T_{j}}, p) = e^{-\mathrm {i}2\pi p \cos \theta _{T_{j}}}, p \in \left [-{L}_{T_{j}}, {L}_{T_{j}} \right ], \) where \(\theta _{T_{j}}\in [0,\pi)\) is the elevation angle relative to the T j array. Due to the symmetry of the array (aligned to the z-axis), the radiation pattern is symmetric with respect to the azimuth angle and only depends on the elevation angle \(\theta _{T_{j}}\) through \(\cos \theta _{T_{j}}\). For notational convenience, let \(\phantom {\dot {i}\!}t \equiv \cos \theta _{T_{j}} \in [-1, 1]\), so that we can simplify the transmit array response kernel to
$$\begin{array}{*{20}l} {A}_{T_{j}}(t, p) &= e^{-\mathrm{i}2\pi p t},\ t \in [-1, 1],\ p \in \left[-{L}_{T_{j}}, {L}_{T_{j}} \right]. \end{array} $$
By reciprocity, the receive array response kernel, \(\phantom {\dot {i}\!}{A}_{R_{i}}({q}, \boldsymbol {\hat {\kappa }})\), is
$$\begin{array}{*{20}l} {A}_{R_{i}}(q, \tau) &= e^{\mathrm{i}2\pi q \tau},\ \tau \in [-1, 1],\ q \in \left[-{L}_{R_{i}}, {L}_{R_{i}} \right], \end{array} $$
where \(\phantom {\dot {i}\!}\tau \equiv \cos \theta _{R_{i}} \in [-1, 1]\) is the cosine of the elevation angle relative to the R i array. Note that the transmit and receive array response kernels are identical to the kernels of the Fourier transform and inverse Fourier transform, respectively, a relationship we will further explore in Section 2.5.
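To make this Fourier relationship concrete, the sketch below discretizes the transmit array response for a hypothetical array; the current distribution is arbitrary and chosen only to show that a linear phase ramp across the aperture steers the radiated pattern in t = cos θ.

```python
import numpy as np

L_T = 4.0                                    # half-length of the array, in wavelengths
p = np.linspace(-L_T, L_T, 801)              # positions along the array
t = np.linspace(-1.0, 1.0, 401)              # t = cos(theta)
dp = p[1] - p[0]

# Transmit array response kernel A_T(t, p) = exp(-i 2*pi*p*t)
A_T = np.exp(-1j * 2 * np.pi * np.outer(t, p))

# Hypothetical current distribution: tapered aperture with a linear phase ramp
x = np.exp(-(p / L_T) ** 2) * np.exp(1j * 2 * np.pi * 0.3 * p)

# Radiated field pattern X(t) = integral of A_T(t, p) x(p) dp (a finite Fourier transform)
X = (A_T * dp) @ x
print(t[np.argmax(np.abs(X))])               # ~0.3: the beam points where cos(theta) = 0.3
```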
Scattering responses
The scattering response kernel, \(H_{ij}(\boldsymbol {\hat {\kappa }},\boldsymbol {\hat {k}})\), gives the amplitude and phase of the path departing from T j at direction \(\boldsymbol {\hat {k}}\) and arriving at R i at direction \(\boldsymbol {\hat {\kappa }}\). Since we are considering linear arrays which only resolve the cosine of the elevation angle, we can consider H ij (τ,t) which gives the superposition of the amplitude and phase of all paths emanating from T j with an elevation angle whose cosine is t and arriving at R i at an elevation angle whose cosine is τ.
As is done in [24], motivated by measurements showing that scattering paths are clustered with respect to the transmitter and receiver, we adopt a model that focuses on the boundary of the scattering clusters rather than the discrete paths themselves, as illustrated in Fig. 2.
Let \(\phantom {\dot {i}\!}\Theta _{T_{ij}}^{(k)}\) denote the angle subtended at transmitter T j by the k th cluster that scatters to R i , and let \(\Theta _{T_{ij}} = \bigcup _{k}\Theta _{T_{ij}}^{(k)}\) be the total transmit scattering interval from T j to R i . This scattering interval, \(\phantom {\dot {i}\!}\Theta _{T_{ij}}\), is the set of directions that, when illuminated by T j , scatter energy to R i . In Fig. 2, to avoid clutter, we illustrate the case in which \(\phantom {\dot {i}\!}\Theta _{T_{ij}}^{(k)}\) is a single contiguous angular interval, but in general, the interval will be non-contiguous and consist of several individual clusters. Similarly, let \(\phantom {\dot {i}\!}\Theta _{R_{ij}}^{(k)}\) denote the corresponding angle subtended at R i by the k th cluster illuminated by T j , and let \(\phantom {\dot {i}\!}\Theta _{R_{ij}} = \bigcup _{k}\Theta _{R_{ij}}^{(k)}\) be the set of directions from which energy can be incident on R i from T j .
Thus, we see in Fig. 2 that from the point-of-view of the base-station transmitter, T 2, \(\Theta _{T_{22}}\) is the angular interval over which the base station can radiate signals that will reach the intended downlink receiver, R 2. The angular interval, \(\phantom {\dot {i}\!}\Theta _{T_{12}}\), is the interval in which the base station's radiated signals will backscatter to the base station's own receiver, R 1, as self-interference. Likewise, from the point-of-view of the base station receiver, R 1, \(\phantom {\dot {i}\!}\Theta _{R_{11}}\), is the interval over which the base station may receive signals from the uplink transmitter, T 1, while \(\phantom {\dot {i}\!}\Theta _{R_{12}}\) is the interval in which self-interference may be present. Clearly, the extent to which the self-interference intervals and the signal-of-interest intervals overlap will have a major impact on the degrees of freedom of the network. Because linear arrays can only resolve the cosine of the elevation angle t≡ cosθ, we define the "effective" scattering intervals for the transmit and receive arrays, respectively, as
$$\begin{aligned} &\Psi_{T_{ij}} \equiv \left\{t: \arccos(t) \in \Theta_{T_{ij}} \right\} \subset [-1,1],\\ &\Psi_{R_{ij}} \equiv \left\{\tau: \arccos(\tau) \in \Theta_{R_{ij}} \right\} \subset [-1,1]. \end{aligned} $$
Define the size of the transmit and receive scattering intervals as \( |\Psi _{T_{ij}}| = \int _{\Psi _{T_{ij}}} dt\) and \( |\Psi _{R_{ij}}| = \int _{\Psi _{R_{ij}}} d\tau, \) i.e., the Lebesgue measures of the respective sets.
As in [24], we assume the following characteristics of the scattering responses:
H ij (τ,t)≠0 only if \((\tau,t) \in \Psi _{R_{ij}}\times \Psi _{T_{ij}}\).
\(\int ||H_{ij}(\tau,t)||dt \neq 0\ \forall \ \tau \in \Psi _{R_{ij}}\).
\(\int ||H_{ij}(\tau,t)||d\tau \neq 0\ \forall \ t \in \Psi _{T_{ij}}\).
The point spectrum of H ij (·,·), excluding 0, is infinite.
H ij (·,·) is square integrable, that is \(\quad \int _{-1}^{1} \int _{-1}^{1} |H_{ij}(\tau,t)|^{2} \,d\tau \,dt < \infty.\)
The first condition means that the scattering response is zero unless the angle of arrival and angle of departure both lie within their respective scattering intervals. The second condition means that in any direction of departure, \(t \in \Psi _{T_{ij}}\), there exists at least one path from transmitter T j to receiver R i . Similarly, the third condition implies that in any direction of arrival, \(\tau \in \Psi _{R_{ij}}\), there exists at least one path from T j to R i . The fourth condition means that there are many paths from the transmitter to the receiver within the scattering intervals, so that the number of propagation paths that can be resolved within the scattering intervals is limited by the length of the arrays and not by the number of paths. The final condition aids our analysis by ensuring the corresponding integral operator is compact, but is also a physically justified assumption since one could argue for the stricter assumption \(\int _{-1}^{1} \int _{-1}^{1} |H_{ij}(\tau,t)|^{2} \,d\tau \,dt \leq 1\), since no more energy can be scattered than is transmitted.
Hilbert space of wave-vectors
We can now write the original input-output relation given in (2) and (3) as
$$\begin{array}{*{20}l} y_{1}(q) &= \int_{\! \Psi_{R_{11}}} {A}_{R_{1}}(q,\tau) \int_{ \Psi_{T_{11}}} H_{11}(\tau,t) \int_{ -{L}_{T_{1}}}^{ {L}_{T_{1}}} {A}_{T_{1}}(t,p) x_{1}(p)\, d\tau\, dt\, dp \\ &\quad+ \int_{ \Psi_{R_{12}}} {A}_{R_{1}}(q,\tau) \int_{\! \Psi_{T_{12}}} H_{12}(\tau,t) \int_{ -{L}_{T_{2}}}^{ {L}_{T_{2}}} {A}_{T_{2}}(t,p) x_{2}(p)\, d\tau\, dt\, dp\\ &\quad+ z_{1}(q), \end{array} $$
$$\begin{array}{*{20}l} y_{2}(q) & = \int_{\Psi_{R_{22}}} {A}_{R_{2}}(q,\tau) \int_{\Psi_{T_{22}}} H_{22}(\tau,t) \int_{-{L}_{T_{2}}}^{{L}_{T_{2}}} {A}_{T_{2}}(t,p) x_{2}(p)\, d\tau\, dt\, dp\\ &\quad+ z_{2}(q). \end{array} $$
The channel model of (7) and (8) is expressed in the array domain, that is, the transmit and receive signals are expressed as the current distributions excited along the array. Just as one can simplify a signal processing problem by leveraging the Fourier integral to transform from the time domain to the frequency domain, we can leverage the transmit and receive array responses to transform the problem from the array domain to the wave-vector domain. In other words, we can express the transmit and receive signals as field distributions over direction rather than current distributions over position along the array. In fact, for our case of the unipolarized linear array, the transmit and receive array responses are the Fourier and inverse-Fourier integral kernels, respectively.
Let \(\mathcal {T}_{j}\) be the space of all field distributions that transmitter T j 's array of length \({L}_{T_{j}}\) can radiate towards the available scattering clusters, \(\Psi _{T_{jj}}\cup \Psi _{T_{ij}}\) (both signal-of-interest and self-interference). In the vernacular of [24], \(\mathcal {T}_{j}\) is the space of field distributions array-limited to \({L}_{T_{j}}\) and wavevector-limited to \(\Psi _{T_{jj}}\cup \Psi _{T_{ij}}\). To be precise, define \(\mathcal {T}_{j}\) to be the Hilbert space of all square-integrable functions \(X_{j}:\Psi _{T_{jj}}\cup \Psi _{T_{ij}}\rightarrow \mathbb {C}\), that can be expressed as
$$X_{j}(t) = \int_{-{L}_{T_{j}}}^{{L}_{T_{j}}} {A}_{T_{j}}(t,p) x_{j}(p)\, dp,\ \quad t\in \Psi_{T_{jj}}\cup\Psi_{T_{ij}} $$
for some \(x_{j}(p),\ p\in [-{L}_{T_{j}},{L}_{T_{j}}]\phantom {\dot {i}\!}\). The inner product defined for this Hilbert Space between two member functions, \(\phantom {\dot {i}\!}U_{j},V_{j}\in \mathcal {T}_{j}\), is the usual inner product: \( \langle U_{j},V_{j}\rangle = \int _{\Psi _{T_{jj}}\cup \Psi _{T_{ij}}} U_{j}(t)V_{j}^{*}(t)\, dt. \) Likewise, let \(\mathcal {R}_{i}\) be the space of field distributions that can be incident on receiver R i from the available scattering clusters, \(\phantom {\dot {i}\!}\Psi _{R_{ii}}\cup \Psi _{R_{ij}}\), and resolved by an array of length \({L}_{R_{i}}\). More precisely, \(\mathcal {R}_{i}\) is the Hilbert space of all square-integrable functions \(Y_{i}:\Psi _{R_{ii}}\cup \Psi _{R_{ij}} \rightarrow \mathbb {C}\), that can be expressed as
$$Y_{i}(\tau) = \int_{-{L}_{R_{i}}}^{{L}_{R_{i}}} {A}_{R_{i}}^{*}(q,\tau) y_{i}(q)\, dq,\ \quad \tau\in \Psi_{R_{ii}}\cup\Psi_{R_{ij}} $$
for some \(y_{i}(q),\ q\in [-{L}_{R_{i}},{L}_{R_{i}}]\phantom {\dot {i}\!}\), with the same inner product. From [24], we know that the dimensions of these array-limited and wavevector-limited transmit and receive spaces are, respectively,
$$\begin{array}{*{20}l} \text{dim}\,\mathcal{T}_{j} &= 2{L}_{T_{j}} |\Psi_{T_{jj}}\cup\Psi_{T_{ij}}|\text{, and} \end{array} $$
$$\begin{array}{*{20}l} \text{dim}\,\mathcal{R}_{i} &= 2{L}_{R_{i}} |\Psi_{R_{ii}}\cup\Psi_{R_{ij}}|. \end{array} $$
We can define the scattering integrals in (7) and (8) as operators mapping from one Hilbert space to another. Define the operator \(\mathsf {H}_{ij}:\mathcal {T}_{j}\rightarrow \mathcal {R}_{i}\) by
$$\begin{array}{*{20}l} (\mathsf{H}_{ij}X_{j})(\tau) = \int_{\Psi_{T_{ij}} \cup \Psi_{T_{jj}}} H_{ij}(\tau,t) X_{j}(t)\,dt,\ \tau \in \Psi_{R_{ij}}\cup \Psi_{R_{ii}}. \end{array} $$
We can now write the channel model of (7) and (8) in the wave-vector domain as
$$\begin{array}{*{20}l} Y_{1} &= \mathsf{H}_{11}X_{1} + \mathsf{H}_{12}X_{2} + Z_{1}, \end{array} $$
$$\begin{array}{*{20}l} Y_{2} &= \mathsf{H}_{22}X_{2} + Z_{2}, \end{array} $$
where \(X_{j} \in \mathcal {T}_{j}\), for j=1,2 and \(Y_{i}, Z_{i} \in \mathcal {R}_{i}\) for i=1,2. The following lemma states key properties of the scattering operators in (12–13) that we will leverage in our analysis.
Lemma 1
The scattering operators H ij , (i,j)∈{(1,1),(2,2),(1,2)} have the following properties:
The scattering operator, \(\mathsf {H}_{ij}: \mathcal {T}_{j}\rightarrow \mathcal {R}_{i}\), is a compact operator.
The dimension of the range of the scattering operator, dim R(H ij )≡dim N(H ij )⊥, (i.e., the dimension of the space orthogonal to the operator's nullspace) is given by \(\phantom {\dot {i}\!}\text {dim}\,R(\mathsf {H}_{ij}) = 2\,\text {min}\{{L}_{T_{j}} |\Psi _{T_{ij}}|, {L}_{R_{i}} |\Psi _{R_{ij}}| \}. \)
There exists a singular system \(\left \{\sigma _{ij}^{(k)}, U_{ij}^{(k)}, V_{ij}^{(k)}\right \}_{k=1}^{\infty }\) for scattering operator H ij , where the singular value \(\sigma _{ij}^{(k)}\) is nonzero if and only if \(k\leq 2\,\text {min}\{{L}_{T_{j}} |\Psi _{T_{ij}}|, {L}_{R_{i}} |\Psi _{R_{ij}}| \}\).
Property 1 holds because H ij (·,·), the kernel of integral operator H ij , is square integrable, and an integral operator with a square integrable kernel is compact (see Theorem 8.8 of [32]). Property 2 is just a restatement of the main result of [24]. Property 3 follows from the first two properties: The compactness of H ij , established in Property 1, implies the existence of a singular system, since there exists a singular system for any compact operator (see Section 16.1 of [32]). Property 2 implies that only the first \(2\,\text {min}\{{L}_{T_{j}} |\Psi _{T_{ij}}|, {L}_{R_{i}} |\Psi _{R_{ij}}| \}\) of the singular values will be nonzero, since the \(\left \{U_{ij}^{(k)}\right \}\) corresponding to nonzero singular values form a basis for R(H ij ), which has dimension \(2\,\text {min}\{{L}_{T_{j}} |\Psi _{T_{ij}}|, {L}_{R_{i}} |\Psi _{R_{ij}}| \}\). See Lemma 5 in Appendix B for a description of the properties of singular systems for compact operators, or see Section 2.2 of [33] or Section 16.1 of [32] for a thorough treatment. □
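Properties 2 and 3 can also be checked numerically by discretizing a composed channel kernel C(q,p) and counting its significant singular values. The sketch below uses a synthetic rich-scattering kernel and arbitrary intervals; at finite array lengths the count is not exact, but it concentrates around the predicted dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

L_T, L_R = 4.0, 4.0
Psi_T = (-0.5, 0.5)                          # |Psi_T| = 1.0
Psi_R = (0.0, 1.0)                           # |Psi_R| = 1.0

p = np.linspace(-L_T, L_T, 321)
q = np.linspace(-L_R, L_R, 321)
t = np.linspace(*Psi_T, 161);    dt = t[1] - t[0]
tau = np.linspace(*Psi_R, 161);  dtau = tau[1] - tau[0]

A_T = np.exp(-1j * 2 * np.pi * np.outer(t, p))        # Tx array response on Psi_T
A_R = np.exp(1j * 2 * np.pi * np.outer(q, tau))       # Rx array response on Psi_R
H = (rng.standard_normal((tau.size, t.size))
     + 1j * rng.standard_normal((tau.size, t.size)))  # rich scattering on Psi_R x Psi_T

# Composed channel kernel C(q, p) = integral of A_R(q, tau) H(tau, t) A_T(t, p) dt dtau
C = (A_R * dtau) @ H @ (A_T * dt)

s = np.linalg.svd(C, compute_uv=False)
print(np.sum(s > 1e-2 * s[0]))
# close to 2 * min(L_T * |Psi_T|, L_R * |Psi_R|) = 8, up to a small transition band
```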
Spatial degrees-of-freedom analysis
We now give the main result of the paper: a characterization of the spatial degrees-of-freedom region for the PBT channel model applied to a full-duplex base station with uplink and downlink flows.
Theorem 1
Let d 1 and d 2, respectively, denote the spatial degrees of freedom of the uplink data flow from T 1 to R 1, and the downlink data flow from T 2 to R 2. The spatial degrees-of-freedom region, \(\mathcal {D}_{\mathsf {FD}}\), of the three-node full-duplex channel is the convex hull of all spatial degrees-of-freedom tuples, (d 1,d 2), satisfying
$$\begin{array}{*{20}l} d_{1} \leq &\ d_{1}^{\mathsf{max}} = 2\,\text{min} \left({L}_{T_{1}} |\Psi_{T_{11}}|, {L}_{R_{1}} |\Psi_{R_{11}}| \right), \end{array} $$
$$\begin{array}{*{20}l} d_{2} \leq &\ d_{2}^{\mathsf{max}} = 2\,\text{min} \left({L}_{T_{2}} |\Psi_{T_{22}}|, {L}_{R_{2}} |\Psi_{R_{22}}| \right), \end{array} $$
$$\begin{array}{*{20}l} {d_{1} + d_{2}} \leq &\ d_{\mathsf{sum}}^{\mathsf{max}} = 2{L}_{T_{2}} |\Psi_{T_{22}} \!\setminus \!\Psi_{T_{12}}| + 2{L}_{R_{1}} |\Psi_{R_{11}} \setminus \Psi_{R_{12}}|\\ &+2\,\text{max} ({L}_{T_{2}} |\Psi_{T_{12}}|, {L}_{R_{1}} |\Psi_{R_{12}}|). \end{array} $$
The degrees-of-freedom region characterized by Theorem 1, \(\mathcal {D}_{\mathsf {FD}}\), is the pentagon-shaped region shown in Fig. 4. Figure 5 a shows a geometric interpretation of the parameters in Theorem 1. The achievability part of Theorem 1 is given in Section 3.1, and the converse is given in Section 3.2.
Fig. 4 Degrees-of-freedom region, \(\mathcal {D}_{\mathsf {FD}}\)
Fig. 5 Diagrams illustrating the geometrical interpretation of the degrees-of-freedom region \(\mathcal {D}_{\mathsf {FD}}\) and the achievability strategy. a Array lengths and scattering intervals. b Cartoon illustrating achievability of the corner point \(\left (d_{1}, d_{2}\right) = \left (d_{1}^{\mathsf {max}}, d_{\mathsf {sum}}^{\mathsf {max}} - d_{1}^{\mathsf {max}}\right)\)
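For a feel of how the geometry drives the region, the following sketch evaluates the bounds of Theorem 1 on a fine grid of [-1, 1]. All array lengths and scattering intervals are hypothetical, chosen so that the sum bound is active and the region is a nontrivial pentagon.

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 20001)
dt = t[1] - t[0]

def mask(intervals):
    """Boolean indicator of a union of sub-intervals of [-1, 1]."""
    m = np.zeros_like(t, dtype=bool)
    for a, b in intervals:
        m |= (t >= a) & (t <= b)
    return m

def measure(m):
    return float(m.sum()) * dt

# Hypothetical effective scattering intervals (cosine domain)
L_T1, L_R1, L_T2, L_R2 = 4.0, 4.0, 4.0, 4.0
Psi_T11 = mask([(-0.6, -0.1)]);  Psi_R11 = mask([(0.2, 0.7)])   # uplink
Psi_T22 = mask([(-0.2, 0.5)]);   Psi_R22 = mask([(-0.5, 0.2)])  # downlink
Psi_T12 = mask([(0.0, 0.5)]);    Psi_R12 = mask([(0.3, 0.7)])   # self-interference

d1_max = 2 * min(L_T1 * measure(Psi_T11), L_R1 * measure(Psi_R11))
d2_max = 2 * min(L_T2 * measure(Psi_T22), L_R2 * measure(Psi_R22))
d_sum_max = (2 * L_T2 * measure(Psi_T22 & ~Psi_T12)
             + 2 * L_R1 * measure(Psi_R11 & ~Psi_R12)
             + 2 * max(L_T2 * measure(Psi_T12), L_R1 * measure(Psi_R12)))

print(round(d1_max, 2), round(d2_max, 2), round(d_sum_max, 2))   # ~ 4.0 5.6 6.4
# Corner points of the pentagon: (d1_max, min(d2_max, d_sum_max - d1_max)) = (4.0, 2.4)
# and (min(d1_max, d_sum_max - d2_max), d2_max) = (0.8, 5.6)
```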
Achievability
Overview of achievability proof: Before launching into the full proof, we would like to give a brief sketch of the achievability of corner point (d1′,d2′) of the degrees-of-freedom region \(\mathcal {D}_{\mathsf {FD}}\) shown in Fig. 4. The steps for achieving the degrees-of-freedom tuple \((d_{1}', d_{2}') = (d_{1}^{\mathsf {max}}, d_{\mathsf {sum}}^{\mathsf {max}} - d_{1}^{\mathsf {max}})\) are illustrated in Fig. 5 b.
First, the uplink transmitter, T 1, transmits the maximum number of data streams that the uplink channel will support, \(d_{1}^{\mathsf {max}} = 2\,\text {min} ({L}_{T_{1}} |\Psi _{T_{11}}|, {L}_{R_{1}} |\Psi _{R_{11}}|)\) (illustrated by blue arrows in Fig. 5 b). The base station downlink transmitter, T 2, must then structure its transmit signal such that it does not interfere with the base station receiver's reception of these \(d_{1}^{\mathsf {max}}\) data streams, as is described in the following steps.
Second, the base station transmitter, T 2, transmits as many data streams as can be supported in the interval \(\Psi _{T_{22}} \setminus \Psi _{T_{12}}\) (illustrated by red arrows in Fig. 5 b), which is the interval over which the signal will couple to the downlink user R 2, but will not present any self-interference to the base station's own receiver R 1.
Third, the base station transmits as many data streams as possible in the interval \(\Psi _{T_{22}} \cap \Psi _{T_{12}}\) while ensuring that the self-interference is only incident on the base station receiver, R 1, over the interval \(\Psi _{R_{12}} \setminus \Psi _{R_{11}}\) (illustrated by green arrows in Fig. 5 b), which is the interval over which no uplink signal from T 1 will be incident on receiver R 1. This step occupies the majority of the proof.
The final step in the achievability proof is to show that, if the transmission strategies described in steps 1–3 are employed, the receivers, R 1 and R 2, can successfully recover the d 1- and d 2-dimensional data streams, respectively.
Full achievability proof: We establish achievability of \(\mathcal {D}_{\mathsf {FD}}\) by way of two lemmas. The first lemma shows the achievability of two specific spatial degrees-of-freedom tuples, and the second shows that these tuples are indeed the corner points of \(\mathcal {D}_{\mathsf {FD}}\).
Lemma 2
The spatial degree-of-freedom tuples (d1′,d2′) and (d1″,d2″), given below, are achievable:
$$\begin{array}{@{}rcl@{}} d_{1}^{\prime} =\text{min}\left\{2{L}_{T_{1}} |\Psi_{T_{11}}|, 2{L}_{R_{1}} |\Psi_{R_{11}}|\right\}, \end{array} $$
$$\begin{array}{@{}rcl@{}} d_{2}^{\prime} =\left\{\begin{array}{l} \text{min}\left\{d_{T_{2}},2{L}_{R_{2}} |\Psi_{R_{22}}|\right\}, {L}_{T_{1}} |\Psi_{T_{11}}| \geq {L}_{R_{1}} |\Psi_{R_{11}}|\\ \text{min}\left\{\delta_{T_{2}},2{L}_{R_{2}} |\Psi_{R_{22}}|\right\}, \text{otherwise } \end{array}, \right. \end{array} $$
$$\begin{array}{@{}rcl@{}} d_{1}^{\prime\prime} =\left\{\begin{array}{l}\text{min}\left\{2{L}_{T_{1}} |\Psi_{T_{11}}|, d_{R_{1}}\right\}, {L}_{R_{2}} |\Psi_{R_{22}}| \geq {L}_{T_{2}} |\Psi_{T_{22}}|\\ \text{min}\left\{2{L}_{T_{1}} |\Psi_{T_{11}}|, \delta_{R_{1}} \right\}, \text{otherwise} \end{array}, \right. \end{array} $$
$$\begin{array}{@{}rcl@{}} d_{2}^{\prime\prime} = \text{min}\left\{2{L}_{T_{2}} |\Psi_{T_{22}}|, 2{L}_{R_{2}} |\Psi_{R_{22}}|\right\}, \end{array} $$
where the terms \(d_{T_{2}}\), \(\delta _{T_{2}}\), \(d_{R_{1}}\), and \(\delta _{R_{1}}\) are given in (17–20) within the table at the bottom of the page.
Due to the symmetry of the problem, it suffices to demonstrate achievability of only the first spatial degree-of-freedom pair in Lemma 2, (d1′,d2′), as the second pair, (d1″,d2″), follows from symmetry. Thus we seek to prove the achievability of the tuple (d1′,d2′) given in (17-18). We will show achievability of (d1′,d2′) in the case where \({L}_{T_{1}} |\Psi _{T_{11}}| \geq {L}_{R_{1}} |\Psi _{R_{11}}|\), for which
$$\begin{array}{*{20}l} d_{1}' &=2{L}_{R_{1}} |\Psi_{R_{11}}|, \end{array} $$
$$\begin{array}{*{20}l} d_{2}' &=\text{min}\left\{d_{T_{2}},2{L}_{R_{2}} |\Psi_{R_{22}}|\right\}, \end{array} $$
$$ \begin{aligned} d_{T_{2}} &= 2{L}_{T_{2}} |\Psi_{T_{22}} \setminus \Psi_{T_{12}}|\\ &\quad+ \text{min} \left\{ \begin{array}{c} 2{L}_{T_{2}} |\Psi_{T_{22}} \cap \Psi_{T_{12}}|, \\ 2({L}_{T_{2}} |\Psi_{T_{12}}| - {L}_{R_{1}} |\Psi_{R_{12}}|)^{+} + 2{L}_{R_{1}} |\Psi_{R_{12}} \setminus \Psi_{R_{11}}| \end{array} \right\}. \end{aligned} $$
Achievability of (d1′,d2′) in the \({L}_{T_{1}} |\Psi _{T_{11}}| < {L}_{R_{1}} |\Psi _{R_{11}}|\) case is analogous.
We now begin the steps to show achievability of (25) and (26). □
Defining key subspaces
We first define key subspaces of the transmit and receive wave-vector spaces (\(\mathcal {T}_{1}\), \(\mathcal {T}_{2}\), \(\mathcal {R}_{1}\), and \(\mathcal {R}_{2}\)) that will be crucial in demonstrating achievability.
Subspaces of \(\boldsymbol{\mathcal {T}}_{2}\): Recall that \(\mathcal {T}_{2}\) is the space of all field distributions that can be radiated by the base station transmitter, T 2, in the direction of the scatterer intervals, \(\Psi _{T_{22}} \cup \Psi _{T_{12}}\), (both signal-of-interest and self-interference). Let \(\mathcal {T}_{22\setminus 12} \subseteq \mathcal {T}_{2}\) be the subspace of field distributions that can be transmitted by T 2, which are nonzero only in the interval \(\Psi _{T_{22}}\setminus \Psi _{T_{12}}\),
$$ \mathcal{T}_{22\setminus12} \equiv \text{span}\left\{ X_{2} \in \mathcal{T}_{2}: X_{2}(t) = 0\ \forall\ t \in \Psi_{T_{12}} \right\}. $$
More intuitively, \(\mathcal {T}_{22\setminus 12}\) is the space of transmissions from the base station which couple only to the intended downlink user, and do not couple back to the base station receiver as self-interference. Similarly let \(\mathcal {T}_{12} \subseteq \mathcal {T}_{2}\) be the subspace of functions that are only nonzero in the interval \(\Psi _{T_{12}}\),
$$ \mathcal{T}_{12} \equiv \text{span} \left\{ X_{2} \in \mathcal{T}_{2}: X_{2}(t) = 0\ \forall\ t \notin \Psi_{T_{12}} \right\}, $$
that is, the space of base station transmissions which do couple to the base station receiver as self-interference. Finally, let \(\mathcal {T}_{22\cap 12}\subseteq \mathcal {T}_{12} \subseteq \mathcal {T}_{2}\) be the subspace of field distributions that are nonzero only in the interval \(\Psi _{T_{22}}\cap \Psi _{T_{12}}\),
$$\begin{array}{*{20}l} \mathcal{T}_{22\cap12} &\equiv \text{span} \{ X_{2} \in \mathcal{T}_{2}: X_{2}(t) = 0\ \forall\ t \notin \Psi_{T_{22}} \cap \Psi_{T_{12}} \}, \end{array} $$
the space of base station transmissions which couple both to the downlink user and to the base station receiver. From the result of [24], we know that the dimension of each of these transmit subspaces of \(\mathcal {T}_{2}\) is as follows:
$$\begin{array}{*{20}l} \text{dim}\,\mathcal{T}_{12} &= 2{L}_{T_{2}} |\Psi_{T_{12}}|, \end{array} $$
$$\begin{array}{*{20}l} \text{dim}\,\mathcal{T}_{22\setminus12} &= 2{L}_{T_{2}} |\Psi_{T_{22}} \setminus \Psi_{T_{12}}|, \end{array} $$
$$\begin{array}{*{20}l} \text{dim}\,\mathcal{T}_{22\cap12} &= 2{L}_{T_{2}} |\Psi_{T_{22}} \cap \Psi_{T_{12}}|. \end{array} $$
We say that Hilbert space \(\mathcal {A}\) is the orthogonal direct sum of Hilbert spaces \(\mathcal {B}\) and \(\mathcal {C}\) if any \(a\in \mathcal {A}\) can be decomposed as a=b+c, for some \(b\in \mathcal {B}\) and \(c \in \mathcal {C}\), where b and c are orthogonal. We use the notation \(\mathcal {A} = \mathcal {B} \oplus \mathcal {C}\) to denote that \(\mathcal {A}\) is the orthogonal direct sum of \(\mathcal {B}\) and \(\mathcal {C}\).
One can check that \(\mathcal {T}_{12}\) and \(\mathcal {T}_{22\setminus 12}\) are constructed such that they form an orthogonal direct sum for space \(\mathcal {T}_{2}\):
$$ \mathcal{T}_{2} = \mathcal{T}_{12} \oplus \mathcal{T}_{22\setminus12}, $$
thus any \(X_{2} \in \mathcal {T}_{2}\) can be written as \(\phantom {\dot {i}\!}X_{2} = X_{2_{\mathsf {Orth}}} + X_{2_{\mathsf {Int}}}\), for some \(\phantom {\dot {i}\!}X_{2_{\mathsf {Orth}}}\in \mathcal {T}_{22\setminus 12}\) and \(\phantom {\dot {i}\!}X_{2_{\mathsf {Int}}}\in \mathcal {T}_{12}\), such that \(\phantom {\dot {i}\!}X_{2_{\mathsf {Orth}}} \perp X_{2_{\mathsf {Int}}}.\) By the construction of \(\mathcal {T}_{22\setminus 12}\), \(\phantom {\dot {i}\!}\mathsf {H}_{12}X_{2_{\mathsf {Orth}}} = 0\), since \(H_{12}(\tau,t) = 0\ \forall \, t\notin \Psi _{T_{12}}\phantom {\dot {i}\!}\) and \(\phantom {\dot {i}\!}X_{2_{\mathsf {Orth}}}\in \mathcal {T}_{22\setminus 12}\) implies \(\phantom {\dot {i}\!}X_{2_{\mathsf {Orth}}}(t) = 0\ \forall \ t \in \Psi _{T_{12}}\). In other words, \(\phantom {\dot {i}\!}X_{2_{\mathsf {Orth}}}\in \mathcal {T}_{22\setminus 12}\) is zero everywhere the integral kernel H 12(τ,t) is nonzero. Thus, any transmitted field distribution that lies in the subspace \(\mathcal {T}_{22\setminus 12}\) will not present any self-interference to R 1.
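This support-based zero-forcing is easy to verify numerically. Below is a toy check (grid, kernel, and intervals chosen arbitrarily) that a transmit pattern supported only on \(\Psi_{T_{22}}\setminus\Psi_{T_{12}}\) produces no self-interference through a kernel supported on \(\Psi_{T_{12}}\).

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.linspace(-1.0, 1.0, 2001)
dt = t[1] - t[0]
in_T12 = (t >= 0.0) & (t <= 0.5)         # hypothetical Psi_T12 (directions that backscatter to R_1)
in_T22 = (t >= -0.2) & (t <= 0.5)        # hypothetical Psi_T22 (directions that reach R_2)

# Self-interference kernel H_12(tau, t): nonzero only for t in Psi_T12
tau = np.linspace(0.3, 0.7, 201)
H12 = rng.standard_normal((tau.size, t.size)) * in_T12

# A transmit field pattern X_2Orth supported only on Psi_T22 \ Psi_T12
X2_orth = rng.standard_normal(t.size) * (in_T22 & ~in_T12)

self_interference = (H12 * dt) @ X2_orth
print(np.max(np.abs(self_interference)))  # 0.0: none of X_2Orth reaches R_1
```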
Subspaces of \(\boldsymbol{\mathcal {T}}_{1}\): Recall that \(\mathcal {T}_{1}\) is the space of all field distributions that can be radiated by the uplink user transmitter, T 1, towards the available scatterers. Let \(\mathcal {T}_{11} \subseteq \mathcal {T}_{1}\) be the subspace of field distributions that can be transmitted by T 1's continuous linear array of length \({L}_{T_{1}}\) which are nonzero only in the interval \(\Psi _{T_{11}}\), more precisely
$$ \mathcal{T}_{11} \equiv \text{span} \{ X_{1} \in \mathcal{T}_{1}: X_{1}(t) = 0\ \forall\ t \notin \Psi_{T_{11}} \}. $$
More intuitively, \(\mathcal {T}_{11}\) is the space of transmissions from the uplink user which will couple to the base station receiver. From the result of [24], we know that
$$\begin{array}{*{20}l} \text{dim}\,\mathcal{T}_{11} = 2{L}_{T_{1}} |\Psi_{T_{11}}|. \end{array} $$
Note that \(\mathcal {T}_{11}=\mathcal {T}_{1}\), since we have assumed \(\Psi _{T_{21}} = \emptyset \). Although \(\mathcal {T}_{11}\) is thus redundant, we define it for notational consistency.
Subspaces of \(\boldsymbol{\mathcal {R}}_{1}\): Recall that \(\mathcal {R}_{1}\) is the space of all incident field distributions that can be resolved by the base station receiver, R 1. Let \(\mathcal {R}_{12} \subseteq \mathcal {R}_{1}\) be the subspace of received field distributions which are nonzero only for \(\tau \in \Psi _{R_{12}}\), that is
$$ \mathcal{R}_{12} \equiv \text{span} \left\{ Y_{1} \in \mathcal{R}_{1}: Y_{1}(\tau) = 0\ \forall\ \tau \notin \Psi_{R_{12}} \right\}. $$
Less formally, \(\mathcal {R}_{12}\) is the space of receptions at the base station which could have emanated from the base station's own transmitter. Similarly, let \(\mathcal {R}_{12\setminus 11}\subseteq \mathcal {R}_{12} \subseteq \mathcal {R}_{1}\) be the subspace of received field distributions that are only nonzero for \(\tau \in \Psi _{R_{12}} \setminus \Psi _{R_{11}}\),
$$ \mathcal{R}_{12\setminus11} \equiv \text{span} \left\{ Y_{1} \in \mathcal{R}_{1}: Y_{1}(\tau) = 0\ \forall\ \tau \in \Psi_{R_{11}} \right\}. $$
Less formally, \(\mathcal {R}_{12\setminus 11}\) is the space of receptions at the base station which could have emanated from the base station transmitter, but could not have emanated from the uplink user. Finally, define \(\mathcal {R}_{11} \subseteq \mathcal {R}_{1}\) to be the subspace of received field distributions that are nonzero only for \(\tau \in \Psi _{R_{11}}\),
$$ \mathcal{R}_{11} \equiv \text{span} \left\{ Y_{1} \in \mathcal{R}_{1}: Y_{1}(\tau) = 0\ \forall\ \tau \notin \Psi_{R_{11}} \right\}, $$
the space of base station receptions which could have emanated from the intended uplink user. Note that \( \mathcal {R}_{1} = \mathcal {R}_{11} \oplus \mathcal {R}_{12\setminus 11}. \) From the result of [24], we know the dimension of each of the above base-station receive subspaces is as follows:
$$\begin{array}{*{20}l} \text{dim}\, \mathcal{R}_{11} &= 2{L}_{R_{1}} |\Psi_{R_{11}}|, \end{array} $$
$$\begin{array}{*{20}l} \text{dim}\, \mathcal{R}_{12\setminus11} &= 2{L}_{R_{1}} |\Psi_{R_{12}} \setminus \Psi_{R_{11}}|, \end{array} $$
$$\begin{array}{*{20}l} \text{dim}\, \mathcal{R}_{12} &= 2{L}_{R_{1}} |\Psi_{R_{12}}|. \end{array} $$
Subspaces of \(\boldsymbol{\mathcal {R}}_{2}\): Recall that \(\mathcal {R}_{2}\) is the space of all incident field distributions that can be resolved by the downlink user receiver, R 2. Let \(\mathcal {R}_{22} \subseteq \mathcal {R}_{2}\) be the subspace of received field distributions which are nonzero only for \(\tau \in \Psi _{R_{22}}\), that is
$$ \mathcal{R}_{22} \equiv \text{span} \left\{ Y_{2} \in \mathcal{R}_{2}: Y_{2}(\tau) = 0\ \forall\ \tau \notin \Psi_{R_{22}} \right\}. $$
Note that \(\mathcal {R}_{22}=\mathcal {R}_{2}\), since we have assumed \(\Psi _{R_{21}} = \emptyset \). Although \(\mathcal {R}_{22}\) is thus redundant, we define it for notational consistency. By substituting the subspace dimensions given above into (25) and (27), we can restate the degree-of-freedom pair whose achievability we are establishing as
$$\begin{array}{*{20}l} \left(d_{1}',d_{2}' \right) = \big(\text{dim}\, \mathcal{R}_{11},\ \text{min}\left\{d_{T_{2}},\text{dim} \mathcal{R}_{22} \right\}\big), \text{ where } \end{array} $$
$$\begin{array}{*{20}l} d_{T_{2}} &= \text{dim} \mathcal{T}_{22\setminus12}\\ &\quad+ \text{min} \left\{ \begin{array}{c} \text{dim} \mathcal{T}_{22\cap12}, \\ \left(\text{dim} \mathcal{T}_{12} - \text{dim} \mathcal{R}_{12} \right)^{+} + \text{dim} \mathcal{R}_{12\setminus11} \end{array} \right\}. \end{array} $$
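The expression for \(d_{T_{2}}\) above is pure bookkeeping over subspace dimensions. Below is a small sketch of that bookkeeping, using illustrative dimensions consistent with the hypothetical geometry of the earlier sketch; the numbers are assumptions for the example, not results from [24].

```python
# Illustrative subspace dimensions (each of the form 2 * L * |interval|)
dim_T22_minus_12 = 1.6   # dim T_{22\12}: couples only to the downlink user R_2
dim_T22_cap_12 = 4.0     # dim T_{22 cap 12}: couples to both R_2 and R_1
dim_T12 = 4.0            # dim T_{12}: all directions that backscatter to R_1
dim_R11 = 4.0            # dim R_{11}: arrivals at R_1 possible from the uplink user
dim_R12 = 3.2            # dim R_{12}: arrivals at R_1 possible as self-interference
dim_R12_minus_11 = 0.0   # dim R_{12\11}: self-interference-only arrival directions
dim_R22 = 5.6            # dim R_{22}: arrivals at the downlink user R_2

d1 = dim_R11             # uplink streams (the L_T1|Psi_T11| >= L_R1|Psi_R11| case)
d_T2 = dim_T22_minus_12 + min(
    dim_T22_cap_12,
    max(dim_T12 - dim_R12, 0.0) + dim_R12_minus_11,
)
d2 = min(d_T2, dim_R22)
print(d1, d_T2, d2)      # 4.0 2.4 2.4: (d1, d2) matches the corner (d1_max, d_sum_max - d1_max)
```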
Now that we have defined the relevant subspaces, we can show how these subspaces are leveraged in the transmission and reception scheme that achieves the spatial degrees-of-freedom tuple (d1′,d2′).
Spatial processing at each transmitter/receiver
We now give the transmission schemes at each transmitter and the recovery schemes at each receiver.
Processing at uplink user transmitter, T 1 : Recall that \(d_{1}' = \text {dim} \mathcal {R}_{11}\) is the number of spatial degrees-of-freedom we wish to achieve for Flow 1, the uplink flow. Let \(\left \{ \chi _{1}^{(k)} \right \}_{k=1}^{d_{1}'},\ \chi _{1}^{(k)} \in \mathbb {C},\) be the d1′ symbols that T 1 wishes to transmit to R 1. We know from Lemma 1 that there exists a singular value expansion for H 11, so let \(\left \{\sigma _{11}^{(k)}, U_{11}^{(k)}, V_{11}^{(k)}\right \}_{k=1}^{\infty }\) be a singular system for the operator \(\mathsf {H}_{11}: \mathcal {T}_{1}\rightarrow \mathcal {R}_{1}\) (see Lemma 5 in Appendix B for the definition of a singular system).
Note that the functions \( \left \{ V_{11}^{(k)} \right \}_{k=1}^{\text {dim} \mathcal {T}_{1}}\) form an orthonormal basis for \(\mathcal {T}_{1}\), and since \(d_{1}' = \text {dim} \mathcal {R}_{11} \leq \text {dim} \mathcal {T}_{1}\), there are at least as many such basis functions as there are symbols to transmit.
We construct X 1, the transmit wave-vector signal transmitted by T 1, as
$$ X_{1} = \sum_{k=1}^{d_{1}'} \chi_{1}^{(k)} V_{11}^{(k)}. $$
Processing at the base station transmitter, T 2 :
Recall that \(d_{2}' =\text {min}\left \{d_{T_{2}},2{L}_{R_{2}} |\Psi _{R_{22}}|\right \}\), where \(d_{T_{2}}\) is given in (27), is the number of spatial degrees-of-freedom we wish to achieve for Flow 2, the downlink flow. Let \(\left \{ \chi _{2}^{(k)} \right \}_{k=1}^{d_{2}'}\) be the d2′ symbols that T 2 wishes to transmit to R 2. We split the T 2 transmit signal into the sum of two orthogonal components, \(X_{2_{\mathsf {Orth}}} \in \mathcal {T}_{22\setminus 12} \) and \(X_{2_{\mathsf {Int}}} \in \mathcal {T}_{12}\), so that the wave-vector signal transmitted by T 2 is
$$\begin{array}{*{20}l} X_{2} = X_{2_{\mathsf{Orth}}} + X_{2_{\mathsf{Int}}}, \quad X_{2_{\mathsf{Orth}}} &\in \mathcal{T}_{22\setminus12}, \quad X_{2_{\mathsf{Int}}} \in \mathcal{T}_{12}. \end{array} $$
Recall that \(\phantom {\dot {i}\!}X_{2_{\mathsf {Orth}}}\in \mathcal {T}_{22\setminus 12}\) implies \(\mathsf {H}_{12}X_{2_{\mathsf {Orth}}} = 0\phantom {\dot {i}\!}\). Thus, we can construct \(\phantom {\dot {i}\!}X_{2_{\mathsf {Orth}}} \in \mathcal {T}_{22\setminus 12}\) without regard to the structure of H 12. Let \(\left \{ Q_{22\setminus 12}^{(i)} \right \}_{i=1}^{\text {dim} \mathcal {T}_{22\setminus 12}}\) be an arbitrary orthonormal basis for \(\mathcal {T}_{22\setminus 12}\), and let
$$ d_{2_{\mathsf{Orth}}}' \equiv \text{min}\left\{\text{dim} \mathcal{T}_{22\setminus12}, \text{dim} \mathcal{R}_{22} \right\}, $$
be the number of symbols that T 2 will transmit along \(X_{2_{\mathsf {Orth}}}\). We construct \(X_{2_{\mathsf {Orth}}}\) as
$$\begin{array}{*{20}l} X_{2_{\mathsf{Orth}}} = \sum\limits_{i=1}^{d_{2_{\mathsf{Orth}}}'}\chi_{2}^{(i)} Q_{22\setminus12}^{(i)}. \end{array} $$
Recall that there are d2′ total symbols that T 2 wishes to transmit, and we have transmitted \(d_{2_{\mathsf {Orth}}}'\) symbols along \(X_{2_{\mathsf {Orth}}}\); thus there are \(d_{2}' - d_{2_{\mathsf {Orth}}}'\) symbols remaining to transmit along \(X_{2_{\mathsf {Int}}}\). Let
$$\begin{array}{*{20}l} d_{2_{\mathsf{Int}}}' &\equiv d_{2}' - d_{2_{\mathsf{Orth}}}'\\ &=\text{min} \left\{ \begin{array}{c} \text{dim} \mathcal{T}_{22\cap12}, \\ \left(\text{dim} \mathcal{T}_{12} - \text{dim} \mathcal{R}_{12} \right)^{+} + \text{dim} \mathcal{R}_{12\setminus11},\\ \left(\text{dim} \mathcal{R}_{22} - \text{dim} \mathcal{T}_{22\setminus12}\right)^{+} \end{array} \right\}. \end{array} $$
Now, since \(\phantom {\dot {i}\!}X_{2_{\mathsf {Int}}}\in \mathcal {T}_{12}\), \(\phantom {\dot {i}\!}\mathsf {H}_{12}X_{2_{\mathsf {Int}}}\) is nonzero in general, and thus \(\phantom {\dot {i}\!}X_{2_{\mathsf {Int}}}\) will present interference to R 1. Therefore, we must construct \(\phantom {\dot {i}\!}X_{2_{\mathsf {Int}}}\) such that it communicates \(\phantom {\dot {i}\!}d_{2_{\mathsf {Int}}}'\) symbols to R 2, without impeding R 1 from recovering the d1′ symbols transmitted from T 1. Thus, the construction of \(\phantom {\dot {i}\!}X_{2_{\mathsf {Int}}}\in \mathcal {T}_{12}\) will indeed depend on the structure of H 12.
First consider the case where \(\text {dim} \mathcal {T}_{12} \leq \text {dim} \mathcal {R}_{12}\). In this case, Eq. (50), which gives the number of symbols that must be transmitted along \(X_{2_{\mathsf {Int}}}\), simplifies to \(d_{2_{\mathsf {Int}}}' = \text {min} \left \{\text {dim} \mathcal {T}_{22\cap 12}, \text {dim} \mathcal {R}_{12\setminus 11}, (\text {dim} \mathcal {R}_{22} - \text {dim} \mathcal {T}_{22\setminus 12})^{+}\right \}\). Let \(\left \{\sigma _{12}^{(k)}, U_{12}^{(k)}, V_{12}^{(k)}\right \}_{k=1}^{\infty }\) be a singular system for H 12. From Property 3 of Lemma 1, we know that \(\sigma _{12}^{(k)}\) is zero for \(k>\text {dim} \mathcal {T}_{12}\) and nonzero for \(k\leq \text {dim} \mathcal {T}_{12}\). Note that \( \left \{ V_{12}^{(k)} \right \}_{k=1}^{\text {dim} \mathcal {T}_{12}}\) is an orthonormal basis for \(\mathcal {T}_{12}\). In the case of \(\text {dim} \mathcal {T}_{12} \leq \text {dim} \mathcal {R}_{12}\) (the case for which we are constructing \(X_{2_{\mathsf {Int}}}\)), we have
$$\begin{array}{*{20}l} d_{2_{\mathsf{Int}}}' &= \text{min} \left\{{\vphantom{\left(a^{a}\right)^{1}}}\text{dim} \mathcal{T}_{22\cap12},\ \text{dim} \mathcal{R}_{12\setminus11}, \left(\text{dim} \mathcal{R}_{22}\right.\right.\\ &\qquad\quad\left.\left.- \text{dim} \mathcal{T}_{22\setminus12}\right)^{+} \right\}, \end{array} $$
$$\begin{array}{*{20}l} & \leq \text{dim} \mathcal{T}_{22\cap12} \ \leq \text{dim} \mathcal{T}_{12}, \end{array} $$
so that there are at least as many \(V_{12}^{(k)}\)'s as there are symbols to transmit along \(X_{2_{\mathsf {Int}}}\). We construct \(X_{2_{\mathsf {Int}}}\) as
$$ X_{2_{\mathsf{Int}}} = \sum_{k=1}^{d_{2_{\mathsf{Int}}}'} \chi_{2}^{\left(k+ d_{2_{\mathsf{Orth}}}'\right)}V_{12}^{(k)}. $$
Now we will consider the construction of \(X_{2_{\mathsf {Int}}}\) for the other case where \(\text {dim} \mathcal {T}_{12} > \text {dim} \mathcal {R}_{12}\). In the \(\text {dim} \mathcal {T}_{12} > \text {dim} \mathcal {R}_{12}\) case Eq. (50), which gives the number of symbols that must be transmitted along \(X_{2_{\mathsf {Int}}}\), simplifies to
$$d_{2_{\mathsf{Int}}}' = \text{min} \left\{ \begin{array}{c} \text{dim} \mathcal{T}_{22\cap12}, \\ \left(\text{dim} \mathcal{T}_{12} - \text{dim} \mathcal{R}_{12} \right) + \text{dim} \mathcal{R}_{12\setminus11},\\ \left(\text{dim} \mathcal{R}_{22} - \text{dim} \mathcal{T}_{22\setminus12}\right)^{+} \end{array} \right\}.$$
Note that the signal that R 1 receives from T 1 will lie only in \(\mathcal {R}_{11}\). Thus, if we can ensure that the signal from T 2 falls in the orthogonal space, \(\mathcal {R}_{12\setminus 11}\), then we have avoided interference. Let \(\mathsf {H}''_{12}:\mathcal {T}_{12}\rightarrow \mathcal {R}_{12}\) be the restriction of \(\mathsf {H}_{12}:\mathcal {T}_{2}\rightarrow \mathcal {R}_{1}\) to domain \(\mathcal {T}_{12}\) and codomain \(\mathcal {R}_{12}\). We consider the restriction, H12″, instead of H 12 so that the preimage under H12″ is a subset of \(\mathcal {T}_{12}\), ensuring that any functions within this preimage have not already been used in constructing \(X_{2_{\mathsf {Orth}}}\). We can characterize the requirement that Y 1(τ) be free of interference over \(\tau \in \Psi _{R_{11}}\) as \(\mathsf {H}^{\prime \prime }_{12}X_{2_{\mathsf {Int}}} \in \mathcal {R}_{12\setminus 11},\) or equivalently \(X_{2_{\mathsf {Int}}} \in \mathcal {P}_{12\setminus 11},\) where
$$ \mathcal{P}_{12\setminus11} \equiv {\mathsf{H}^{\prime\prime}_{12}}^{\leftarrow}(\mathcal{R}_{12\setminus11}) \subseteq \mathcal{T}_{12}, $$
is the preimage of \(\mathcal {R}_{12\setminus 11}\) under H12″. Thus, any function in \(\mathcal {P}_{12\setminus 11}\) can be used for signaling to R 2 without interfering with X 1 at R 1. The number of symbols that can be transmitted will thus depend on the dimension of this interference-free preimage. Corollary 1 in the appendix states that if \(\mathsf {C}:\mathcal {X}\rightarrow \mathcal {Y}\) is a linear operator with closed range, and \(\mathcal {S}\) is a subspace of the range of C, \(\mathcal {S} \subset R(\mathsf {C})\), then \(\text {dim} {\mathsf {C}}^{\leftarrow }(\mathcal {S}) = \text {dim} N(\mathsf {C}) + \text {dim}(\mathcal {S})\). Note that R(H12″) has finite dimension (namely \(2\,\text {min}\left \{{L}_{T_{2}} |\Psi _{T_{12}}|,{L}_{R_{1}}|\Psi _{R_{12}}| \right \}<\infty \)), and since any finite dimensional subspace of a normed space is closed, R(H12″) is closed. Further, note that since we are considering the case where \(\text {dim} \mathcal {T}_{12} > \text {dim} \mathcal {R}_{12}\), it is easy to see that \(R(\mathsf {H}^{\prime \prime }_{12}) = \mathcal {R}_{12}\), which implies \(\mathcal {R}_{12\setminus 11}\subseteq R(\mathsf {H}^{\prime \prime }_{12})\), since \(\mathcal {R}_{12\setminus 11}\subseteq \mathcal {R}_{12}\) by construction. Thus, the linear operator \(\mathsf {H}^{\prime \prime }_{12}:\mathcal {T}_{12}\rightarrow \mathcal {R}_{12}\) and the subspace \(\mathcal {R}_{12\setminus 11}\) satisfy the conditions on operator C and subspace \(\mathcal {S}\), respectively, in the hypothesis of Corollary 1. Thus, we can apply Corollary 1 to show that, when \(\text {dim} \mathcal {T}_{12} > \text {dim} \mathcal {R}_{12}\), the dimension of \(\mathcal {P}_{12\setminus 11}\) is given by
$$\begin{array}{*{20}l} \text{dim}\mathcal{P}_{12\setminus11} &= \text{dim} N(\mathsf{H}^{\prime\prime}_{12}) + \text{dim}\mathcal{R}_{12\setminus11} \\ &= (\text{dim} \mathcal{T}_{12} - \text{dim} \mathcal{R}_{12}) + \text{dim} \mathcal{R}_{12\setminus11} \\ & \geq \text{min} \left\{ \begin{array}{c} \text{dim} \mathcal{T}_{22\cap12}, \\ (\text{dim} \mathcal{T}_{12} - \text{dim} \mathcal{R}_{12}) + \text{dim} \mathcal{R}_{12\setminus11},\\ (\text{dim} \mathcal{R}_{22} - \text{dim} \mathcal{T}_{22\setminus12})^{+} \end{array} \right\}\\ &= d_{2_{\mathsf{Int}}}', \quad \text{dim} \mathcal{T}_{12} > \text{dim} \mathcal{R}_{12}. \end{array} $$
Therefore, the dimension of \(\mathcal {P}_{12\setminus 11}\), the preimage of \(\mathcal {R}_{12\setminus 11}\) under H12″, is indeed large enough to allow T 2 to transmit the remaining \(d_{2_{\mathsf {Int}}}'\) symbols along the basis functions of \(\mathcal {P}_{12\setminus 11}\). Let \(\ \left \{P_{12}^{(i)} \right \}_{i=1}^{\text {dim}\mathcal {P}_{12\setminus 11}}\) be an orthonormal basis for \(\mathcal {P}_{12\setminus 11}\). Then, we construct \(X_{2_{\mathsf {Int}}}\) as
$$\begin{array}{*{20}l} X_{2_{\mathsf{Int}}} &= \sum\limits_{k=1}^{d_{2_{\mathsf{Int}}}'}\chi_{2}^{\left(k+d_{2_{\mathsf{Orth}}}'\right)}P_{12}^{(k)}. \end{array} $$
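Once everything is discretized, the dimension count \(\text{dim}\,\mathcal{P}_{12\setminus11} = \text{dim}\,N(\mathsf{H}''_{12}) + \text{dim}\,\mathcal{R}_{12\setminus11}\) is an ordinary finite-dimensional linear-algebra fact. Below is a small sketch with a random matrix standing in for the restriction \(\mathsf{H}''_{12}\); the dimensions are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

dim_T12, dim_R12, dim_R12_minus_11 = 8, 5, 2    # the dim T_12 > dim R_12 case
H12 = rng.standard_normal((dim_R12, dim_T12))   # stand-in for the restriction H''_12

# An arbitrary subspace S of the range of H12, playing the role of R_{12\11}
S = rng.standard_normal((dim_R12, dim_R12_minus_11))

# Preimage of span(S) under H12: all x with H12 @ x in span(S).  Equivalently,
# the nullspace of P_perp @ H12, where P_perp projects onto the complement of span(S).
Q, _ = np.linalg.qr(S)
P_perp = np.eye(dim_R12) - Q @ Q.T
preimage_dim = dim_T12 - np.linalg.matrix_rank(P_perp @ H12)

print(preimage_dim)   # (dim_T12 - dim_R12) + dim_R12_minus_11 = 5, matching Corollary 1
```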
In summary, combining all cases we see that the wavevector transmitted by T 2 is
$$ \begin{aligned} X_{2} &= X_{2_{\mathsf{Orth}}} + X_{2_{\mathsf{Int}}} = \sum_{i=1}^{d_{2_{\mathsf{Orth}}}'}\chi_{2}^{(i)} Q_{22\setminus12}^{(i)} + \sum_{k=1}^{d_{2_{\mathsf{Int}}}'} \chi_{2}^{\left(k+d_{2_{\mathsf{Orth}}}'\right)}\\ &\quad\times\left(V_{12}^{(k)}1{\left(\text{dim} \mathcal{T}_{12} \leq \text{dim} \mathcal{R}_{12}\right)} + P_{12}^{(k)}1{\left(\text{dim} \mathcal{T}_{12} > \text{dim} \mathcal{R}_{12}\right)} \right) \end{aligned} $$
$$ \begin{aligned} &= \sum_{i=1}^{d_{2_{\mathsf{Orth}}}'}\chi_{2}^{(i)} Q_{22\setminus12}^{(i)} +\!\!\! \sum_{i=1+d_{2_{\mathsf{Orth}}}'}^{d_{2}'} \chi_{2}^{(i)} \left(V_{12}^{\left(i-d_{2_{\mathsf{Orth}}}'\right)}1{\left(\text{dim} \mathcal{T}_{12} \leq \text{dim} \mathcal{R}_{12}\right)}\right.\\ &\left.\qquad\qquad\qquad\qquad\qquad\quad\;\;+ P_{12}^{\left(i-d_{2_{\mathsf{Orth}}}'\right)}1{\left(\text{dim} \mathcal{T}_{12} > \text{dim} \mathcal{R}_{12}\right)} \right) \end{aligned} $$
$$ \begin{aligned} & = \sum_{i=1}^{d_{2}'}\chi_{2}^{(i)}B_{2}^{(i)},\quad \text{where} \quad \\&\quad B_{2}^{(i)} = \left\{ \begin{array}{lr} Q_{22\setminus12}^{(i)} & : i \leq d_{2_{\mathsf{Orth}}}' \\ V_{12}^{\left(i- d_{2_{\mathsf{Orth}}}'\right)} & : i > d_{2_{\mathsf{Orth}}}',\ \text{dim} \mathcal{T}_{12} \leq \text{dim} \mathcal{R}_{12} \\ P_{12}^{\left(i- d_{2_{\mathsf{Orth}}}'\right)} & : i > d_{2_{\mathsf{Orth}}}',\ \text{dim} \mathcal{T}_{12} > \text{dim} \mathcal{R}_{12} \end{array}. \right. \end{aligned} $$
Now that we have constructed X 1, the wavevector signal transmitted on the uplink by the uplink user, and X 2, the wavevector signal transmitted on the downlink by the base station, we show how the base station receiver, R 1, and the downlink user receiver, R 2, process their received signals to detect the original information-bearing symbols.
Processing at the base station receiver, R 1 : We need to show that R 1 can obtain at least \(d_{1}' = \text {dim} \mathcal {R}_{11}\) independent linear combinations of the d1′ symbols transmitted from T 1, and that each of these linear combinations is corrupted only by noise, and not interference from T 2.
In the case where \(\text {dim} \mathcal {T}_{12} > \text {dim} \mathcal {R}_{12}\), T 2 constructed X 2 such that H 12 X 2 is orthogonal to any function in \(\mathcal {R}_{11}\). Therefore, R 1 can eliminate interference from T 2 by simply projecting Y 1 onto \(\mathcal {R}_{11}\) to recover the \(\text {dim} \mathcal {R}_{11}\) linear combinations it needs. We now formalize this projection onto \(\mathcal {R}_{11}\). Recall that the set of left-singular functions of H 11, \(\left \{U_{11}^{(l)} \right \}_{l=1}^{\text {dim} \mathcal {R}_{11}},\) form an orthonormal basis for \(\mathcal {R}_{11}\). In the case where \(\text {dim} \mathcal {T}_{12} > \text {dim} \mathcal {R}_{12}\), receiver R 1 constructs the set of complex scalars \(\left \{ \xi _{1}^{(l)} \right \}_{l=1}^{\text {dim} \mathcal {R}_{11}},\quad \xi _{1}^{(l)} = \left \langle Y_{1}, U_{11}^{(l)} \right \rangle.\) One can check that the result of each of these projections is
$$\begin{array}{*{20}l} \xi_{1}^{(l)} = \sigma_{11}^{(l)} \chi_{1}^{(l)} + \left\langle Z_{1}, U_{11}^{(l)} \right\rangle,\quad l = 1,2,\dots, \text{dim} \mathcal{R}_{11}, \end{array} $$
and thus obtains each of the \(d_{1}'=\text {dim} \mathcal {R}_{11}\) linear combinations of the intended symbols corrupted only by noise, as desired. Moreover, in this case the obtained linear combinations are already diagonalized, with the lth projection only containing a contribution from the lth desired symbol.
In the case where \(\text {dim} \mathcal {T}_{12} \leq \text {dim} \mathcal {R}_{12}\), H 12 X 2 in general will not be orthogonal to every function in \(\mathcal {R}_{11}\), and some slightly more sophisticated processing must be performed to decouple the interference from the signal of interest. First, R 1 can recover \(\text {dim} \mathcal {R}_{11\setminus 12}\) interference-free linear combinations by projecting its received signal, Y 1, onto \(\mathcal {R}_{11\setminus 12}\), defined, analogously to the subspaces above, as the subspace of received field distributions in \(\mathcal {R}_{1}\) that are nonzero only for \(\tau \in \Psi _{R_{11}} \setminus \Psi _{R_{12}}\). Let \(\left \{J_{11\setminus 12}^{(l)}\right \}_{l=1}^{\text {dim} \mathcal {R}_{11\setminus 12}}\) be an orthonormal basis for \(\mathcal {R}_{11\setminus 12}\). Receiver R 1 forms a set of complex scalars
$$\left\{ \xi_{1}^{(l)} \right\}_{l=1}^{\text{dim} \mathcal{R}_{11\setminus12}},\quad \xi_{1}^{(l)} = \langle Y_{1}, J_{11\setminus12}^{(l)} \rangle. $$
Note that each \(J_{11\setminus 12}^{(l)}\) will be orthogonal to H 12 X 2 for any X 2 since each \(J_{11\setminus 12}^{(l)} \in \mathcal {R}_{11\setminus 12}\), and \(\mathsf {H}_{12}X_{2} \in \mathcal {R}_{12}\) for any X 2, and \(\mathcal {R}_{11\setminus 12}\) is the orthogonal complement of \(\mathcal {R}_{12}\). Therefore, each \(\xi _{1}^{(l)}\) will be interference free, i.e., will be a linear combination of the symbols \(\left \{\chi _{1}^{\left (l\right)}\right \}_{l=1}^{d_{1}'}\) plus noise, and will contain no contribution from the \(\left \{\chi _{2}^{\left (l\right)}\right \}_{l=1}^{d_{2}'}\) symbols. One can check that these \(\text {dim}\, \mathcal {R}_{11\setminus 12}\) projections result in
$$\begin{array}{*{20}l} \xi_{1}^{(l)} &= \sum\limits_{m=1}^{d_{1}'} \sigma_{11}^{(l)}{\left\langle U_{11}^{(m)}, J_{11\setminus12}^{(l)} \right\rangle} \chi_{1}^{\left(m\right)} + \left\langle Z_{1}, J_{11\setminus12}^{(l)} \right\rangle,\\l &= 1,2,\dots, \text{dim} \mathcal{R}_{11\setminus12}. \end{array} $$
It remains to obtain \(d_{1}' - \text {dim} \,\mathcal {R}_{11\setminus 12} = \text {dim} \,\mathcal {R}_{11} - \text {dim} \,\mathcal {R}_{11\setminus 12} = \text {dim}\, \mathcal {R}_{11\cap 12}\) more independent and interference-free linear combinations of T 1's symbols so that R 1 can solve the system and recover the symbols. Receiver R 1 will obtain these linear combinations via a careful projection onto a subspace of \(\mathcal {R}_{12}\) (which is the orthogonal complement of \(\mathcal {R}_{11\setminus 12}\), the space onto which we have already projected Y 1 to obtain the first \(\text {dim}\, \mathcal {R}_{11\setminus 12}\) linear combinations). Recall that the set of left-singular functions of H 12, \(\left \{U_{12}^{(l)} \right \}_{l=1}^{\text {dim}\, \mathcal {R}_{12}}\), form an orthonormal basis for \(\mathcal {R}_{12}\). Receiver R 1 obtains the remaining \(\text {dim}\, \mathcal {R}_{11\cap 12}\) linear combinations by projecting Y 1 onto the last \(\text {dim}\, \mathcal {R}_{11\cap 12}\) of these basis functions, forming \(\left \{ \xi _{1}^{(l)}\right \}_{l=\text {dim}\, \mathcal {R}_{11\setminus 12}+1}^{\text {dim}\, \mathcal {R}_{11}}\) by computing
$$\begin{array}{*{20}l} \xi_{1}^{\left(k + \text{dim}\, \mathcal{R}_{11\setminus12}\right)} &= \left\langle Y_{1}, U_{12}^{(\text{dim}\, \mathcal{R}_{12} - k)} \right\rangle, \end{array} $$
$$\begin{array}{*{20}l} &\quad k = 0,1,\dots, \text{dim}\, \mathcal{R}_{11\cap12}-1,\\ & = \left\langle \mathsf{H}_{11}X_{1} + \mathsf{H}_{12}X_{2} + Z_{1}, U_{12}^{(\text{dim}\, \mathcal{R}_{12} - k)} \right\rangle \end{array} $$
$$\begin{array}{*{20}l} & = \left\langle \mathsf{H}_{11}X_{1}, U_{12}^{(\text{dim}\, \mathcal{R}_{12} - k)} \right\rangle\\& \quad+ \left\langle\mathsf{H}_{12}X_{2}, U_{12}^{(\text{dim}\, \mathcal{R}_{12} - k)} \right\rangle\\ &\quad+ \left\langle Z_{1}, U_{12}^{(\text{dim}\, \mathcal{R}_{12} - k)} \right\rangle. \end{array} $$
We compute the terms of Eq. (64) individually. The contribution of T 1's transmit wavevector is
$$\begin{array}{*{20}l} &\left\langle \mathsf{H}_{11}X_{1},\ U_{12}^{(\text{dim} \,\mathcal{R}_{12} - k)} \right\rangle\\ &= \left\langle \sum\limits_{m=1}^{d_{1}'} \sigma_{11}^{(m)} U_{11}^{(m)} \langle V_{11}^{(m)}, X_{1} \rangle,\ U_{12}^{(\text{dim}\, \mathcal{R}_{12} - k)} \right\rangle \end{array} $$
$$\begin{array}{*{20}l} &= \left\langle \sum_{m=1}^{d_{1}'} \sigma_{11}^{(m)} U_{11}^{(m)} \left\langle V_{11}^{(m)}, \sum\limits_{i=1}^{d_{1}'} \chi_{1}^{(i)} V_{11}^{(i)} \right\rangle,\ U_{12}^{(\text{dim}\, \mathcal{R}_{12} - k)} \right\rangle \end{array} $$
$$\begin{array}{*{20}l} &= \sum\limits_{m=1}^{d_{1}'} \sigma_{11}^{(m)}\left\langle U_{11}^{(m)},\ U_{12}^{(\text{dim} \,\mathcal{R}_{12} - k)} \right\rangle \chi_{1}^{(m)},\quad k = 0,1,\dots,\\ &\quad\text{dim} \,\mathcal{R}_{11\cap12}-1. \end{array} $$
In the first step, (65), we use the singular function decomposition of H 11. In the second step, (66), we plug in the construction of X 1 given in (46), and in the last step, (67), we leverage the fact that \(\sum _{i} \chi _{1}^{\left (i\right)} \left \langle V_{11}^{(m)}, V_{11}^{(i)} \right \rangle = \chi _{1}^{\left (m\right)}\), due to the orthonormality of the right singular functions. The contribution of T 2's interfering wavevector is
$$\begin{array}{*{20}l} &\left\langle \mathsf{H}_{12}X_{2},\ U_{12}^{(\text{dim} \,\mathcal{R}_{12} - k)} \right\rangle\\[-2pt] &= \left\langle \mathsf{H}_{12} \left(X_{2_{\mathsf{Orth}}} + X_{2_{\mathsf{Int}}}\right),\ U_{12}^{(\text{dim}\, \mathcal{R}_{12} - k)} \right\rangle\\[-2pt] &= \left\langle \mathsf{H}_{12} X_{2_{\mathsf{Int}}},\ U_{12}^{\left(\text{dim}\, \mathcal{R}_{12} - k\right)} \right\rangle \end{array} $$
$$\begin{array}{*{20}l} &= \left\langle \sum\limits_{m=1}^{\text{dim}\, \mathcal{T}_{12}} \sigma_{12}^{(m)} U_{12}^{(m)} \left\langle V_{12}^{(m)}, X_{2_{\mathsf{Int}}} \right\rangle,\right.\\[-2pt] &\qquad\left.U_{12}^{(\text{dim}\, \mathcal{R}_{12} - k)} {\vphantom{\sum\limits_{m=1}^{\text{dim}}}}\right\rangle \end{array} $$
$$\begin{array}{*{20}l} &= \left\langle \sum\limits_{m=1}^{\text{dim}\, \mathcal{T}_{12}} \sigma_{12}^{(m)} U_{12}^{(m)} \left\langle V_{12}^{(m)}, \sum\limits_{i=1}^{d_{2_{\mathsf{Int}}}'} \chi_{2}^{\left(i+ d_{2_{\mathsf{Orth}}}'\right)}V_{12}^{(i)} \right\rangle,\right.\\[-2pt] &\left.\qquad U_{12}^{(\text{dim}\, \mathcal{R}_{12} - k)} {\vphantom{\sum_{m=1}^{\text{dim}}}}\right\rangle \end{array} $$
$$\begin{array}{*{20}l} &= \left\langle \sum\limits_{i=1}^{d_{2_{\mathsf{Int}}}'} \chi_{2}^{\left(i+ d_{2_{\mathsf{Orth}}}'\right)} \sum\limits_{m=1}^{\text{dim} \,\mathcal{T}_{12}} \sigma_{12}^{(m)} U_{12}^{(m)} \left\langle V_{12}^{(m)}, V_{12}^{(i)} \right\rangle,\right.\\[-2pt] &\left.\qquad U_{12}^{(\text{dim} \,\mathcal{R}_{12} - k)} {\vphantom{\sum_{m=1}^{\text{dim}}}}\right\rangle \end{array} $$
$$\begin{array}{*{20}l} &= \sum_{i=1}^{d_{2_{\mathsf{Int}}}'} \chi_{2}^{\left(i+ d_{2_{\mathsf{Orth}}}'\right)} \sigma_{12}^{(i)}\left\langle U_{12}^{(i)},\ U_{12}^{(\text{dim}\, \mathcal{R}_{12} - k)} \right\rangle, \end{array} $$
$$\begin{array}{*{20}l} &= \sum_{i=1}^{d_{2_{\mathsf{Int}}}'} \chi_{2}^{\left(i+ d_{2_{\mathsf{Orth}}}'\right)} \sigma_{12}^{(i)} \delta_{(i,\text{dim}\, \mathcal{R}_{12} - k)}, \end{array} $$
$$\begin{array}{*{20}l} &=0,\quad k = 0,1,\dots, \text{dim}\, \mathcal{R}_{11\cap12}-1. \end{array} $$
In step (68) above, we use the fact that \(\phantom {\dot {i}\!}\mathsf {H}_{12} X_{2_{\mathsf {Orth}}} = 0\) by the construction of \(X_{2_{\mathsf {Orth}}}\). Step (69) uses the singular function expansion of H 12, step (70) substitutes the construction of \(X_{2_{\mathsf {Int}}}\), step (71) rearranges terms and step (72) leverages the orthonormality of the right singular functions. In the last step, (74), we have leveraged that when \(\text {dim}\, \mathcal {T}_{12}\leq \text {dim}\, \mathcal {R}_{12} \), \(d_{2_{\mathsf {Int}}}' \leq \text {dim}\, \mathcal {R}_{12\setminus 11}\) (see Eq. 50), which means the largest value of i in the summation, \(d_{2_{\mathsf {Int}}}'\), is smaller than the smallest value of \(\text {dim}\, \mathcal {R}_{12} - k\) under consideration, \(\text {dim}\, \mathcal {R}_{12} - \text {dim}\, \mathcal {R}_{11\cap 12} + 1 = \text {dim}\, \mathcal {R}_{12\setminus 11} + 1\), so that the delta-function \(\delta _{(i,\text {dim}\, \mathcal {R}_{12} - k)}\) will never evaluate to one. Substituting (67) and (74) back into (64) shows that the output symbols obtained by projecting Y 1 onto the last \(\text {dim}\, \mathcal {R}_{11\cap 12}\) functions of \(\left \{U_{12}^{(l)} \right \}_{l=1}^{\text {dim}\, \mathcal {R}_{12}}\) are
$$\begin{array}{*{20}l} \xi_{1}^{(k + \text{dim}\, \mathcal{R}_{11\setminus12})} &= \sum\limits_{m=1}^{d_{1}'} \sigma_{11}^{(m)}{\left\langle U_{11}^{(m)}, U_{12}^{(\text{dim}\, \mathcal{R}_{12} - k)} \right\rangle} \chi_{1}^{\left(m\right)}\\ &\quad+ \left\langle Z_{1}, U_{12}^{({\text{dim}\, \mathcal{R}_{12} - k})} \right\rangle,\ {\scriptstyle k = 0,1,\dots, \text{dim}\, \mathcal{R}_{11\cap12}-1}. \end{array} $$
Combining the processing in all cases, we see that receiver R 1 has formed a set of d1′ complex scalars \(\left \{ \xi _{1}^{(l)}\right \}_{l=1}^{d_{1}'}\), such that
$$\begin{array}{*{20}l} \xi_{1}^{(l)} = \sum\limits_{m=1}^{d_{1}'}a_{1}^{(lm)} \chi_{1}^{(m)} + \zeta_{1}^{(l)},\qquad l=1,2, \dots, d_{1}', \end{array} $$
$$ \begin{aligned} &a_{1}^{(lm)} \\ &=\!\!\left\{\! \begin{array}{lr} \delta_{lm} \sigma_{11}^{(l)} & : \text{dim}\, \mathcal{T}_{12} > \text{dim}\, \mathcal{R}_{12} \\ \sigma_{11}^{(m)}{\left\langle U_{11}^{(m)}, J_{11\setminus12}^{(l)} \right\rangle} & \!:\!\text{dim}\, \mathcal{T}_{12} \leq \text{dim}\, \mathcal{R}_{12},\ l \leq \text{dim}\, \mathcal{R}_{11\setminus12} \\ \sigma_{11}^{(m)}{\left\langle U_{11}^{(m)}, U_{12}^{(\text{dim}\, \mathcal{R}_{12} + \text{dim}\, \mathcal{R}_{11\setminus12}-l)} \right\rangle} & :\!\text{dim}\, \mathcal{T}_{12} \leq \text{dim}\, \mathcal{R}_{12},\ l > \text{dim}\, \mathcal{R}_{11\setminus12} \end{array}\!, \right. \end{aligned} $$
$$ \begin{aligned} \zeta_{1}^{(l)} = \left\{ \begin{array}{lr} {\left\langle Z_{1}, U_{11}^{(l)} \right\rangle} & : \text{dim}\, \mathcal{T}_{12} > \text{dim}\, \mathcal{R}_{12} \\ {\left\langle Z_{1}, J_{11\setminus12}^{(l)} \right\rangle} & :\text{dim}\, \mathcal{T}_{12} \leq \text{dim}\, \mathcal{R}_{12},\ l \leq \text{dim}\, \mathcal{R}_{11\setminus12} \\ {\left\langle Z_{1}, U_{12}^{(\text{dim}\, \mathcal{R}_{12} + \text{dim}\, \mathcal{R}_{11\setminus12}-l)} \right\rangle} & :\text{dim}\, \mathcal{T}_{12} \leq \text{dim}\, \mathcal{R}_{12},\ l > \text{dim}\, \mathcal{R}_{11\setminus12} \end{array}\!. \right. \end{aligned} $$
Thus, as desired, in all cases the base station receiver R 1 is able to obtain d1′ interference-free linear combinations of the d1′ symbols from the uplink user transmitter T 1. Now, we move to the processing at the downlink user receiver.
Processing at R 2 : We wish to show that the downlink receiver, R 2, can recover the d2′ symbols transmitted by the base station transmitter, T 2. Let \(\left \{\sigma _{22}^{(k)}, U_{22}^{(k)}, V_{22}^{(k)}\right \}\) be a singular system for the operator H 22, and let \(r_{22}\equiv \text {min}\left \{2{L}_{T_{2}} |\Psi _{T_{22}}|,2{L}_{R_{2}} |\Psi _{R_{22}}|\right \}\). From Property 2 of Lemma 1, we know that \(\sigma _{22}^{(k)}\) is zero for all k>r 22 and nonzero for k≤r 22, so that
$$\begin{array}{*{20}l} Y_{2} &= \mathsf{H}_{22}X_{2} + Z_{2}\ =\ \sum_{k=1}^{r_{22}} \sigma_{22}^{(k)} U_{22}^{(k)} \left\langle V_{22}^{(k)}, X_{2} \right\rangle + Z_{2}. \end{array} $$
One can check that
$$ \xi_{2}^{(l)} = \left\langle U_{22}^{(l)}, Y_{2} \right\rangle = \sum_{m=1}^{d_{2}'} a_{2}^{(lm)} \chi_{2}^{(m)} + \zeta_{2}^{(l)}, \qquad l = 1,\dots,d_{2}', $$
$$ \begin{aligned} a_{2}^{(lm)} = \left\{ \begin{array}{lr} \sigma_{22}^{(l)}{\left\langle V_{22}^{(l)}, Q_{22\setminus12}^{(m)} \right\rangle} & : m \leq d_{2_{\mathsf{Orth}}}' \\ \sigma_{22}^{(l)}{\left\langle V_{22}^{(l)}, V_{12}^{\left(m-d_{2_{\mathsf{Orth}}}'\right)} \right\rangle} & : m > d_{2_{\mathsf{Orth}}}',\ \text{dim}\, \mathcal{T}_{12} \leq \text{dim}\, \mathcal{R}_{12} \\ \sigma_{22}^{(l)}{\left\langle V_{22}^{(l)}, P_{12}^{\left(m-d_{2_{\mathsf{Orth}}}'\right)} \right\rangle} & : m > d_{2_{\mathsf{Orth}}}',\ \text{dim}\, \mathcal{T}_{12} > \text{dim}\, \mathcal{R}_{12} \end{array} \right. \end{aligned} $$
$$ \zeta_{2}^{(l)} = \langle U_{22}^{(l)}, Z_{2} \rangle. $$
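To make the projection step concrete, the following is a minimal discrete-antenna sketch in Python/NumPy; the matrix H22, the array sizes, and the stream count d2 are hypothetical stand-ins for the continuous operators above, and the transmit construction is deliberately simplified to signaling along the strongest right singular vectors of H 22 rather than the interference-avoiding construction of X 2.

import numpy as np

rng = np.random.default_rng(0)
N_r, N_t, d2 = 8, 8, 4                          # hypothetical array sizes and stream count
H22 = rng.standard_normal((N_r, N_t)) + 1j * rng.standard_normal((N_r, N_t))
U22, s22, V22h = np.linalg.svd(H22)             # discrete analogue of the singular system of H_22

chi2 = rng.standard_normal(d2) + 1j * rng.standard_normal(d2)   # downlink symbols chi_2^(m)
X2 = V22h.conj().T[:, :d2] @ chi2               # transmit along the strongest right singular vectors
Z2 = 0.01 * (rng.standard_normal(N_r) + 1j * rng.standard_normal(N_r))
Y2 = H22 @ X2 + Z2                              # received signal Y_2 = H_22 X_2 + Z_2

xi2 = U22[:, :d2].conj().T @ Y2                 # projections xi_2^(l) = <U_22^(l), Y_2>
print(np.round(xi2 / s22[:d2] - chi2, 3))       # approximately zero: symbols recovered up to noise

In this simplified construction the equivalent matrix A 2 is diagonal; with the actual interference-avoiding construction of X 2 it is merely full rank, which is all the achievability argument requires.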
Reducing to parallel point-to-point vector channels
The above processing at each transmitter and receiver has allowed the receivers R 1 and R 2 to recover the symbols
$$\begin{array}{*{20}l} \xi_{1}^{(l)} &= \sum\limits_{m=1}^{d_{1}'}a_{1}^{(lm)} \chi_{1}^{(m)} + \zeta_{1}^{(l)},\qquad l=1,2, \dots, d_{1}', \end{array} $$
$$\begin{array}{*{20}l} \xi_{2}^{(l)} &= \sum\limits_{m=1}^{d_{2}'} a_{2}^{(lm)} \chi_{2}^{(m)} + \zeta_{2}^{(l)}, \qquad l = 1,\dots,d_{2}', \end{array} $$
respectively, where the linear combination coefficients, \(a_{1}^{(lm)}\) and \(a_{2}^{(lm)}\), are given in (77) and (81), respectively, and the additive noise on each of the recovered symbols, \(\zeta _{1}^{(l)}\) and \(\zeta _{2}^{(l)}\), are given in (78) and (82), respectively.
We can rewrite (83-84) in matrix notation as
$$\begin{array}{*{20}l} \boldsymbol{\xi}_{1} &= \boldsymbol{A}_{1}\boldsymbol{\chi}_{1} + \boldsymbol{\zeta}_{1}, \end{array} $$
$$\begin{array}{*{20}l} \boldsymbol{\xi}_{2} &= \boldsymbol{A}_{2}\boldsymbol{\chi}_{2} + \boldsymbol{\zeta}_{2}, \end{array} $$
where χ 1 and χ 2 are the d1′×1 and d2′×1 vectors of input symbols for transmitters T 1 and T 2, respectively, ζ 1 and ζ 2 are the d1′×1 and d2′×1 vectors of additive noise, respectively, and A 1 and A 2 are d1′×d1′ and d2′×d2′ square matrices whose elements are taken from \(a_{1}^{(lm)}\) and \(a_{2}^{(lm)}\), respectively. The matrices A 1 and A 2 will be full rank for all but a measure-zero set of channel response kernels. Also, since each of the \(\zeta _{j}^{(l)}\)'s is a linear combination of Gaussian random variables, the noise vectors, ζ 1 and ζ 2, are Gaussian distributed. Therefore, the spatial processing has reduced the original channel to two parallel full-rank Gaussian vector channels: the first a d1′×d1′ channel and the second a d2′×d2′ channel, which are well known to have d1′ and d2′ degrees-of-freedom, respectively [34]. Therefore, the spatial degrees-of-freedom pair (d1′,d2′) is indeed achievable.
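The full-rank claim can be illustrated numerically. Below is a small sketch with hypothetical dimensions, in which independently drawn random matrices stand in for H 11 and H 12; it forms a matrix with the structure of A 1 in the third case of (77), i.e., singular values of the direct channel weighting inner products between its left singular vectors and those of the interference channel, and checks its rank.

import numpy as np

rng = np.random.default_rng(2)
N, d = 8, 5                                     # hypothetical receive-space dimension and d_1'
H11 = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H12 = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
U11, s11, _ = np.linalg.svd(H11)
U12, _, _ = np.linalg.svd(H12)

# entries with the structure sigma_11^(m) <U_11^(m), U_12^(k)>: a sigma-weighted cross-Gramian
A1 = (U12[:, -d:].conj().T @ U11[:, :d]) @ np.diag(s11[:d])
print(np.linalg.matrix_rank(A1))                # prints d: full rank

Degenerate channel kernels for which such a matrix loses rank form a measure-zero set, consistent with the statement above.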
The degree-of-freedom pairs (d1′,d2′) and (d1″,d2″) are the corner points of \(\mathcal {D}_{\mathsf {FD}}\), that is
$$\begin{array}{*{20}l} (d_{1}',d_{2}') & = \left(d_{1}^{\mathsf{max}}, \text{min}\left\{d_{2}^{\mathsf{max}}, d_{\mathsf{sum}}^{\mathsf{max}} - d_{1}^{\mathsf{max}}\right\} \right) \end{array} $$
$$\begin{array}{*{20}l} (d_{1}^{\prime\prime},d_{2}^{\prime\prime}) & = \left(\text{min}\left\{d_{1}^{\mathsf{max}},d_{\mathsf{sum}}^{\mathsf{max}} - d_{2}^{\mathsf{max}}\right\}, d_{2}^{\mathsf{max}}\right). \end{array} $$
Note that it is sufficient to prove only Eq. (87), as Eq. (88) follows by the symmetry of the expressions. It is easy to see that \(d_{1}' = \text {min}\left \{2{L}_{T_{1}} |\Psi _{T_{11}}|, 2{L}_{R_{1}} |\Psi _{R_{11}}|\right \} = d_{1}^{\mathsf {max}}\), but it is not so obvious that \(d_{2}' = \text {min}\left \{d_{2}^{\mathsf {max}}, d_{\mathsf {sum}}^{\mathsf {max}} - d_{1}^{\mathsf {max}}\right \}\). However, one can verify that \(d_{2}' = \text {min}\{d_{2}^{\mathsf {max}}, d_{\mathsf {sum}}^{\mathsf {max}} - d_{1}^{\mathsf {max}}\}\) by evaluating the left- and right-hand sides for all combinations of the conditions
$$\begin{array}{*{20}l} {L}_{T_{1}}|\Psi_{T_{11}}| &\lesseqgtr {L}_{R_{1}}|\Psi_{R_{11}}|, \end{array} $$
$$\begin{array}{*{20}l} {L}_{T_{2}}|\Psi_{T_{12}}| &\lesseqgtr {L}_{R_{1}}|\Psi_{R_{12}}| \end{array} $$
and observing equality in each of the four cases. Table 1 shows the expressions to which d2′ and \(\text {min}\left \{d_{2}^{\mathsf {max}}, d_{\mathsf {sum}}^{\mathsf {max}} - d_{1}^{\mathsf {max}}\right \}\) both simplify in each of the four possible cases. □
Table 1 Verifying that the corner points of inner and outer bounds coincide
Lemmas 2 and 3 show that the corner points of \(\mathcal {D}_{\mathsf {FD}}\), (d1′,d2′) and (d1″,d2″), are achievable. Thus, all other points within \(\mathcal {D}_{\mathsf {FD}}\) are achievable via time sharing between the schemes that achieve the corner points.
To establish the converse part of Theorem 1, we must show that the region \(\mathcal {D}_{\mathsf {FD}}\), which we have already shown is achievable, is also an outer bound on the degrees-of-freedom, i.e., we want to show that if an arbitrary degree-of-freedom pair (d 1,d 2) is achievable, then \((d_{1},d_{2}) \in \mathcal {D}_{\mathsf {FD}}\). It is easy to see that if (d 1,d 2) is achievable, then the single-user constraints on \(\mathcal {D}_{\mathsf {FD}}\), given in (14) and (15), must be satisfied as the degrees-of-freedom for each flow cannot be more than the point-to-point degrees-of-freedom shown in [24]. Thus, the only step remaining in the converse is to establish an outer bound on the sum degrees-of-freedom which coincides with \(d_{\mathsf {sum}}^{\mathsf {max}}\), the sum-degrees-of-freedom constraint on the achievable region, \(\mathcal {D}_{\mathsf {FD}}\), given in (16).
Thus, to conclude the converse argument, we will now prove the following Genie-aided outer bound on the sum degrees-of-freedom which coincides with the sum-degrees-of-freedom constraint on the achievable region.
$$ \begin{aligned} {d_{1} + d_{2}} \leq \ d_{\mathsf{sum}}^{\mathsf{max}} &= 2{L}_{T_{2}} |\Psi_{T_{22}} \setminus \Psi_{T_{12}}| + 2{L}_{R_{1}} |\Psi_{R_{11}} \setminus \Psi_{R_{12}}|\\ &\quad+2 \,\text{max} ({L}_{T_{2}} |\Psi_{T_{12}}|, {L}_{R_{1}} |\Psi_{R_{12}}|). \end{aligned} $$
Sketch of proof: Before diving into the full proof, we would first like to give a brief overview of the steps in the converse proof. Our process for proving Lemma 4 is twofold.
1) First, a Genie expands the transmit scattering intervals \(\Psi _{T_{22}}\) and \(\Psi _{T_{12}}\) until the two intervals are fully overlapped, and likewise expands \(\Psi _{R_{11}}\) and \(\Psi _{R_{12}}\) until they are fully overlapped, as shown in Fig. 6. To ensure that the net manipulation of the Genie can only enlarge \(\mathcal {D}_{\mathsf {FD}}\), the Genie also increases the array lengths \({L}_{T_{2}}\) and \({L}_{R_{1}}\) sufficiently for any added interference due to the expansion of \(\Psi _{T_{12}}\) and \(\Psi _{R_{12}}\) to be compensated by the increased array lengths.
2) After the above Genie manipulation is performed, the maximum of the T 2 and R 1 signaling dimensions is equal to \(d_{\mathsf {sum}}^{\mathsf {max}}\) in constraint (16), and since the scattering intervals are overlapped, the channel model becomes the Hilbert space equivalent of the well-studied MIMO Z-channel [35, 36]. The Hilbert space analogs of the bounding techniques employed in [35, 36] are then leveraged to conclude the converse proof.
Genie-aided channel model
We prove Lemma 4 by way of a Genie that aids the transmitters and receivers by enlarging the scattering intervals and lengthening the antenna arrays in a way that can only enlarge the degrees-of-freedom region. Applying the point-to-point bounds to the Genie-aided system in a careful way then establishes the outer bound. Assume an arbitrary scheme achieves the degrees-of-freedom pair (d 1,d 2). Thus receivers R 1 and R 2 can decode their corresponding messages with probability of error approaching zero. We must show that the assumption of (d 1,d 2) being achievable implies the constraint in Eq. (91).
Let a Genie expand both scattering intervals at T 2 into the union of the two scattering intervals, that is expand \(\Psi _{T_{22}}\) and \(\Psi _{T_{12}}\) to
$$\Psi'_{T_{22}} = \Psi'_{T_{12}} = \Psi'_{T_{2}} \equiv \Psi_{T_{22}} \cup \Psi_{T_{12}}. $$
Likewise, the Genie expands the scattering intervals at R 1 into their union, that is expand \(\Psi _{R_{11}}\) and \(\Psi _{R_{12}}\) to
$$\Psi'_{R_{11}} = \Psi'_{R_{12}} \equiv \Psi'_{R_{1}} = \Psi_{R_{11}} \cup \Psi_{R_{12}}. $$
The Genie's expansion of \(\Psi _{T_{22}}\) to \(\Psi '_{T_{2}}\) can only enlarge the degrees-of-freedom region, as T 2 could simply not transmit in the added interval \(\Psi '_{T_{2}} \setminus \Psi _{T_{22}}\) (i.e., ignore the added dimensions for signaling to R 2) to obtain the original scenario. Likewise, expanding \(\Psi _{R_{11}}\) to \(\Psi '_{R_{1}}\) will only enlarge the degrees-of-freedom region as R 1 can ignore the portion of the wavevector received over \(\Psi '_{R_{1}}\setminus \Psi _{R_{11}}\) to obtain the original scenario. However, expanding the interference scattering clusters, \(\Psi _{T_{12}}\) and \(\Psi _{R_{12}}\), to \(\Psi '_{T_{2}}\) and \(\Psi '_{R_{1}}\), respectively, can indeed shrink the degrees-of-freedom region due to the additional interference caused by the added overlap with the signal-of-interest intervals \(\Psi _{T_{22}}\) and \(\Psi _{R_{11}}\), respectively. We need a final Genie manipulation to compensate for this added interference, so that the net Genie manipulation can only enlarge the degrees-of-freedom region. Therefore, in the next step we will have the Genie lengthen the arrays at T 2 and R 1 sufficiently to allow any interference introduced by expanding \(\Psi _{T_{12}}\) and \(\Psi _{R_{12}}\), to \(\Psi '_{T_{2}}\) and \(\Psi '_{R_{1}}\), respectively, to be zero-forced without sacrificing any previously available degrees of freedom. Expansion of \(\Psi _{T_{12}}\) to \(\Psi '_{T_{2}} \equiv \Psi _{T_{22}} \cup \Psi _{T_{12}}\) causes the dimension of the interference that T 2 presents to R 1 to increase by at most \(2{L}_{T_{2}}|\Psi _{T_{22}} \setminus \Psi _{T_{12}}|\). Therefore, let the Genie also lengthen R 1's array from \(2{L}_{R_{1}}\) to \(2{L'}_{R_{1}} = 2{L}_{R_{1}} + 2{L}_{T_{2}} \frac {|\Psi _{T_{22}} \setminus \Psi _{T_{12}}|} {|\Psi _{R_{11}} \cup \Psi _{R_{12}}|}\), so that the dimension of the total receive space at R 1, \(\text {dim}\,\mathcal {R}_{1}\), is increased from \(\text {dim}\,\mathcal {R}_{1} = 2{L}_{R_{1}} |\Psi _{R_{11}} \cup \Psi _{R_{12}}| \) to
$$\begin{array}{*{20}l} \text{dim}\,\mathcal{R}'_{1} &= 2{L'}_{R_{1}} |\Psi_{R_{11}} \cup \Psi_{R_{12}}| \end{array} $$
$$\begin{array}{*{20}l} & = \left(2{L}_{R_{1}} + 2{L}_{T_{2}} \frac{|\Psi_{T_{22}} \setminus \Psi_{T_{12}}|} {|\Psi_{R_{11}} \cup \Psi_{R_{12}}|}\right) |\Psi_{R_{11}} \cup \Psi_{R_{12}}| \end{array} $$
$$\begin{array}{*{20}l} &= 2{L}_{R_{1}} |\Psi_{R_{11}} \cup \Psi_{R_{12}}| + 2{L}_{T_{2}}|\Psi_{T_{22}} \setminus \Psi_{T_{12}}| \end{array} $$
$$\begin{array}{*{20}l} & = \text{dim}\mathcal{R}_{1} + 2{L}_{T_{2}}|\Psi_{T_{22}} \setminus \Psi_{T_{12}}|. \end{array} $$
We observe in (95) that the Genie's lengthening of the R 1 array by \(2{L}_{T_{2}} \frac {|\Psi _{T_{22}} \setminus \Psi _{T_{12}}|} {|\Psi _{R_{11}} \cup \Psi _{R_{12}}|}\) has increased the dimension of R 1's total receive signal space by \( 2{L}_{T_{2}}|\Psi _{T_{22}} \setminus \Psi _{T_{12}}|\), which is the worst case increase in the dimension of the interference from T 2 due to expansion of \(\Psi _{T_{12}}\) to \( \Psi _{T_{22}} \cup \Psi _{T_{12}}\). Therefore, the dimension of the subspace of \(\mathcal {R}'_{1}\) which is orthogonal to the interference from T 2 will be at least as large as in the original orthogonal space of \(\mathcal {R}_{1}\). Thus, the combined expansion of \(\Psi _{T_{12}}\) to \(\Psi '_{T_{2}}\) and lengthening of the R 1 array to \(2{L'}_{R_{1}}\) can only enlarge the degrees-of-freedom region. Analogously, expansion of \(\Psi _{R_{12}}\) to \(\Psi '_{R_{1}} \equiv \Psi _{R_{11}}\cup \Psi _{R_{12}}\) increases the dimension of \(\mathcal {R}_{12}\), the subspace of R 1's receive space which is vulnerable to interference from T 2, by at most \(2{L}_{R_{1}} |\Psi _{R_{11}}\setminus \Psi _{R_{12}}|\). Therefore, let the Genie lengthen T 2's array from \(2{L}_{T_{2}}\) to \(2{L'}_{T_{2}} = 2{L}_{T_{2}} + 2{L}_{R_{1}} \frac {|\Psi _{R_{11}} \setminus \Psi _{R_{12}}|} {|\Psi _{T_{22}} \cup \Psi _{T_{12}}|}\), so that the dimension of the transmit space at T 2, \(\text {dim}\,\mathcal {T}_{2}\), is increased from \(\text {dim}\,\mathcal {T}_{2} = 2{L}_{T_{2}} |\Psi _{T_{22}} \cup \Psi _{T_{12}}| \) to
$$\begin{array}{*{20}l} \text{dim}\,\mathcal{T}'_{2} &= 2{L'}_{T_{2}} |\Psi_{T_{22}} \cup \Psi_{T_{12}}| \end{array} $$
$$\begin{array}{*{20}l} & = \left(2{L}_{T_{2}} + 2{L}_{R_{1}} \frac{|\Psi_{R_{11}} \setminus \Psi_{R_{12}}|} {|\Psi_{T_{22}} \cup \Psi_{T_{12}}|} \right)|\Psi_{T_{22}} \cup \Psi_{T_{12}}| \end{array} $$
$$\begin{array}{*{20}l} &= 2{L}_{T_{2}} |\Psi_{T_{22}} \cup \Psi_{T_{12}}| + 2{L}_{R_{1}} |\Psi_{R_{11}}\setminus\Psi_{R_{12}}| \end{array} $$
$$\begin{array}{*{20}l} &= \text{dim}\,\mathcal{T}_{2} + 2{L}_{R_{1}} |\Psi_{R_{11}}\setminus\Psi_{R_{12}}|. \end{array} $$
We see in (99) that the Genie's lengthening of T 2's array to \(2{L'}_{T_{2}}\) increases the dimension of T 2's transmit signal space by \(2{L}_{R_{1}} |\Psi _{R_{11}}\setminus \Psi _{R_{12}}|\), which is the worst-case increase in the dimension of the subspace of R 1's receive space that is vulnerable to interference from T 2. Therefore, T 2 can leverage these extra \(2{L}_{R_{1}} |\Psi _{R_{11}}\setminus \Psi _{R_{12}}|\) dimensions to zero force to the subspace of R 1's receive space that has become vulnerable to interference from T 2 due to the expansion of \(\Psi _{R_{12}}\) to \(\Psi '_{R_{1}}\). Thus, the net effect of the Genie's expansion of the interference scattering interval at R 1, \(\Psi _{R_{12}}\), to \(\Psi '_{R_{1}}\) and lengthening of the T 2 array to \(2{L'}_{T_{2}}\) can only enlarge the degrees-of-freedom region.
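As a quick numerical sanity check of this bookkeeping (the values below are hypothetical, chosen only for illustration), take \({L}_{T_{2}}={L}_{R_{1}}=1\), \(|\Psi _{T_{22}}|=|\Psi _{R_{11}}|=0.5\), \(|\Psi _{T_{12}}|=|\Psi _{R_{12}}|=0.3\), and overlaps \(|\Psi _{T_{22}}\cap \Psi _{T_{12}}|=|\Psi _{R_{11}}\cap \Psi _{R_{12}}|=0.2\), so that the unions have measure 0.6 and the set differences have measure 0.3. Then (92)–(99) give
$$\begin{array}{*{20}l} \text{dim}\,\mathcal{R}'_{1} = \text{dim}\,\mathcal{T}'_{2} &= \left(2 + 2\cdot\frac{0.3}{0.6}\right)(0.6) = 1.8\\ &= 2\,\text{max}(1\cdot 0.3,\ 1\cdot 0.3) + 2\cdot 0.3 + 2\cdot 0.3, \end{array} $$
which matches the chain of equalities for \(\text{max}(\text{dim}\,\mathcal{T}'_{2}, \text{dim}\,\mathcal{R}'_{1})\) given below, and hence \(d_{\mathsf{sum}}^{\mathsf{max}}\).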
The Genie-aided channel is illustrated in Fig. 6, which emphasizes the fact that the Genie has made the channel fully-coupled in the sense that the signal-of-interest scattering and the interference scattering intervals are identical: any direction of departure from T 2 which scatters to R 2 also scatters to R 1, and any direction of arrival at R 1 from which signal can be received from T 1 is also a direction from which signal can be received from T 2. Note that for the Genie-aided channel,
$$\text{max}(\text{dim}\,\mathcal{T}'_{2}, \text{dim}\,\mathcal{R}'_{1}) $$
$$ = 2\,\text{max}({L'}_{T_{2}}|\Psi'_{T_{2}}|, {L'}_{R_{1}}|\Psi'_{R_{1}}|) $$
$$ = 2\,\text{max} \left\{ \begin{array}{c} \left({L}_{T_{2}} + {L}_{R_{1}} \frac{|\Psi_{R_{11}}\setminus\Psi_{R_{12}}|}{|\Psi'_{T_{2}}|}\right)|\Psi'_{T_{2}}|,\\ \left({L}_{R_{1}} + {L}_{T_{2}} \frac{|\Psi_{T_{22}}\setminus\Psi_{T_{12}}|}{|\Psi'_{R_{1}}|}\right)|\Psi'_{R_{1}}| \end{array} \right\} $$
$$ = 2\,\text{max} \left\{ \begin{array}{c} {L}_{T_{2}}|\Psi'_{T_{2}}| + {L}_{R_{1}} |\Psi_{R_{11}}\setminus\Psi_{R_{12}}|,\\ {L}_{R_{1}}|\Psi'_{R_{1}}| + {L}_{T_{2}} |\Psi_{T_{22}}\setminus\Psi_{T_{12}}| \end{array} \right\} $$
$$ = 2\,\text{max} \left\{ \begin{array}{c} {L}_{T_{2}}|\Psi_{T_{22}} \cup \Psi_{T_{12}}| + {L}_{R_{1}} |\Psi_{R_{11}}\setminus\Psi_{R_{12}}|,\\ {L}_{R_{1}}|\Psi_{R_{11}} \cup \Psi_{R_{12}}| + {L}_{T_{2}} |\Psi_{T_{22}}\setminus\Psi_{T_{12}}| \end{array} \right\} $$
$$ \begin{aligned} = 2\,\text{max} \left\{ \begin{array}{c} {L}_{T_{2}}\left(|\Psi_{T_{12}}| + |\Psi_{T_{22}} \setminus \Psi_{T_{12}}|\right) + {L}_{R_{1}} |\Psi_{R_{11}}\setminus\Psi_{R_{12}}|,\\ {L}_{R_{1}}\left(|\Psi_{R_{12}}| + |\Psi_{R_{11}} \setminus \Psi_{R_{12}}|\right) + {L}_{T_{2}} |\Psi_{T_{22}}\setminus\Psi_{T_{12}}| \end{array} \right\} \end{aligned} $$
$$ \begin{aligned} = 2\,\text{max} \left\{ \begin{array}{c} {L}_{T_{2}}|\Psi_{T_{12}}| + {L}_{T_{2}}|\Psi_{T_{22}} \setminus \Psi_{T_{12}}| + {L}_{R_{1}} |\Psi_{R_{11}}\setminus\Psi_{R_{12}}|,\\ {L}_{R_{1}}|\Psi_{R_{12}}| + {L}_{R_{1}}|\Psi_{R_{11}} \setminus \Psi_{R_{12}}| + {L}_{T_{2}} |\Psi_{T_{22}}\setminus\Psi_{T_{12}}| \end{array} \right\} \end{aligned} $$
$$ \begin{aligned} &= 2\,\text{max} ({L}_{T_{2}} |\Psi_{T_{12}}|, {L}_{R_{1}} |\Psi_{R_{12}}|) + 2{L}_{T_{2}} |\Psi_{T_{22}} \setminus \Psi_{T_{12}}|\\ &\quad+ 2{L}_{R_{1}} |\Psi_{R_{11}} \setminus \Psi_{R_{12}}|, \end{aligned} $$
which is the outer bound on sum degrees-of-freedom that we wish to prove. Thus, if we can show that for the Genie-aided channel
$$ \begin{aligned} d_{1} + d_{2} &\leq 2\,\text{max}\left({L'}_{T_{2}}|\Psi'_{T_{2}}|, {L'}_{R_{1}}|\Psi'_{R_{1}}|\right)\\ &= \text{max}\left(\text{dim}\,\mathcal{T}'_{2}, \text{dim}\,\mathcal{R}'_{1}\right) \end{aligned} $$
then the converse is established. Because the Genie-aided channel is now fully coupled, it is similar to the continuous Hilbert space analog of the full-rank discrete-antennas MIMO Z interference channel. Thus, the remaining steps in the converse argument are inspired by the techniques used in [35–37] for outer bounding the degrees-of-freedom of the MIMO interference channel.
Consider the case in which \(\text {dim} \,\mathcal {T}'_{2} \leq \text {dim}\,\mathcal {R}'_{1}\). Since our Genie has enforced \(\Psi '_{T_{22}} = \Psi '_{T_{12}}\) and we have assumed \(\text {dim}\, \mathcal {T}'_{2} \leq \text {dim}\,\mathcal {R}'_{1}\), receiver R 1 has access to the entire signal space of T 2, i.e., T 2 cannot zero force to R 1. Moreover, by our hypothesis that (d 1,d 2) is achieved, R 1 can decode the message from T 1, and can thus reconstruct and subtract the signal received from T 1 from its received signal.
Since R 1 has access to the entire signal-space of T 2, after removing the signal from T 1 the only barrier to R 1 also decoding the message from T 2 is the receiver noise process. If it is not already the case, let a Genie lower the noise at receiver R 1 until T 2 has a better channel to R 1 than to R 2 (this can only increase the capacity region since R 1 could always locally generate and add noise to obtain the original channel statistics). By hypothesis, R 2 can decode the message from T 2, and since T 2 has a better channel to R 1 than to R 2, R 1 can also decode the message from T 2.
Since R 1 can decode the messages from both T 1 and T 2, we can bound the degrees-of-freedom region of the Genie-aided channel by the corresponding point-to-point channel in which T 1 and T 2 cooperate to jointly communicate their messages to R 1, which has degrees-of-freedom \(\text {min}\left (\text {dim}\, \mathcal {T}'_{1} + \text {dim}\,\mathcal {T}'_{2},\ \text {dim}\,\mathcal {R}'_{1}\right)\), which implies that
$$ d_{1} + d_{2} \leq \text{dim}\, \mathcal{R}'_{1}, \quad \text{when} \,\text{dim}\, \mathcal{T}'_{2} \leq\text{dim}\,\mathcal{R}'_{1}. $$
Now, consider the alternate case in which \(\text {dim}\, \mathcal {T}'_{2}> \text {dim}\,\mathcal {R}'_{1}\). In this case, we let a Genie increase the length of the R 1 array once more from \(2{L'}_{R_{1}}\) to \(2{L''}_{R_{1}} = 2{L'}_{T_{2}} \frac {|\Psi '_{T_{2}}|}{|\Psi '_{R_{1}}|} > 2{L'}_{R_{1}}\), so that the dimension of the receive signal space at R 1, which we now call \(\mathcal {R}''_{1}\), is expanded to
$$\begin{array}{*{20}l} \text{dim} \,\mathcal{R}^{\prime\prime}_{1} &= 2{L''}_{R_{1}} |\Psi'_{R_{1}}| =\left(2{L'}_{T_{2}} \frac{|\Psi'_{T_{2}}|}{|\Psi'_{R_{1}}|} \right) |\Psi'_{R_{1}}| \end{array} $$
$$\begin{array}{*{20}l} &= 2{L'}_{T_{2}} |\Psi'_{T_{2}}| = \text{dim}\, \mathcal{T}'_{2}. \end{array} $$
Since \(\text {dim}\, \mathcal {R}''_{1} = \text {dim}\, \mathcal {T}'_{2}\) and \(\Psi '_{T_{22}} = \Psi '_{T_{12}}\), R 1 again has access to the entire transmit signal space of T 2, so we can use the same argument leveraged above in the \(\text {dim}\, \mathcal {T}'_{2} \leq \text {dim}\,\mathcal {R}'_{1}\) case to show that
$$ {}d_{1} + d_{2} \leq \text{dim} \,\mathcal{R}^{\prime\prime}_{1} = \text{dim}\, \mathcal{T}'_{2}, \quad \text{when} \,\text{dim} \,\mathcal{T}'_{2} > \text{dim}\,\mathcal{R}'_{1}. $$
Combining the bounds in (108) and (111) yields,
$$\begin{array}{*{20}l} d_{1} + d_{2} &\leq \,\text{max}(\text{dim}\,\mathcal{T}'_{2}, \text{dim}\,\mathcal{R}'_{1})\\ &= 2\,\text{max}\left({L'}_{T_{2}}|\Psi'_{T_{2}}|, {L'}_{R_{1}}|\Psi'_{R_{1}}|\right) \end{array} $$
$$\begin{array}{*{20}l} &= 2\,\text{max} \left({L}_{T_{2}} |\Psi_{T_{12}}|, {L}_{R_{1}} |\Psi_{R_{12}}|\right)\\ &\quad+ 2{L}_{T_{2}} |\Psi_{T_{22}} \setminus \Psi_{T_{12}}| + 2{L}_{R_{1}} |\Psi_{R_{11}} \setminus \Psi_{R_{12}}| \end{array} $$
thus showing that the sum-degrees-of-freedom bound of Eq. (16) in Theorem 1 must hold for any achievable degree-of-freedom pair. □
Combining Lemma 4 with the trivial point-to-point bounds establishes that the region \(\mathcal {D}_{\mathsf {FD}}\), given in Theorem 1, is an outer bound on any achievable degrees-of-freedom pair, thus establishing the converse part of Theorem 1.
Impact on full-duplex design
We have characterized \(\mathcal {D}_{\mathsf {FD}}\), the degrees-of-freedom region achievable by a full-duplex base station that uses spatial isolation to avoid self-interference while simultaneously transmitting the downlink signal and receiving the uplink signal. Now, we wish to discuss how this result impacts the operation of full-duplex base stations. In particular, we aim to ascertain in which scenarios full-duplex with spatial isolation outperforms half-duplex, and whether there are scenarios in which full-duplex with spatial isolation achieves an ideal rectangular degrees-of-freedom region (i.e., both the uplink flow and downlink flow achieving their respective point-to-point degrees-of-freedom).
To answer the above questions, we must first briefly characterize \(\mathcal {D}_{\mathsf {HD}}\), the region of degrees-of-freedom pairs achievable via half-duplex mode, i.e., by time-division-duplex between uplink and downlink transmission. It is easy to see that the half-duplex achievable region is characterized by
$$\begin{array}{*{20}l} d_{1} &\leq \alpha \text{min}\left\{2{L}_{T_{1}} |\Psi_{T_{11}}|, 2{L}_{R_{1}} |\Psi_{R_{11}}|\right\}, \end{array} $$
$$\begin{array}{*{20}l} d_{2} & \leq (1-\alpha)\text{min}\left\{2{L}_{T_{2}} |\Psi_{T_{22}}|, 2{L}_{R_{2}} |\Psi_{R_{22}}|\right\}, \end{array} $$
where α∈[0,1] is the time sharing parameter. Obviously \(\mathcal {D}_{\mathsf {HD}}\subseteq \mathcal {D}_{\mathsf {FD}}\), but we are interested in contrasting the scenarios for which \(\mathcal {D}_{\mathsf {HD}}\subset \mathcal {D}_{\mathsf {FD}}\), so that full-duplex spatial isolation strictly outperforms half-duplex time division, with the scenarios for which \(\mathcal {D}_{\mathsf {HD}}=\mathcal {D}_{\mathsf {FD}}\) and half-duplex can achieve the same performance as full-duplex. We will consider two particularly interesting cases: the overlapped scattering case and the symmetric-spread case.
Overlapped scattering case
Consider the worst case for full-duplex operation in which the self-interference backscattering intervals perfectly overlap the forward scattering intervals of the signals of interest. By "overlapped" we mean that the directions of departure from the base station transmitter, T 2, that scatter to the intended downlink receiver, R 2, are identical to the directions of departure that backscatter to the base station receiver, R 1, as self-interference, so that \(\Psi _{T_{22}} = \Psi _{T_{12}}\). Likewise, the directions of arrival to the base station receiver, R 1, of the intended uplink signal from T 1 are identical to the directions of arrival of the backscattered self-interference from T 2, so that \(\Psi _{R_{11}} = \Psi _{R_{12}}\). To reduce the number of variables in the degrees-of-freedom expressions, we assume each of the scattering intervals is of size |Ψ|, so that \( {|\Psi _{T_{11}}| \,=\, |\Psi _{R_{11}}| \,=\, |\Psi _{T_{22}}| = |\Psi _{R_{22}}| = |\Psi _{T_{12}}|= |\Psi _{R_{12}}| \equiv |\Psi |.} \) We further assume that the base station arrays are of length \(2{L}_{R_{1}} = 2{L}_{T_{2}} = 2{L}_{\mathsf {BS}}\), and the user arrays are of equal length \(2{L}_{T_{1}} = 2{L}_{R_{2}} = 2{L}_{\mathsf {Usr}}\). In this case, the full-duplex degrees-of-freedom region, \(\mathcal {D}_{\mathsf {FD}}\), simplifies to
$$\begin{array}{*{20}l} {}d_{i} \leq |\Psi|\text{min}\{2{L}_{\mathsf{BS}},2{L}_{\mathsf{Usr}}\},i=1,2;\quad d_{1} + d_{2} \leq 2{L}_{\mathsf{BS}}|\Psi| \end{array} $$
while the half-duplex achievable region, \(\mathcal {D}_{\mathsf {HD}}\) simplifies to
$$\begin{array}{*{20}l} d_{1} + d_{2} \leq |\Psi|\text{min}\{2{L}_{\mathsf{BS}},2{L}_{\mathsf{Usr}}\}. \end{array} $$
The following remark characterizes the scenarios for which full-duplex with spatial isolation beats half-duplex.
In the overlapped scattering case, \(\mathcal {D}_{\mathsf {HD}} \subset \mathcal {D}_{\mathsf {FD}}\) when 2L BS >2L Usr , else \(\mathcal {D}_{\mathsf {HD}} = \mathcal {D}_{\mathsf {FD}}\).
We see that full-duplex outperforms half-duplex only if the base station arrays are longer than the user arrays. This is because in the overlapped scattering case, the only way to spatially isolate the self-interference is zero forcing, and zero forcing requires extra antenna resources at the base station. When 2L BS ≤2L Usr , the base station has no extra antenna resources it can leverage for zero forcing, and thus, spatial isolation of the self-interference is no better than isolation via time division. However, when 2L BS >2L Usr , the base station transmitter can transmit (2L BS −2L Usr )|Ψ| zero-forced streams on the downlink without impeding the reception of the full 2L Usr |Ψ| streams on the uplink, enabling a sum-degrees-of-freedom gain of (2L BS −2L Usr )|Ψ| over half-duplex. Indeed, when the base station arrays are at least twice as long as the user arrays, the degrees-of-freedom region is rectangular, and both uplink and downlink achieve the ideal 2L Usr |Ψ| degrees-of-freedom.
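As a concrete illustration with hypothetical numbers, take |Ψ|=1, 2L BS =8 and 2L Usr =4. Then
$$ \mathcal{D}_{\mathsf{HD}}:\ d_{1}+d_{2}\leq 4, \qquad \mathcal{D}_{\mathsf{FD}}:\ d_{1}\leq 4,\ d_{2}\leq 4,\ d_{1}+d_{2}\leq 8, $$
so that the full-duplex region is the ideal 4×4 rectangle: the (2L BS −2L Usr )|Ψ|=4 extra base-station dimensions are exactly the resources spent on zero-forcing the self-interference.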
Symmetric spread
The previous overlapped scattering case is the worst case for full-duplex operation. Let us now consider the more general case where the self-interference backscattering and the signal-of-interest forward scattering are not perfectly overlapped. This case illustrates the impact of the overlap of the scattering intervals on full-duplex performance. Once again, to reduce the number of variables, we will make the following symmetry assumptions. Assume all the arrays in the network, the two arrays on the base station as well as the array on each of the user devices, are of the same length 2L, that is \(2{L}_{T_{1}} = 2{L}_{R_{1}} = 2{L}_{T_{2}} = 2{L}_{R_{2}} \equiv 2L.\) Also, assume that the size of the forward scattering intervals to/from the intended receiver/transmitter is the same for all arrays \(|\Psi _{T_{11}}| = |\Psi _{R_{11}}| = |\Psi _{T_{22}}| = |\Psi _{R_{22}}| \equiv |\Psi _{\mathsf {Fwd}}|,\) and that the size of the backscattering interval is the same at the base station receiver as at the base station transmitter \( |\Psi _{T_{12}}| = |\Psi _{R_{12}}| \equiv |\Psi _{\mathsf {Back}}|.\) Finally, assume the amount of overlap between the backscattering and the forward scattering is the same at the base station transmitter as at the base station receiver so that \(|\Psi _{T_{22}}\cap \Psi _{T_{12}}| = |\Psi _{R_{11}}\cap \Psi _{R_{12}}| \equiv |\Psi _{\mathsf {Fwd}} \cap \Psi _{\mathsf {Back}}| = |\Psi _{\mathsf {Fwd}}| - |\Psi _{\mathsf {Fwd}} \setminus \Psi _{\mathsf {Back}}|.\)
We call Ψ Back the backscatter interval since it is the angle subtended at the base station by the back-scattering clusters, while we call Ψ Fwd the forward interval, since it is the angle subtended by the clusters that scatter towards the intended transmitter/receiver. In this case, the full-duplex degree-of-freedom region, \(\mathcal {D}_{\mathsf {FD}}\) simplifies to
$$\begin{array}{*{20}l} d_{i} &\leq 2L|\Psi_{\mathsf{Fwd}}|,\ i=1,2 \end{array} $$
$$\begin{array}{*{20}l} d_{1} + d_{2} &\leq 2L(2 |\Psi_{\mathsf{Fwd}}\setminus\Psi_{\mathsf{Back}}| + |\Psi_{\mathsf{Back}}|) \end{array} $$
while the half-duplex achievable region, \(\mathcal {D}_{\mathsf {HD}}\) is
$$\begin{array}{*{20}l} d_{1} + d_{2} &\leq 2L|\Psi_{\mathsf{Fwd}}|. \end{array} $$
Comparing \(\mathcal {D}_{\mathsf {FD}}\) and \(\mathcal {D}_{\mathsf {HD}}\) above we see that in the case of symmetric scattering, \(\mathcal {D}_{\mathsf {HD}} = \mathcal {D}_{\mathsf {FD}}\) if and only if Ψ Fwd =Ψ Back , else \(\mathcal {D}_{\mathsf {HD}} \subset \mathcal {D}_{\mathsf {FD}}\) (we are neglecting the trivial case of L=0).
Thus, the full-duplex spatial isolation region is strictly larger than the half-duplex time-division region unless the forward interval and the backscattering interval are perfectly overlapped. The intuition is that when Ψ Fwd =Ψ Back the scattering interval is a shared resource, just as time is; thus trading spatial resources is equivalent to trading time slots. However, if Ψ Fwd ≠Ψ Back , there is a portion of space exclusive to each user which can be leveraged to improve upon time division. Moreover, inspection of \(\mathcal {D}_{\mathsf {FD}}\) above leads to the following remark.
In the case of symmetric scattering, the degrees-of-freedom region is rectangular if and only if
$$ |\Psi_{\mathsf{Back}}\setminus\Psi_{\mathsf{Fwd}}| \geq |\Psi_{\mathsf{Fwd}}\cap\Psi_{\mathsf{Back}}|. $$
The above remark can be verified by comparing (118) and (119) and observing that the sum-rate bound, (119), is inactive (so that the region is rectangular) precisely when
$$ 2|\Psi_{\mathsf{Fwd}}\setminus\Psi_{\mathsf{Back}}| + |\Psi_{\mathsf{Back}}| \geq 2|\Psi_{\mathsf{Fwd}}|. $$
Straightforward set-algebraic manipulation of condition (122) shows that it is equivalent to (121).
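For completeness, that manipulation can be spelled out as follows; each line is equivalent to the one above it, using \(|\Psi _{\mathsf {Back}}| = |\Psi _{\mathsf {Back}}\setminus \Psi _{\mathsf {Fwd}}| + |\Psi _{\mathsf {Fwd}}\cap \Psi _{\mathsf {Back}}|\) and \(|\Psi _{\mathsf {Fwd}}| = |\Psi _{\mathsf {Fwd}}\setminus \Psi _{\mathsf {Back}}| + |\Psi _{\mathsf {Fwd}}\cap \Psi _{\mathsf {Back}}|\):
$$\begin{array}{*{20}l} 2|\Psi_{\mathsf{Fwd}}\setminus\Psi_{\mathsf{Back}}| + |\Psi_{\mathsf{Back}}| &\geq 2|\Psi_{\mathsf{Fwd}}|\\ 2|\Psi_{\mathsf{Fwd}}\setminus\Psi_{\mathsf{Back}}| + |\Psi_{\mathsf{Back}}\setminus\Psi_{\mathsf{Fwd}}| + |\Psi_{\mathsf{Fwd}}\cap\Psi_{\mathsf{Back}}| &\geq 2|\Psi_{\mathsf{Fwd}}\setminus\Psi_{\mathsf{Back}}| + 2|\Psi_{\mathsf{Fwd}}\cap\Psi_{\mathsf{Back}}|\\ |\Psi_{\mathsf{Back}}\setminus\Psi_{\mathsf{Fwd}}| &\geq |\Psi_{\mathsf{Fwd}}\cap\Psi_{\mathsf{Back}}|. \end{array} $$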
The intuition is that because Ψ Back ∖Ψ Fwd is the set of directions in which the base station couples to itself but not to the users, the corresponding 2L|Ψ Back ∖Ψ Fwd | dimensions are useless for spatial multiplexing, and therefore "free" for zero forcing the self-interference, which has maximum dimension 2L|Ψ Fwd ∩Ψ Back |. Thus, when |Ψ Back ∖Ψ Fwd |≥|Ψ Fwd ∩Ψ Back |, we can zero force any self-interference that is generated, without sacrificing any resource needed for spatial multiplexing to intended users.
Consider a numerical example in which |Ψ Fwd |=1 and |Ψ Back |=1, thus the overlap between the two, |Ψ Fwd ∩Ψ Back |, can vary from zero to one. Figure 7 plots the half-duplex region, \(\mathcal {D}_{\mathsf {HD}}\), and the full-duplex region, \(\mathcal {D}_{\mathsf {FD}}\), for several different values of overlap, |Ψ Fwd ∩Ψ Back |. We see that when Ψ Fwd =Ψ Back so that |Ψ Fwd ∩Ψ Back |=1, both \(\mathcal {D}_{\mathsf {HD}}\) and \(\mathcal {D}_{\mathsf {FD}}\) are the same triangular region. When |Ψ Fwd ∩Ψ Back |=0.75, the full-duplex region is strictly larger than the half-duplex region but is not yet rectangular. Once |Ψ Fwd ∩Ψ Back |≤0.5, |Ψ Back ∖Ψ Fwd | is at least as large as |Ψ Fwd ∩Ψ Back |, so that the condition of (121) is satisfied and the degree-of-freedom region becomes rectangular.
Symmetric-spread degree-of-freedom regions for different amounts of scattering overlap
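The corner points plotted in Fig. 7 can be reproduced with a few lines of plain Python. The sketch below is illustrative only; it assumes the normalization 2L=1 together with |Ψ Fwd |=|Ψ Back |=1, as in the numerical example above.

def regions(overlap, fwd=1.0, back=1.0):
    # per-flow bound, Eq. (118), with 2L = 1
    d_max = fwd
    # full-duplex sum bound, Eq. (119): 2L(2|Fwd \ Back| + |Back|)
    d_sum_fd = 2.0 * (fwd - overlap) + back
    # half-duplex bound, Eq. (120)
    d_sum_hd = fwd
    # a corner point of the full-duplex region, cf. Eq. (87)
    corner_fd = (d_max, min(d_max, d_sum_fd - d_max))
    # rectangular region iff condition (121)/(122) holds
    rectangular = d_sum_fd >= 2.0 * d_max
    return d_sum_hd, corner_fd, rectangular

for overlap in [1.0, 0.75, 0.5, 0.25, 0.0]:
    print(overlap, regions(overlap))

Running this sweep shows the half-duplex sum bound fixed at 1 while the full-duplex corner point grows from (1, 0) at full overlap to (1, 1) once the overlap drops to 0.5 or below, matching the progression described above.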
The overall takeaway is that as the amount of backscattering increases, more degrees of freedom must be sacrificed to achieve sufficient self-interference suppression, and fewer degrees of freedom are left for signaling to the desired users. Therefore, full-duplex operation in which self-interference is suppressed by beamforming is indeed feasible when the antenna array at the base station is sufficiently large and when the backscattering is sufficiently limited.
Simulation example
We now consider a simple simulation example that illustrates the results of Theorem 1. In particular, this example illustrates that as the angular spread of the backscattering increases, more transmit degrees of freedom must be sacrificed in order to sufficiently suppress self-interference at the base station. Theorem 1 was derived under several theoretical assumptions which are relaxed in this simulation to show that the same trends still apply. The channel model of (2) focuses on backscattering only; in this simulation, we also consider the direct-path self-interference from the transmit array to the receive array. Moreover, Theorem 1 was derived using continuous linear arrays, but to make the simulation closer to practical implementations, we consider discrete arrays rather than continuous arrays.
As depicted in Fig. 8, the base station transmit array is a 36-element linear array, and likewise for the receive array. The separation between antenna elements within each array is Δ=λ/2, where λ is the wavelength. These discrete arrays would therefore roughly correspond to continuous arrays of length L=18 (normalized by wavelength). The transmit and receive arrays are parallel to each other and side-by-side, with a separation between the transmit and receive arrays of 5λ. We model the antenna elements as ideal point sources, such that the direct-path channel between transmit antenna m and receive antenna n can be simply modeled as [38, 39]
$$ \left[ \boldsymbol{C}_{\mathsf{direct}} \right]_{nm}= \frac{e^{j k r_{nm}}} {r_{nm}}, $$
Simulated base station array configuration. Both the transmit and receive arrays have 36 antennas with half-wavelength spacing between antenna elements, and 5-wavelength spacing between the transmit and receive arrays
where r nm is the distance between antennas m and n, \(k = \frac {2\pi }{\lambda }\) is the wavenumber, and \(j = \sqrt {-1}\). As is done in the simulation examples of [25], the backscattering is modeled via a simple discretization of the original continuous channel model
$$ \begin{aligned} \left[ \boldsymbol{C}_{\mathsf{scat}} \right]_{nm} &= \frac{1}{\Delta} C_{12}(q_{n},p_{m})\\ &= \int \int {A}_{R_{1}}(q_{n},\tau) H_{12}(\tau,t) {A}_{T_{2}}(t,p_{m}) \, d\tau\, dt, \end{aligned} $$
where C 12 is the continuous self-interference channel response described in Eqs. (2)–(8). We generate channel realizations by drawing H 12(τ,t) from a two-dimensional white Gaussian process over \((\tau,t) \in \Psi _{R_{12}} \times \Psi _{T_{12}}\), and set H 12(τ,t)=0 for \((\tau,t) \notin \Psi _{R_{12}} \times \Psi _{T_{12}}\). As in the symmetric-spread example, for convenience we let \(\Psi _{T_{12}} = \Psi _{R_{12}} = \Psi _{\mathsf {Back}}\). The total self-interference channel is H self =C direct +α C scat , where α is a scalar chosen such that the backscattered self-interference is 20 dB weaker (on average) than the direct-path self-interference. We assume the noise floor is 80 dB below the transmit signal power. We consider the case of no backscattering, as well as cases where the backscattering subtends angles of 15°, 45°, 90°, and the fully backscattered case where the backscattering subtends 180°.
We simulate a transmit beamforming scheme inspired by the degrees-of-freedom achievability proof of section 3. Let d T denote the dimension of the base station's transmit signal (i.e., the number of data streams the base station wishes to transmit).4 In the achievability proof, the base station transmitter avoids self-interference by projecting the d T transmit symbols onto the nullspace of the self-interference channel. Here, we generalize this nullspace-projection approach by having the base station transmitter project its d T transmit symbols onto the d T weakest right singular vectors (i.e., the d T right singular vectors corresponding to the d T smallest singular values) of the self-interference channel, H self . This beamforming approach, which we call "soft nulling," allows a flexible tradeoff between the number of downlink dimensions, d T , and the amount of self-interference generated: better self-interference suppression can be achieved by sacrificing transmit dimensions. This concept of soft nulling is explored in depth in [27].
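The following is a minimal sketch of such a soft-nulling precoder (not the authors' code; an i.i.d. Gaussian matrix stands in for the direct-plus-backscattered H self of the simulation, and the antenna count follows the 36-element arrays quoted above).

import numpy as np

rng = np.random.default_rng(0)
M = 36                                              # transmit and receive antennas, as in Fig. 8
# NOTE: i.i.d. Gaussian stand-in for the direct-plus-backscattered self-interference channel
H_self = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2 * M)

U, s, Vh = np.linalg.svd(H_self)                    # H_self = U @ diag(s) @ Vh
V = Vh.conj().T

for d_T in [36, 32, 22, 8, 1]:
    P = V[:, -d_T:]                                 # soft-nulling precoder: d_T weakest right singular vectors
    # residual self-interference power per unit-power stream, in dB relative to the transmit power
    resid = np.linalg.norm(H_self @ P, 'fro') ** 2 / d_T
    print(d_T, round(10 * np.log10(resid), 1), "dB")

Sweeping d T in this way traces out the tradeoff studied in Fig. 9: the residual self-interference falls as more transmit dimensions are sacrificed, and it falls faster when H self has many weak singular values, i.e., when the backscattering is limited in angular spread.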
The results of the simulation are shown in Fig. 9. We see that for the case of no backscattering, the self-interference can be suppressed to the noise floor while maintaining a 32-dimensional downlink signal, only sacrificing 4 of the 36 downlink dimensions in order to suppress self-interference. However, in concurrence with the trend predicted by Theorem 1, as the angular spread of the backscattering increases, more transmit dimension must be sacrificed in order to suppress the self-interference to the noise floor. Merely increasing the backscattering spread to 15° has a large impact: only 22 downlink transmit dimensions can be maintained while suppressing the self-interference to the noise floor—14 of the 36 transmit dimensions must be sacrificed to suppress the self-interference. In the case of a fully-backscattered self-interference channel, the self-interference cannot be suppressed to the noise floor even if only one transmit dimension is used. In summary, we see that the angular spread of the backscattering dictates how many transmit dimensions must be sacrificed in order to sufficiently suppress self-interference.
Simulation results: self-interference versus dimension of downlink transmit signal
Full-duplex operation presents an opportunity for base stations to as much as double their spectral efficiency by both transmitting downlink signal and receiving uplink signal at the same time in the same band. The challenge to full-duplex operation is high-powered self-interference that is received both directly from the base station transmitter and backscattered from nearby objects. The receiver can be spatially isolated from the transmitter by leveraging multi-antenna beamforming to avoid self-interference, but such beamforming can also decrease the degrees-of-freedom of the intended uplink and downlink channels. We have leveraged a spatial antenna-theory-based channel model to analyze the spatial degrees-of-freedom available to a full-duplex base station. The analysis has shown that full-duplex operation can indeed outperform half-duplex operation when either (1) the base station arrays are large enough for the base station to zero-force the backscattered self-interference or (2) the backscattering directions are not fully overlapped with the forward scattering directions, so that the base station can leverage the non-overlapped intervals for interference-free signaling to/from the intended users.
1 An additional challenge is the potential for the uplink user's transmission to interfere with the downlink user's reception, but in this paper we focus solely on the challenge of self-interference.
2 We acknowledge that a continuous array which can support arbitrary current distributions may not be feasible to construct in practice due to the complications of feeding the array and achieving impedance match. However, as has been shown in the work of [24–26], a continuous array is nonetheless a very useful theoretical construct to develop performance bounds for any discrete antenna array subject to the same size constraint.
3 There is extensive ongoing research on scheduling algorithms to select uplink and downlink users such that the uplink user generates little interference to the downlink user [40–44] (and references within). Thus, we make the simplifying assumption that there is no channel from the uplink transmitter, T 1, to the downlink receiver, R 2. This assumption allows the analysis to focus on the challenge of backscattered self-interference. An extension of this work, [45], which is outside the scope of this paper, focuses on the challenge of inter-user interference in a full-duplex network, and provides analysis for the case where there is a nonzero channel from T 1 to R 2.
4 We call d T the "dimension of the transmit signal" instead of "degrees of freedom", because in this simulation, where SNR is finite, the term "degrees of freedom" is not correct by the rigorous definition used in the prior analysis.
Appendix A: Functional analysis definitions
Let \(\mathcal {X}\) be a Hilbert space. The orthogonal complement of \(\mathcal {S} \subseteq \mathcal {X}\), denoted \(\mathcal {S}^{\perp }\), is the subset \( \mathcal {S}^{\perp } \equiv \{x \in \mathcal {X}: \langle x,u\rangle =0\ \forall \ u\in \mathcal {S} \}.\) Let \(\mathcal {X}\) and \(\mathcal {Y}\) be vector spaces (e.g., Hilbert spaces) and let \(\mathsf {C}:\mathcal {X}\rightarrow \mathcal {Y}\) be a linear operator. Let \(\mathcal {S} \subseteq \mathcal {Y}\) be a subspace of \(\mathcal {Y}\). The nullspace of C, denoted N(C), is the subspace \(N(\mathsf {C}) \equiv \{x\in \mathcal {X}: \mathsf {C}x = 0 \}.\) The range of C, denoted R(C), is the subspace \(R(\mathsf {C}) \equiv \{\mathsf {C}x: x\in \mathcal {X}\}.\) The preimage of \(\mathcal {S}\) under C, denoted \({\mathsf {C}}^{\leftarrow }(\mathcal {S})\), is the subspace \({\mathsf {C}}^{\leftarrow }(\mathcal {S}) \equiv \{x \in \mathcal {X}: \mathsf {C}x \in \mathcal {S} \}\) (one can check that if \(\mathcal {S}\) is a subspace then \({\mathsf {C}}^{\leftarrow }(\mathcal {S})\) is a subspace also). The rank of C is the dimension of the range of C. A fundamental result in functional analysis is that the dimension of the range of C is also the dimension of the orthogonal complement of the nullspace of C (i.e., the coimage of C) so that we can write \( \mathop {\text {rank}} \mathsf {C} \equiv \text {dim} \,R(\mathsf {C}) = \text {dim}\, N(\mathsf {C})^{\perp }. \)
Appendix B: Functional analysis lemmas
Let \(\mathcal {X}\) and \(\mathcal {Y}\) be Hilbert spaces and let \(\mathsf {C}:\mathcal {X}\rightarrow \mathcal {Y}\) be a compact linear operator. There exists a singular system {σ k ,v k ,u k } for C, defined as follows. The set of functions {u k } form an orthonormal basis for \(\overline {R(\mathsf {C})}\), the closure of the range of C, and the set of functions {v k } form an orthonormal basis for N(C)⊥, the coimage of C. The set of positive real numbers σ k , called the singular values of C, are the square roots of the nonzero eigenvalues of (C ∗ C), arranged in decreasing order. The singular system diagonalizes C in the sense that for any (σ k ,v k ,u k )∈{σ k ,v k ,u k }, C v k =σ k u k . Moreover, the operation of C on any \({x}\in \mathcal {X}\) can be expanded as \( \mathsf {C} {x} = \sum _{k} \sigma _{k} \langle {x}, {v}_{k} \rangle {u}_{k}, \) which is called the singular value expansion of C x. See Sections 16.1 and 16.2 of [32] for a proof.
Let \(\mathcal {X}\) and \(\mathcal {Y}\) be Hilbert spaces and let \(\mathsf {C}:\mathcal {X}\rightarrow \mathcal {Y}\) be a linear operator with closed range. There exists a unique linear operator C +, called the Moore-Penrose pseudoinverse of C, with the following properties: (i) C + C x=x ∀x∈N(C)⊥ (ii) CC + y=y ∀y∈R(C) (iii) R(C +)=N(C)⊥ (iv) N(C +)=R(C)⊥.
See Definition 2.2 and Proposition 2.3 of [33] for a proof.
Let \(\mathcal {X}\)and \(\mathcal {Y}\) be finite-dimensional Hilbert spaces and let \(\mathsf {C}:\mathcal {X}\rightarrow \mathcal {Y}\)be a linear operator with closed range. Let \(\mathcal {S} \subseteq \mathcal {Y}\) be a subspace of \(\mathcal {Y}\). Then the dimension of the preimage of \(\mathcal {S}\) under C is
$$ \text{dim} \,{\mathsf{C}}^{\leftarrow}(\mathcal{S}) = \text{dim}\, N(\mathsf{C}) + \text{dim}\,(R(\mathsf{C})\cap \mathcal{S}). $$
For notational convenience, let \(d_{P} \equiv \text {dim}\, {\mathsf {C}}^{\leftarrow }(\mathcal {S})\), d N ≡dim N(C), and \(d_{R\cap \mathcal {S}} \equiv \text {dim}\,(R(\mathsf {C})\cap \mathcal {S})\). Thus we wish to show that \(d_{P} = d_{N} + d_{R\cap \mathcal {S}}\). First note that \( N(\mathsf {C}) \subseteq {\mathsf {C}}^{\leftarrow }(\mathcal {S}) \), since \(\mathcal {S}\) is a subspace and hence contains the zero vector, and the preimage of the zero vector under C is the nullspace of C. Denote the intersection between the preimage of S under C and the orthogonal complement of the nullspace of C (i.e., the coimage) as
$$ \mathcal{B} \equiv {\mathsf{C}}^{\leftarrow}(\mathcal{S}) \cap N(\mathsf{C})^{\perp}. $$
Note that \(\mathcal {B}\) is a subspace of \(\mathcal {X}\) since the intersection of any collection of subspaces is itself a subspace (see Thm. 1 on p. 3 of [46]). Every \(x\in {\mathsf {C}}^{\leftarrow }(\mathcal {S})\) can be expressed as x=w+u for some w∈N(C) and \(u \in \mathcal {B}\), and 〈w,u〉=0 for any w∈N(C) and \(u \in \mathcal {B}\). Thus, we can say that the preimage, \({\mathsf {C}}^{\leftarrow }(\mathcal {S})\), is the orthogonal direct sum of subspaces N(C) and \(\mathcal {B}\) ([32] Def. 4.26), a relationship we denote as \( {\mathsf {C}}^{\leftarrow }(\mathcal {S}) = N(\mathsf {C}) \oplus \mathcal {B}. \)
Let \(\{ a_{i} \}_{i=1}^{d_{N}}\) be a basis for N(C) and \(\{ b_{i} \}_{i=1}^{d_{\mathcal {B}}}\) be a basis for \(\mathcal {B}\), where d N =dim N(C) and \(d_{\mathcal {B}} = \text {dim}\, \mathcal {B}\). Construct the set \(\{ e_{i} \}_{i=1}^{d_{N}+d_{\mathcal {B}}}\) according to
$$ \{ e_{i} \}_{i=1}^{d_{N}} = \{ a_{i} \}_{i=1}^{d_{N}},\qquad\{ e_{i} \}_{i=d_{N}+1}^{d_{N}+d_{\mathcal{B}}} = \{ b_{i} \}_{i=1}^{d_{\mathcal{B}}}. $$
We claim that \(\{ e_{i} \}_{i=1}^{d_{N}+d_{\mathcal {B}}}\) forms a basis for \({\mathsf {C}}^{\leftarrow }(\mathcal {S})\). To check that \(\{ e_{i} \}_{i=1}^{d_{N}+d_{\mathcal {B}}}\) is a basis for \({\mathsf {C}}^{\leftarrow }(\mathcal {S})\), we must first show \(\{ e_{i} \}_{i=1}^{d_{N}+d_{\mathcal {B}}}\) spans \({\mathsf {C}}^{\leftarrow }(\mathcal {S})\), and then show that the elements of \(\{ e_{i} \}_{i=1}^{d_{N}+d_{\mathcal {B}}}\) are linearly independent. Consider an arbitrary \(x\in {\mathsf {C}}^{\leftarrow }(\mathcal {S})\). Since \({\mathsf {C}}^{\leftarrow }(\mathcal {S}) = N(\mathsf {C}) \oplus \mathcal {B}\), x=w+u for some w∈N(C) and \(u \in \mathcal {B}\). Since by construction, \(\{ e_{i} \}_{i=1}^{d_{N}}\) is a basis for N(C) and \(\{ e_{i} \}_{i=1+d_{N}}^{d_{N}+d_{\mathcal {B}}}\) is a basis for \(\mathcal {B}\), one can choose λ i such that \(w = \sum _{i=1}^{d_{N}} \lambda _{i} e_{i}\) and \(u = \sum _{i=1+d_{N}}^{d_{N}+d_{\mathcal {B}}} \lambda _{i} e_{i}\). Thus,
$$ x = w + u = \sum_{i=1}^{d_{N}} \lambda_{i} e_{i} + \sum_{i=1+d_{N}}^{d_{N}+d_{\mathcal{B}}} \lambda_{i} e_{i} = \sum_{i=1}^{d_{N}+d_{\mathcal{B}}} \lambda_{i} e_{i} $$
for some λ i . Thus \(\{ e_{i} \}_{i=1}^{d_{N}+d_{\mathcal {B}}}\) spans \({\mathsf {C}}^{\leftarrow }(\mathcal {S})\). Now let us show linear independence: that \(\sum _{i=1}^{d_{N}+d_{\mathcal {B}}} \lambda _{i} e_{i} = 0\) if and only if λ i =0 for all \(i\in \{1,2,\dots,d_{N}+d_{\mathcal {B}}\}\). The "if" part is trivial, thus it remains to show that \(\sum _{i=1}^{d_{N}+d_{\mathcal {B}}} \lambda _{i} e_{i} = 0\) implies λ i =0 ∀i. The condition \(\sum _{i=1}^{d_{N}+d_{\mathcal {B}}} \lambda _{i} e_{i} = 0\) implies
$$\begin{array}{*{20}l} \sum\limits_{i=1}^{d_{N}} \lambda_{i} e_{i} = -\sum\limits_{i=d_{N}+1}^{d_{N}+d_{\mathcal{B}}} \lambda_{i} e_{i}, \end{array} $$
which implies w=−u for some w∈N(C) and \(u \in \mathcal {B}\). Every element of N(C) is orthogonal to every element of \(\mathcal {B}\) by construction, hence the only way Eq. (129) can be satisfied is if w=u=0, that is if both sides of Eq. (129) are zero, implying λ i =0 for all \(i\in \{1,2,\dots,d_{N}+d_{\mathcal {B}}\}\) as desired. Thus, we have shown \(\{ e_{i} \}_{i=1}^{d_{N}+d_{\mathcal {B}}}\) is a basis for \({\mathsf {C}}^{\leftarrow }(\mathcal {S})\), and hence
$$ d_{P} = d_{N}+d_{\mathcal{B}}. $$
Consider the set \(\left \{\mathsf {C} e_{i}\right \}_{i=1+d_{N}}^{d_{N}+d_{\mathcal {B}}}\). By the definition of range, each element of the set \(\left \{\mathsf {C} e_{i}\right \}_{i=1+d_{N}}^{d_{N}+d_{\mathcal {B}}}\) is in R(C), and since by construction each e i is in \({\mathsf {C}}^{\leftarrow }(\mathcal {S})\), each element of \(\left \{\mathsf {C} e_{i}\right \}_{i=1+d_{N}}^{d_{N}+d_{\mathcal {B}}}\) is also in \(\mathcal {S}\). We therefore have that
$$ \text{span} \left\{\mathsf{C} {e_{i}}\right\}_{i=1+d_{N}}^{d_{N}+d_{\mathcal{B}}} \subseteq R(\mathsf{C}) \cap \mathcal{S}, $$
and since there are \(d_{\mathcal {B}}\) elements in \(\left \{\mathsf {C} e_{i}\right \}_{i=1+d_{N}}^{d_{N}+d_{\mathcal {B}}}\), it must be that \(d_{\mathcal {B}} \leq d_{R\cap \mathcal {S}}.\) Substituting the above inequality into Eq. (130) gives \(d_{P} \leq d_{N} + d_{R\cap \mathcal {S}}.\)
To complete the proof we must show that \(d_{P} \geq d_{N} + d_{R\cap \mathcal {S}}\). Let \(\{s_{i}\}_{i=1}^{d_{R\cap \mathcal {S}}}\) be a basis for \(R(\mathsf {C}) \cap \mathcal {S}\). By assumption R(C) is closed, thus we have by Lemma 6 that the Moore-Penrose pseudoinverse, C +, exists and satisfies the properties listed in Lemma 6. Consider the set \(\left \{\mathsf {C}^{+} s_{i}\right \}_{i=1}^{d_{R\cap \mathcal {S}}}\). We claim that
$$ \text{span} \left\{\mathsf{C}^{+} s_{i}\right\}_{i=1}^{d_{R\cap\mathcal{S}}} \subseteq N(\mathsf{C})^{\perp} \cap {\mathsf{C}}^{\leftarrow}(\mathcal{S}) \equiv \mathcal{B}. $$
By property (iii) in Lemma 6, we have that C + s i ∈N(C)⊥ for each \(\mathsf {C}^{+} s_{i}\in \left \{\mathsf {C}^{+} s_{i}\right \}_{i=1}^{d_{R\cap \mathcal {S}}}\). Since s i ∈R(C), we have that C(C + s i )=s i by property (ii) of the pseudoinverse, and since \(s_{i} \in \mathcal {S}\), we have that \(\mathsf {C} \mathsf {C}^{+} s_{i} = s_{i} \in \mathcal {S}\) for each \(\mathsf {C}^{+} s_{i}\in \left \{\mathsf {C}^{+} s_{i}\right \}_{i=1}^{d_{R\cap \mathcal {S}}}\). Thus, each element of \(\{\mathsf {C}^{+} s_{i}\}_{i=1}^{d_{R\cap \mathcal {S}}}\) is also in \({\mathsf {C}}^{\leftarrow }(\mathcal {S})\), the preimage of \(\mathcal {S}\) under C. Thus we have that each element of \( \left \{\mathsf {C}^{+} s_{i}\right \}_{i=1}^{d_{R\cap \mathcal {S}}}\) is in \(N(\mathsf {C})^{\perp } \cap {\mathsf {C}}^{\leftarrow }(\mathcal {S})\) which justifies the claim of Eq. (132). Now, Eq. (132) implies that \( d_{R\cap \mathcal {S}} \leq d_{\mathcal {B}} \). Substituting the above inequality into Eq. (130) gives \(d_{P} \geq d_{N} + d_{R\cap \mathcal {S}},\) concluding the proof. □
Corollary 1
Let \(\mathcal {X}\) and \(\mathcal {Y}\) be finite-dimensional Hilbert spaces and let \(\mathsf {C}:\mathcal {X}\rightarrow \mathcal {Y}\) be a linear operator with closed range. Let \(\mathcal {S} \subseteq R(\mathsf {C})\subseteq \mathcal {Y}\) be a subspace of the range of C. Then, the dimension of the preimage of \(\mathcal {S}\) under C is \( \text {dim}\, {\mathsf {C}}^{\leftarrow }(\mathcal {S}) = \text {dim}\, N(\mathsf {C}) + \text {dim}\,(\mathcal {S}). \)
The proof follows trivially from Lemma 7 by noting that since \(\mathcal {S} \subseteq R(\mathsf {C})\), \(R(\mathsf {C})\cap \mathcal {S} = \mathcal {S}\), which we substitute into Eq. 125 to obtain the corollary. □
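As a quick finite-dimensional sanity check of Lemma 7, the following is a hypothetical numerical example (using NumPy and SciPy; the matrix C, its rank, and the subspace S are chosen arbitrarily for illustration).

import numpy as np
from scipy.linalg import null_space, orth

rng = np.random.default_rng(3)
m, n, r = 7, 6, 3                                 # C maps R^6 -> R^7 with rank 3
C = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# a 2-dimensional subspace S that shares exactly one dimension with R(C)
S = orth(np.hstack([orth(C)[:, :1], rng.standard_normal((m, 1))]))

dim_null = null_space(C).shape[1]                 # dim N(C)
R = orth(C)                                       # orthonormal basis for R(C)
dim_meet = R.shape[1] + S.shape[1] - np.linalg.matrix_rank(np.hstack([R, S]))  # dim(R(C) ∩ S)

# the preimage C^<-(S) is the null space of P_{S-perp} C, where P_{S-perp} projects onto S-perp
P_Sperp = np.eye(m) - S @ S.T
dim_preimage = null_space(P_Sperp @ C).shape[1]

print(dim_preimage, dim_null + dim_meet)          # both print 4, as Lemma 7 predicts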
DW Bliss, PA Parker, AR Margetts, in Proceedings of the 2007 IEEE/SP 14th Workshop on Statistical Signal Processing. Simultaneous transmission and reception for improved wireless network performance (Institute of Electrical and Electronics Engineers (IEEE)New York, 2007), pp. 478–482.
AK Khandani. Methods for spatial multiplexing of wireless two-way channels, (2010). US Patent US 7817641 B1. https://www.google.com/patents/US7817641.
B Radunovic, D Gunawardena, P Key, APN Singh, V Balan, G Dejean, Rethinking indoor wireless: low power, low frequency, full duplex (2009). Microsoft Research, Technical Report # MSR-TR-2009-148, https://www.microsoft.com/en-us/research/publication/rethinking-indoor-wireless-low-power-low-frequency-full-duplex/.
M Duarte, A Sabharwal, in Proc. 2010 Asilomar Conference on Signals and Systems. Full-duplex wireless communications using off-the-shelf radios: feasibility and first results (Institute of Electrical and Electronics Engineers (IEEE)New York, 2010).
J Choi, M Jain, K Srinivasan, P Levis, S Katti, in MobiCom 2010. Achieving single channel, full duplex wireless communication (Association of Computing Machinery (ACM) PublicationsNew York, 2010).
M Jain, JI Choi, T Kim, D Bharadia, S Seth, K Srinivasan, P Levis, S Katti, P Sinha, in MobiCom 2011. Practical, real-time, full duplex wireless (New York, 2011), pp. 301–312. http://doi.acm.org/10.1145/2030613.2030647.
M Duarte, C Dick, A Sabharwal, Experiment-driven characterization of full-duplex wireless systems. IEEE Trans. Wireless Commun. 11(12), 4296–4307 (2012).
A Sahai, G Patel, A Sabharwal, Pushing the limits of full-duplex: design and real-time implementation (2011). Rice Univeristy, Technical Report # TREE1104. https://arxiv.org/abs/1107.0607.
MA Khojastepour, K Sundaresan, S Rangarajan, X Zhang, S Barghi, in ACM 1480 Workshop on Hot Topics in Networks. The case for antenna cancellation for scalable full-duplex wireless communications (ACMNew York, 2011), pp. 17:1–17:6.
E Aryafar, MA Khojastepour, K Sundaresan, S Rangarajan, M Chiang, in 1483 Proceedings of the 18th annual international conference on Mobile 1484 computing and networking. MIDU: enabling MIMO full duplex, ser. 1485 Mobicom '12 (ACMNew York, 2012), pp. 257–268.
M Duarte, A Sabharwal, V Aggarwal, R Jana, K Ramakrishnan, C Rice, N Shankaranarayanan, Design and characterization of a full-duplex multiantenna system for Wi-Fi networks. IEEE Trans. Vehicular Technol. 63(3), 1160–1177 (2014).
M Duarte, Full-duplex wireless: Design, implementation and characterization (2012). Ph.D.dissertation, Rice University. http://warp.rice.edu/trac/wiki/DuartePhDThesis.
T Riihonen, S Werner, R Wichman, in Wireless Communications and Networking Conference, WCNC 2009. Comparison of full-duplex and half-duplex modes with a fixed amplify-and-forward relay (IEEE, 2009) (Institute of Electrical and Electronics Engineers (IEEE)New York, 2009), pp. 1–5.
T Riihonen, S Werner, R Wichman, Z Eduardo, in IEEE 10th Workshop on Signal Processing Advances in Wireless Communications, 2009. SPAWC '09. On the feasibility of full-duplex relaying in the presence of loop interference (Institute of Electrical and Electronics Engineers (IEEE)New York, 2009), pp. 275–279.
B Day, A Margetts, D Bliss, P Schniter, Full-duplex bidirectional MIMO: achievable rates under limited dynamic range. IEEE Trans. Signal Process. 60(7), 3702–3713 (2012).
B Day, A Margetts, D Bliss, P Schniter, Full-duplex MIMO relaying: achievable rates under limited dynamic range. IEEE J. Selected Areas Commun. 30(8), 1541–1553 (2012).
T Riihonen, R Wichman, J Hamalainen, in IEEE International Symposium on Wireless Communication Systems. 2008. ISWCS '08. Co-phasing full-duplex relay link with non-ideal feedback information (Institute of Electrical and Electronics Engineers (IEEE)New York, 2008), pp. 263–267.
T Riihonen, S Werner, J Cousseau, R Wichman, in 42nd Asilomar Conference on Signals, Systems and Computers, 2008. Design of co-phasing allpass filters for fullduplex OFDM relays (Institute of Electrical and Electronics Engineers (IEEE)New York, 2008), pp. 1030–1034.
T Riihonen, S Werner, R Wichman, Mitigation of loopback self-interference in full-duplex MIMO relays. IEEE Trans. Signal Process. 59(12), 5983–5993 (2011).
E Everett, M Duarte, C Dick, A Sabharwal, in Asilomar Conference on Signals, Systems and Computers. Empowering full-duplex wireless communication by exploiting directional diversity (Institute of Electrical and Electronics Engineers (IEEE)New York, 2011).
E Everett, A Sahai, A Sabharwal, Passive self-interference suppression for full-duplex infrastructure nodes. IEEE Trans. Wireless Commun. 13(2), 680–694 (2014).
A Sahai, G Patel, C Dick, A Sabharwal, On the impact of phase noise on active cancelation in wireless full-duplex. IEEE Trans. Vehicular Technol. 62(9), 4494–4510 (2013).
A Sabharwal, P Schniter, D Guo, DW Bliss, S Rangarajan, R Wichman, In-band full-duplex wireless: Challenges and opportunities. IEEE Journal on Selected Areas in Communications. 32(9), 1637–1652 (2014).
A Poon, R Brodersen, D Tse, Degrees of freedom in multiple-antenna channels: a signal space approach. IEEE Trans. Inf. Theory. 51(2), 523–536 (2005).
A Poon, D Tse, R Brodersen, Impact of scattering on the capacity, diversity, and propagation range of multiple-antenna channels. IEEE Trans. Inf. Theory. 52(3), 1087–1100 (2006).
A Poon, D Tse, Degree-of-freedom gain from using polarimetric antenna elements. IEEE Trans. Inf. Theory. 57(9), 5695–5709 (2011).
E Everett, C Shepard, L Zhong, A Sabharwal, SoftNull: many-antenna full-duplex wireless via digital beamfoming. IEEE Trans. Wireless Commun. 15(12), 8077–8092 (2016).
A Poon, M Ho, in IEEE International Conference on Communications, 2003. ICC '03. Indoor multiple-antenna channel characterization from 2 to 8 GHz, vol 5 (Institute of Electrical and Electronics Engineers (IEEE)New York, 2003), pp. 3519–3523.
Q Spencer, B Jeffs, M Jensen, A Swindlehurst, Modeling the statistical time and angle of arrival characteristics of an indoor multipath channel. IEEE J. Selected Areas Commun. 18(3), 347–360 (2000).
RJ-M Cramer, An evaluation of ultra-wideband propagation channels (2000). Ph.D. dissertation, University of Southern California.
R Heddergott, P Truffer, Statistical characteristics of indoor radio propagation in NLOS scenarios. European Cooperation in the field of scientific and technical research, Valencia, Spain, In COST. 259:, 1–15 (2000).
N Young, An Introduction to Hilbert Space (Cambridge University Press, Cambridge, 1988).
Book MATH Google Scholar
HW Engl, M Hanke, A Neubauer, Regularization of Inverse Problems (Kluwer Academic Publishers, Dordrecht, 1996).
E Telatar, Capacity of multi-antenna gaussian channels. European Trans. Telecommu. 10(6), 585–595 (1999).
L Ke, Z Wang, Degrees of freedom regions of two-user MIMO Z and full interference channels: the benefit of reconfigurable antennas. IEEE Trans. Inf. Theory. 58(6), 3766–3779 (2012).
S Jafar, M Fakhereddin, Degrees of freedom for the MIMO interference channel. IEEE Trans. Inf. Theory. 53(7), 2637–2642 (2007).
S Krishnamurthym, S Jafar, in 2012 IEEE Global Communications Conference (GLOBECOM). Degrees of freedom of 2-user and 3-user rank-deficient MIMO interference channels (Institute of Electrical and Electronics Engineers (IEEE)New York, 2012), pp. 2462–2467.
DNC Tse, P Viswanath, Fundamentals of Wireless Communication (Cambridge University Press, Cambrige, 2005).
CA Balanis, Antenna Theory: Analysis and Design, 3rd ed., (Wiley-Interscience, Hoboken, 2005).
A Tang, X Wang, A-Duplex: Medium access control for efficient coexistence between full duplex and half duplex communications. IEEE Trans. Wireless Commun. 14(10), 15871–58851 (2015).
JY Kim, O Mashayekhi, H Qu, M Kazadiieva, P Levis, JANUS: a novel MAC protocol for full duplex radio. Stanford Univerisity, Tech. Rep. CSTR. 2.7(23) (2013). http://hci.stanford.edu/cstr/reports/2013-02.pdf.
N Singh, D Gunawardena, A Proutiere, B Radunovic, H Balan, P Key, in Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks (WiOpt), 2011. Efficient and fair MAC for wireless networks with self-interference cancellation (Institute of Electrical and Electronics Engineers (IEEE)New York, 2011), pp. 94–101.
Q Gao, G Chen, L Liao, Y Hua, in Computing, Networking and Communications (ICNC), 2014 International Conference on. Full-duplex cooperative transmission scheduling in fast-fading MIMO relaying wireless networks (Institute of Electrical and Electronics Engineers (IEEE)New York, 2014), pp. 771–775.
C Karakus, SN Diggavi, Opportunistic scheduling for full-duplex uplink-downlink networks (2015). CoRR, vol. abs/1504.05898. http://arxiv.org/abs/1504.05898.
Y Chen, A Sabharwal, Degrees of freedom of spatial self-interference suppression for in-band full-duplex with inter-node interference (2016). arXiv preprint arXiv: 1606.05809. https://arxiv.org/pdf/1606.05809.pdf.
PD Lax, Funtional Analysis (Wiley, New York, 2002).
This work was partially supported by National Science Foundation (NSF) Grants CNS 0923479, CNS 1012921, CNS 1161596 and NSF Graduate Research Fellowship 0940902.
Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
Evan Everett & Ashutosh Sabharwal
Correspondence to Evan Everett.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Everett, E., Sabharwal, A. Spatial degrees-of-freedom in large-array full-duplex: the impact of backscattering. J Wireless Com Network 2016, 286 (2016). https://doi.org/10.1186/s13638-016-0781-3
|
CommonCrawl
|
Two boxes are connected to each other by a string as shown in the figure
two boxes are connected to each other by a string as shown in the figure What is the A 15-N net force is applied for 6. Two blocks A and B of masses 10 kg and 15 kg are placed in contact with each other rest on a rough horizontal surface as shown in the figure. One point In the system shown above, the block of mass Mi is determine each of the following: c . 4 kg each and the Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. The 10-N box slides without friction on the Answer to: Two boxes are connected to each other by a string as shown in the figure. Assuming the pulley to be smooth and mass less, the tension in the string connecting B and C is nearly 1) 12 N 2) 17. kilograms, as shown in the diagram above. of magnitude 80. The coefficient of static friction between block A and the plane is 0. 1. asked • 11/24/12 In Figure 5-49, three connected blocks are pulled to the right on a horizontal frictionless table by a force of magnitude T3 = 64. A 1. The block on the table has mass m 1 = kg and the hanging block has mass m 2 = kg. Figure 8 Sparks are produced when alpha particles produce ionisation in the air gap. 5) Two 10-kilogram boxes are connected by a massless string that passes over Calculate each of the following. For Block 1, this acceleration is up the incline, or in the +x direction. So, I'm gonna have to subtract three kilograms times 9. Use g = 10 m/s2. If each block has an acceleration of 2. **In the figure to the right, two boxes of masses m and 3m are connected by a string while a force F is. 0-kg box. If m1 = 5. (b) Find the magnitude of the accelerations of the objects. What is true about the tension Tin the string? 10 N 30N OT< 30 N C) 7-10ND)T> 30N E) T-20N A) T- 30 N Two boxes are connected to each other by a string as shown in the figure. The two boxes are linked by a rope which passes over a pulley at the top of the incline, as shown in the diagram. If, when the system starts from rest, m2 falls 1. 00 8 with the vertical. 202. 0 kg) are connected by a massless string that passes over a massless, frictionless pulley. 500 kg and is in the form of a uniform solid disk. 8 Mar 2016 Two boxes of masses m1 and m2 are descending vertically at a falling and each mass has an acceleration of g/11, so the tension in the string Two boxes connected by a lightweight cord are resting on a table. When the masses are released, their accelerations have magnitude 4. Find (b) the magnitude of the acceleration of the objects, (c) the tension in the string, and (d) the speed of each. 0 MΩis connected in series with a resistor and the spark counter as shown in Figure 8. Objects A and B each of mass m are connected by light inextensible cord. Jul 02, 2013 · boxes are ml = 10 kg, m2 = 20 kg, and m3 = 30 kg. Jun 09, 2019 · 104. 2grams each are hung by two strings (each 1. The coefficient of kinetic friction between each block and the surface is 0. 6. 0 kg , (a) Draw a free body diagram for each object. The coefficient of friction between the blocks and surface is 0. (22 pts) Two masses are connected as shown in the figure below. A hand pulls string 1, which is attached to block A, so that the blocks move upward and gradually slow down. The table and the pulley are frictionless. A string or rope exerts a contact force on an object when it pulls on it. What is A 10-kg block is connected to a 40-kg block as shown in the figure. Problem 5. 30 kg and 7. 00s. 
The pulley hangs from the ceiling. 0 N weight of the picture is supported by string 1. What fraction of mass must be removed from one block and added to the other, so that the system has an acceleration of 1 / 5 t h of the acceleration due to gravity Two boxes of fruit on a frictionless horizontal surface are connected by a light string as in Figure P4. On the left, we show the objects just sitting on each other. The nooses will be loose and wide. Find T, the tension in the connecting rope, and the acceleration of the blocks. 71, are connected by a string of negligible mass passing over a pulley of radius 0. 0 m/s2 to the right, what is the magnitude F of the applied force? Two bodies of mass, 3 kg and 4 kg, are suspended at the ends of a massless string passing over a frictionless pulley. 5-kg block resting on a frictionless surface as shown. Two blocks are connected by a string of negligible mass that passes over massless pulleys that turn with. Virtual Work Theorem It states that the work done by the internal forces on a system is zero. The strings are tied to the rod with separation d = 2. Am I gonna have any other forces that try to prevent the system from moving? You might think the force of gravity on this 12 kilogram box, but look, that doesn't really, in and of itself, prevent the system from moving or not Two boxes, A and B, are connected by a lightweight cord and are resting on a smooth (frictionless) table. 23. 8 meters per second squared. 5-31, but only one is shown. Two teams of nine members each engage in a tug of war. (Figure 1) A. Parallel Aiding Inductors. 0° with respect to the horizontal. 06 m and are taut. 00 kg are connected by a light string that passes over a frictionless pulley as in Figure P4. If the whole system lie on a smooth horizontal plane, then the acceleration of each particle towards each other is (a) $\displaystyle \frac{\sqrt 3 F}{2m} $ (b) $\displaystyle \frac{ F}{2\sqrt Figure 1. The blocks 10 Aug 2018 The upper block is hung by another string. The height of two central columns B & C are 49 cm each. With one exception, each of the following units can be used to express mass. please help!!!! :( Feb 17, 2019 · 26. The boxes have masses of 12. 3 Mar 2013 Slide text: Two boxes are connected by a cord running over a pulley. Calculate the ratio of the speed of the Moon in its ancient A ball is whirled on the end of a string in a horizontal circle of radius R at constant 39. The pulley is a solid disk of mass m p and radius r. negligible friction, as shown in the figure above. Two blocks which are connected to each other by means of a massless string are placed on two inclined planes as shown in figure. 30. 0°, find the second force (a) in unit-vector notation and as (b) a magnitude and (c) an angle relative to the positive direction of the x axis. 00 kg crate. Assume the incline is frictionless and take m1 = 2. If the thread makes an angle of 30° with the positive plate as shown, what is the magnitude of the charge density on each plate? (a) 2. Block B slides over the horizontal top surface of a stationary block C and the block A slides along the vertical side of C, both with the same uniform speed. The string must be hung on the nails in such a way that the painting falls down if any of the two nails is pulled out of the wall. Tension T1 and T2 are respectively? - 7239182 Two blocks of masses 5 kg and 7 kg are connected by uniform rope of mass 4 kg as shown in the figure. The pulley is ideal and the string has negligible mass. 
Suppose the coefficient Motion of Three Boxes Connected by Two Strings. Aug 22, 2014 · Two masses m and M (m < M) are joined by a light string passing over a smooth and light pulley (as shown) A) The acceleration of each mass is g M m M m B) The tension in the string connecting masses is g M m 2Mm C) The thrust acting on the pulley is g M m 4Mm D) The centre of mass of the system (i. 95 kg, and θ = 50. A constant upward force . What is the tension in the rope? (A) 50 N throughout the rope (B) 75 N throughout the rope Jun 09, 2019 · The direction of the induced current is as shown in the figure, according to Lenz's law which states that the indeed current flows always in such a direction as to oppose the change which is giving rise to it. Jan 25, 2016 · Figure 2 A non-uniform rod AB has length 4 m and weight 120 N. Draw a free-body diagram for each mass. 6 m Jun 25, 2016 · Consider two blocks of masses m 1 and m 2 placed in contact with each other on a friction less horizontal surface. 71. High School Physics Chapter 20 Section 1 5 Nov 2019 Click here to get an answer to your question ✍️ Two boxes are connected to each other by a string as shown in the figure. Tension in 1. asked Oct 31, 2018 in Laws of motion by Minu ( 46k points) laws of motion N7) Two blocks A and B are connected by a string named string 2. 2) and try to give it a continuous up-and-down motion, with a little adjustment of the pace of oscillations, you can make at least the following waveforms: Figure 2. The net downward force of the system is (A) 40 N (B) 160 N (C) 200 N (D) 16 N 5. A Figure 5. 75 kg, m2 = 2. 8) You are lowering two boxes, one on top of the other, down the ramp shown in 5 Oct 2011 m2 connected by a string as shown in the diagram at right. mono (8 pts) Three identical masses M, connected by massless strings, are being pulled to the other side of a 400 m wide river whose waters flow downsream at 2 m/s. The spring is under a tension T and it has a mass density m. 3 m are placed along a diameter of turn table. Two blocks with masses m1 = 1. Taliyah H. The spring is rigidly connected to a metal rod at its other end. 2 × 10 –9 C/m 2 (e) 4. Since the tension term in each equation refers to the same tension(on the same string!) we can set the right side of each equation equal to each other: m1a + f1 = F - f2 - m2a. m 1 of block 1. The two outer columns A & D are open to the atmosphere. Figure 16. 5°. FIGURE 2-2 13) Two boxes of masses m and 2m are in contact with each other on a frictionless 17) In the Atwood machine shown in Fig. The coefficient of static friction is mu_s = 0. Suppose counter-balance the different effects the two weights (i. 43 kg . 0kg. An Atwood's Machine consists of two unequal masses connected by a single string that passes over an ideally massless and frictionless pulley as in Figure 4. Find (a) the magnitude of the acceleration of each block and (b) the tension in the string. The centre of mass of the rod is at the point G where AG = 2. Oct 02, 2011 · Two blocks, one with mass m1 = 0. Two concave glass refracting surfaces, each with radius of curvature R = 35 cm and refractive index , are placed facing each other in air as shown in figure. Two objects are connected by a light string that passes over a frictionless pulley as shown in the figure below. 3 N 14. 2 N 4) 13. What is the kinetic energy of box B just before it I believe the simplest way to solve our system of two equations is to solve for T in both: T = m1a + f1. 00 m/s2. 0-kg box, as shown in Fig. 
If the string has a length of 5. Drape the string loop over your right wrist keeping the two strings separated without any twists. If you pluck a string under tension, a transverse wave moves in the positive x-direction, as shown in . A properly-formed Navajo Opening should be much less angular than Opening A. When each is given charge Q, the angle between each string and vertical line becomes16°. B? Please Express your answer to two significant figures and Two objects A and B, of masses 5 kg and 20 kg respectively, are connected by a massless string passing over a frictionless pulley at the top of an inclined plane, as shown in the figure. 10, and m 2 = 2. 40 m and r2 = 0. 0 kg and 10. Find the mass M1, given that M2 (4. 4 between all surfaces (a) To what angle $\theta$ must the plane be inclined for sliding to commence? Feb 21, 2007 · Objects with masses m1 = 8. Find: a. The coefficient of kinetic friction between box A and the table is 0. 4-54), with m 2 initially farther down the slope what is the acceleration of each block? (4 ed) 5. 00 kg and can slide along a rough plane inclined 30. 34. (a) On the free-body diagrams, four different forces are shown. There is a node on one end, but an antinode on the other. The system is released from rest and. the contact forces between the boxes. The strings are tied to the rod with separation d = 1. Two blocks connected by a string are on a horizontal frictionless surface. This net force causes the string to stretch/compress depending whether the leading mass or the lagging mass has a larger mass. 00 kg hangs from the small pulley. Nov 14, 2014 · Two blocks are tied together with a string as shown in the diagram. 5 kg as shown in the figure below. Draw free-body diagrams showing and Jun 14, 2018 · In all the above cases dataset contain numerical value, string value, character value, categorical value, connection (one user connected to another user). Assume the strings are massless and do not stretch. The weight of the hanging mass provides tension in the string, which helps to accelerate the cart along the track. (a) Determine T1 and T2, the tensions in the two parts of the string. All of the boxes are identical, and the accelerations of the boxes are indicated in each figure. 6 kg B)4. Suppose the top three resistors all lead to 80 kg are connected by a massless string over a pulley in the shape of a solid disk is applied to the block as shown and the block slides to the right. The only difference between the two Apr 04, 2019 · Two masses m1 = 5 kg and m2 =10 kg connected by an inextensible string over a frictionless pulley, are moving as shown in the figure. 0 m and are taut. The coefficient of kinetic friction between the two boxes is 0. physics. The masses of the pulley and the string connecting the objects are completely negligible. Atwood's machine is a device where two masses, M and m, are connected by a string passing over a pulley. The two rails of a railway track, insulated from each other and the ground, are connected to a millivolt meter. The system is pulled by a force F = 1 0 N , then tension T 1 = . 00-kg object and the 5. Physics College Physics Three objects are connected by light strings as shown in Figure P4. What is true about the tension T in the string? Two blocks A and B are connected to each other by a string and a spring. 5) show the forces acting on each of the masses. 14kg ball is connected by means of two massless strings to a vertical, rotating rod. 
(b) Find the accleration of the Oct 18, 2014 · Homework Statement You are lowering two boxes, one on top of the other, down the ramp shown in the figure by pulling on a rope parallel to the surface of the ramp. There are two forces on the 2. A force F applied on the upper string produces an acceleration of 2m/s2 in the upward direction in both the blocks. What is the acceleration of the system? All we need to note to solve this is that due to connection by the string the acceleration of the two bodies must be equal and opposite to each other as long as the string is taught. A 45. The block on the frictionless incline is moving up with a constant acceleration of 2. When the blocks are spun around on a horizontal frictionless surface at an angular speed of 1. (a) Determine T 1 and T 2, the tensions in the two parts of the string. It Sep 04, 2015 · Consider that two inductors are connected in parallel with self inductances L 1 and L 2, and which are mutually coupled with mutual inductance M as shown in below figure. Consider a cart on a low-friction track as shown in Fig. When the engines on a rocket ship in deep space, far from any other objects, are turned off, **In the Atwood machine, shown on the diagram, two masses M and m are contact with each other on a frictionless surface. Initially the masses are held at rest at the same level. 18 kg . So, wejust calculated the accleration for the two blocks moving together,at = 2. psychiatric, psychological, tax, legal, investment, accounting, or other 16 Mar 2020 In a series-connected string of holiday ornament bulbs, if one bulb gets shorted out, The current through the other two resistors will: Four resistors are connected in series with a 6. (a) Physics Two masses m and M are attached with strings as shown. The 10 N box slides without friction on the horizontal table Two boxes are connected to each other by a string as shown in the figure. Strategy. Box A has mass 1 kg Blocks A and B and C are connected by a massless string and placed as shown, with Block A Here is the force diagram for this problem. gap. What is the kinetic energy of box B just before it reaches the floor? KE f =1 2 m B v2= m B m A +m B PE B0 = 1 2 9. Each pulley has a mass of 0. Extend hands apart. F. Two objects are connected by a light string that passes over a frictionless pulley as shown in Figure P5. 2 nC, and they come to equilibrium when each string is at an angle of u 5 5. Both Two block of masses M and m are connected to each other by a massless string and spring of force constant k as shown in the figure. 15:- Two bodies of masses 10 kg and 20 kg respectively kept on a smooth, horizontal surface are tied to the ends of a light string. The string connecting the 4. The string is assumed to be light (i. Q: In figure two identical particles each of mass m are tied together with an inextensible string. Starting from rest, box B descends 12. In the wall, there are two nails, horizontally next to each other. 0 m/s 2, and θ = 30. Two small metallic spheres, each of mass m 5 0. The string 7) Two objects having masses m1 and m2 are connected to each other as shown in the figure and are released from rest. 0 N. 5 × 10 –9 C/m 2 (d) 2. (b) Find the tension in the rope. 200 g, are suspended as pendulums by light strings of length L as shown in Figure P23. (a) Find the acceleration of the system. The system is released from rest. solving for a: a = (F - f1 - f2)/(m1 + m2) 5) In Figure, a 4. 
59m, what Two blocks A and B of respective masses 4 kg and 6 kg lie on a smooth horizontal surface and are connected by a light inextensible string. 6 kg 20) mg*sin(20deg) +mu*mg*cos(20deg) = 2kg*g 21)A container explodes and breaks into three fragments that fly off 120° apart from each other, Two blocks are connected by a string of negligible mass that passes over massless pulleys that turn with. What fraction of mass must be removed from one block and added to the other, so that the system has an acceleration of 1 / 5 t h of the acceleration due to gravity Two blocks A and B of masses 10 kg and 15 kg are placed in contact with each other rest on a rough horizontal surface as shown in the figure. There should be two parallel strings oriented inward, one connected the index fingers and the other connected the thumbs. The masses move such that the portion of the string between P 1 and P 2 is parallel to the incline and the portion of the string between P 2 and M 3 is horizontal. The mass . • Two boxes are connected to each other as shown. What are the (a) tension in the lower string and , (b) How many revolution per minute does it make? Two boxes, m 1 = 1. 4–22a. The coefficient of kinetic friction between the ramp and the lower box In the case of the two boxes, the second box (one at the front) is indeed canceling out the force of the first box (the one doing the pushing) but only by being accelerated. Box A, of mass 8. We call this a tension force, represented by the symbol T. Two blocks are connected by a massless rope as shown below. Consider the figure (a), in which inductors L 1 and L 2 are connected in parallel with their magnetic fields aiding. Figure 2, two boxes A and are connected to each end of a light B vertical rope. (b)This figure could not possibly be a normal mode on the string because it does not satisfy the boundary conditions. Three equal masses A, B and C each 2kg connected by strings are arranged as shown in the figure. If you have an account directly with Fetch each box must be set up with the For your boxes to share content, they all need to be connected on the same account. 25. 85 kg, and m3 = 4. You have a painting with a string attached to it. 030 kg, are connected to one another by a string. A horizontal force of 200 N is applied to block A. The small mass element oscillates perpendicular to the wave motion as a result of the restoring force provided by the string and does not move in the x-direction Now, how to figure out the tension in the string. drawing a free -body diagram, the main point is just to indicate all the forces that exist Three boxes are pushed with a force F across a frictionless table, as shown in Fig string is connected to each mass and wraps halfway around a pulley, as shown in Fig A) If the kinetic energy of an object is doubled, its momentum will also double. 5 kg and rn3 = 2. Each of the first team's members has an average mass of 68 kg and exerts an average force of 1350 N horizontally. 2 a) 14 Mar 2020 For correctly including the other two forces on Student A. 45 kg . For a correct free-body diagram for Student B (including both weight and tension). 0 long) from the same point. 00 kg hangs from a string wrapped around the large pulley, while a second block of mass M = 8. The normal force is NOT simply "m1g"--WHY?. No problems for about a week or two. 0 kg and the hanging mass is 1. 67 kg . There are two blocks connected by a string and tied to a wall on an incline. The figure is shown at a time t 0. 10. 
When released, the heavier object accelerates downward while the lighter object accelerates upward. The surface of the table is frictionless. Let a force F be applied on block of mass m 1, then the value of contact force between the blocks is Now suppose our two surfaces that are experiencing friction have lots of little protuberances like this that can bend. A horizontal force F = 600 N is applied to (i) A, (ii) B along the direction of string. Three blocks of masses 2 k g, 3 k g and 5 k g are connected to each other with light string and are then placed on a frictionless surface as shown in the figure. 00 kg are connected by a light string that passes over a frictionless pulley of moment of inertia 0. A force F applied on the upper string produces an acceleration of 2m/s2 in the upward direction in both The upper graph shows the acceleration of the object as a function of time. For F 1 = 20. After releasing from rest, the magnitude of acceleration of the centre of mass of both the blocks is (g = 10 m/s) :- 53° Fixed 37 Two blocks are connected by a string as shown in the diagram. 0 N is applied to the 20. The oscillator can be set anywhere from 1 to 500 Hz. The blocks are connected to a hanging weight by means of a string that passes over a pulley as shown in the figure below, where m1 = 1. The 10-N box slides without friction on the ho Answer to 3- Two boxes are connected to each other by a string as shown in the figure. After it is released, find (a) the acceleration of each of the boxes, and (b) the tension in each string. 00 kg and m2 = 10. Assume that M > m. 5 kg D)2. 00-kg box falls through a distance of 1. A box of weight 100 newtons hangs from the rope. Answer: Oct 17, 2014 · Two blocks, with masses m A and m B are connected to each other and to a central post by thin rods as shown in Fig. 3 m/s2, Tl = 17 N, T3 = 2 IN] h c solve o n,ltq Box 2 21 N Two blocks are connected by a massless rope as shown below. the net force on each box. 050 kg and one with mass m2 = 0. Figure P10. string depends on the tension on the string and the mass density L Example A transverse wave with a speed of 50 m/s is to be produced on a stretched spring. 0 m and a mass of 0. Two boxes are connected by a weightless cord running over a very light frictionless pulley as shown in the figure. What is the Two forces act on a 4. Largest 1 OR, All of these ropes have the same tension. A small mass m hangs from a thin string & can swing like a pendulum. 1 × 10 –8 C/m 2 (b) 5. The Dec 04, 2015 · 6. 91 kg ball is connected by means of two massless strings, each of length L = 1. , its length increases by a negligible amount because of the weight of the block). Q. The pulleys rotate together, rather than independently. Find. 65wo . This is pulled at its centre with a constant force F. Find the amount of charge Q. The linear density of the strings is 5. 00 kg box in the overhead view of Fig. The 10-n box Answer to Two boxes are connected to each other by a string as shown in the figure. The masses M 2 and 3 are 0. The professor starts a triangular pulse moving towards the right as shown in the figure below. Each wave travels from A to B and reflects at B. At each point of juncture within a polynucleotide, the 5' end of one nucleotide attaches to the 3' end of the adjacent nucleotide through a connection called a phosphodiester bond (Figure 3). Shown below are boxes that are being pulled by ropes along frictionless surfaces, accelerating toward left. 
You can connect two or more SharePoint Framework components together and exchange data between them using dynamic data. the magnitude of the tension in these ropes. In Case 1, the other end of In Atwood's machine (see Figure 5. 78 Sep 21, 2010 · Two blocks connected by a light string are being pulled to the right by a constant force of magnitude 30. 0 kg and m2 = 4. 317x10-3 kg/m. Mar 19, 2018 · The PT is basically a box containing a relay with its switched connection wired across one of the conductors in an AC power plug. And when A 5. 26. Block . If suppose, N number of devices are connected with each other in mesh topology, then total number of dedicated links required to connect them is N C 2 i. C) 5. Two blocks of mass M1 = 10 kg and m2 = 5 kg connected to each other by a massless inextensible string of length 0. The block is subject to two forces, a downward force $m g$ In other words, in equilibrium, the tension $T$ Figure 27 shows a slightly more complicated example in which a block of mass $m$ connected by a light inextensible string. What must be true Shown below are boxes that are being pulled by ropes along frictionless surfaces, accelerating toward left. FIGURE 4-22 In the figure below is shown the system below are shown two blocks linked by a string through a pulley, where the block of mass m 1 slides on the frictionless table. A force of 50. 100. 004 kgs2 and a radius of 5 cm The coefficient of friction for the tabletop is 0. 0 and 18. Find the m ass of box B if the tension in the rope is 36. The pulley is ideal 9) Two boxes are connected to each other by a string as shown in the figure. A horizontal force FP of 40 N is applied by a person to the 10 kg box as shown in the figure. the tension in the cord. The tension in the upper string is 50. 50 s, determine the coefficient of kinetic friction between m1 and the table. 0kg, there is a rope connecting that block to a pulley which then connects to another block that is 2. The string passes over a frictionless pulley as shown in Fig. Figure P4. In addition, recognize that all forces of interaction between the two objects are Third Law pairs. Sep 26, 2018 · Most people will give you a long approach towards solving for the acceleration of two bodies attached to pulleys. e. 0 kg. Problem: Two identical small Styrofoam balls that are 2. OR External links lead to other webpages (those not covered in the above two cases, wiki or not wiki). 4-1, if 25) A stone, of mass m, is attached to a strong string and whirled in a vertical circle of radius r. 250 m and moment of inertia I. To set up two or more new Fetch boxes, you will need to follow the steps below:. 4500 VA power supply with an output of 5. 0° to the horizontal. The 10-N box slides without friction on the horizontal table surface. The system is released from rest and the 1. 0 kg C)1. 5 rev/s, what is the tension in each of the Q: In figure two identical particles each of mass m are tied together with an inextensible string. (b ) The Newton's second law applies to each, so we write two vector equations:. the acceleration of each box; and b. 0 kg and m 2 = 20. When each reflected wave reaches point A, it reflects again and the process repeats. 00 s. When the radioactive source is moved close to the wire gauze, sparking is seen in the air gap. Since the two bodies have different masses and same acceleration, that means one of them is exerting a larger force on spring than the other. 
What would be the mass of the falling mass, m 2, if both the sliding mass, m 1, and the tension, T, in the cord were known? pulley of radius 50 cm, as shown in the figure. Each object feels the magnitude of the interaction equally but in the opposite direction. E) 6. A horizontal force F = 90 N is applied to m1. What are the (a) tension in the lower string and , (b) How many revolution per minute does it make? Two identical blocks each of mass "M" are tied to ends of a string and the string is laid over a smooth fixed pulley. 0 N making an angle of 25o from the horizontal. Figure 5-31 Problem 7. The coefficient of kinetic friction between the blocks and the surface is 0. 060 kg, what tension on the string is required. Each box has a string attached that passes over a pulley. friction (where two objects are at rest with respect to each other). Draw a free-body diagram for each At approximately what speed will the string break? A) 6. 0 kg with a coefficient of 0. The block M slides over the Two blocks are connected by a string that goes over an ideal pulley as shown in the figure and pulls on block A parallel to the surface of the plane. and box B has mass 8. 5–41. a) Find the magnitude of the acceleration of the two masses b) Find the tension in the string Three masses are connected by two strings as shown: A 3. 0 m in 4. Figure 6. We assume that the string has no mass so that we do not have to consider it as a separate object. 2 × 10 –8 Oct 09, 2012 · Two particles, each of mass m, are connected by a light string of length 2L, as shown in figure. (a) Determine the acceleration of each box and the tension in the string. T = F - f2 - m2a. 07/31/2020; 12 minutes to read +2; In this article. 0 s to a 12-kg box initially at rest. A point object O is placed at a distance of R/2 from one of the surfaces as shown. 0°. Watch recordings on one Fetch box from another box in another room. 0 kg, is initially at rest on the top of the table. N(N-1)/2. If you hold end A of the string (Fig. heavy crates on the roof of a building, as shown in the figure. (There is a block on a horizontal incline, its mass is 1. 2124. If the length of each string is 1. character etc as shown in Figure 1 The setup for the experiment is shown in Figure 2. There is no friction on the table surface or in the pulley. 1. What is true about the tension T in the string? 14) A) T = 20 N B) T < 30 N C) T = 30 N D) T = 10 N E) T > 30 N Two blocks are connected by a massless rope as shown below. Dec 22, 2018 · Three blocks with masses m, 2m and 3m are connected by strings, as shown in the figure. 7 N 3) 24. In the figure below is shown the system below are shown two blocks linked by a string through a pulley, where the block of mass m 1 slides on the frictionless table. 2. 10 kg and m2 = 3. Both boxes move together at a constant speed of 19. If the whole system lie on a smooth horizontal plane, then the acceleration of each particle towards each other is (a) $\displaystyle \frac{\sqrt 3 F}{2m} $ (b) $\displaystyle \frac{ F}{2\sqrt by the two string strands at angles to each other above the weight. between two large parallel conducting plates. We assume that the string is massless and the pulley is massless and frictionless. The excess charge on each plate is equal in magnitude, but opposite in sign. 4 m/s B) 8. The free-body diagrams (Fig. Two collinear forces, of magnitudes F N and 30 N, act on each of the blocks, and in opposite directions, as shown in the figure above. An upward force F = 200 N is applied on the system. 
***The total force (which is the total mass times the acceleration) is equal to the sum of the forces. The tension in the upper string is 58. 5) In Figure, a 4. In the Figure 1, there are 5 devices connected to each other, hence total number of ports required is 4. The pulley is ideal Two boxes are connected to each other by a string as shown in the figure. Two blocks, as shown in Figure P10. Jun 15, 2008 · Two packing crates of masses m1 = 10. Many problem-solving Note that no internal forces are shown in a free- body diagram. (d) The readings may be anything but their sums will be 10 kg. The coefficient of friction between block M and plane horizontal surface of A is . As you can see from the way the picture is drawn, if the mass on the table (the 5kg mass) was to The magnitude of acceleration at two boxes is different! How would the force of tension be multiplied along a straight piece of string? So we'll say that the acceleration in a given direction is gonna equal the net force in that 18 Jun 2013 These include string-pulling in ravens and keas (Heinrich and Bugnyar The pigeons were trained to directionally move two different boxes towards Shown in the top left panel of Figure 3 are the tracks of each box around (b) Boxes A and B push on each other with equal forces of less than 100 N. 1 Two masses, m 1 and m 2, situated on a frictionless, horizontal surface are connected by a massless string. which is suspended from a fixed beam by means of a string, as shown in Fig. A block of mass 10 kg is suspended from two light spring balances, as shown in the figure. A) 4. 00 m in 1. The 10 -N box slides without friction on the horizontal table surface. When the plastic tube is moved in a small circle above your head, the racketball moves around in a horizontal circle at the end of a string that passes through the tube and has a mass hanger with slotted masses suspended from its lower end. 20, are placed on a plane inclined at 30°. Level II: Continuous Charge Distributions: Jun 04, 2020 · Rotate your wrists so that your palms now face each other. The inclined plane is at an angle of 38. There should be two crossed strings, and a near and far wrist string. 2 A parallel-plate capacitor Experiments show that the amount of charge Q stored in a capacitor is linearly In a figure two blocks of masses 2. cos = 𝑑𝑗. Feb 13, 2013 · Two objects with masses of 2. A force, F, is exerted on one of the masses to the right (Fig P5. Calculate. 5 m/s2 (B) 5 m/s2 (C) 6 m/s2 (D) 7. The mass of the block on the table is 4. What is true about the tension T in the string? A) T = 10 N B) T = 20 N C) T = 30 N D) T < 30 N E) T > 30 N Two boxes are connected to each other by a string as shown in the figure. A block of mass m = 3. 34. Both pulleys are frictionless and massless. A horizontal force F P of 40. 06 m, to a vertical, rotating rod. surface, as shown in the following figure. 37) Two blocks are connected by a string, as shown in the figure. 26. , m1g 15 Jun 2019 Two blocks A and B are connected to each other by a string and a spring , the string passes over a frictionless pulley as shown in the figure. 95 kg. A particle of weight 40 N is placed on the rod at the point P, where AP = x metres. 34 kg B) 3. After an upward force F is applied on block m, the masses move upward at constant speed v. Remember, there is always zero net force, but often there are accelerations involved to make that force zero. Let each resistor have a value of 820 Ω. 0 kg with a coefficient of kinetic friction of 0. How long are the strings? 
11. They are released from rest. 14) Two boxes are connected to each other by a string as shown in the figure. The spheres are given the same electric charge of 7. The system has constant acceleration of magnitude 2 ms −2. In most cases and in purely mathematical terms, this system equation is all you need and this is the end of the modeling. The system is pulled by a force \[F=10N,\] then tension \[{{T}_{1}}=\] [Orissa JEE 2002] As the angle of string 1 approaches 90° and the angle of string 2 approaches 0°, the tension in string 2 drops to zero and the entire 2. (2) The left hand then moves to the right and through the two hanging string nooses. Find (a) the acceleration of each box, and (b) the tension in the cord connecting the boxes. 0-kg and a 10. 0 kg ball is connected by means of two massless strings, each of length L 1. 38). three blocks of masses m1 ,m2 ,m3 are connected by massless strings as shown on a friction less table they are pulled witha force T3 =40N ,if - 3434514 13. Ans: This force of gravity right here. In this example problem, there are two strings, one with an angle of 25 degrees, and the other with an angle of 65 degrees, and a mass: 5 kilograms. 150 lb. The blocks are released from rest. 30 kg are connected by a light string that passes over a frictionless pulley, as in the figure below. The acceleration of the system is (A) 2. This is shown in the figure below. 2N. 39 m/s2, is 30o, and is 0. Three blocks are connected to each other with light inextensible strings as shown in the figure. 45. 00-kg object passes over a light frictionless pulley. 85, where m 1 = 10. If T and T' be the tensions in the two parts of the string, then As the angle of string 1 approaches 90° and the angle of string 2 approaches 0°, the tension in string 2 drops to zero and the entire 2. This step-by-step guide is meant to show you how to approach problems where you have to deal with moving objects subject to friction and other forces, and you need to apply Newton's Laws. the separation between the images of O formed by each refracting surface. The pulley is 15 Mar 2015 Two boxes are connected to each other by a string as shown in the figure. 00 m. -- No Friction. Determine (a) the acceleration of each object and (b) the tension in the two strings. Jan 12, 2015 · See also: An Atwood's Machine (involves tension, torque) You are given a system that is at rest; you know the mass of the object, and the two angles of the strings. 2 m/s C) 12 m/s D) 15 m/s E) 18 m/s 2. A horizontal force of 100N is exerted on. Oct 10, 2011 · M1 and M2 are two masses connected as M2 is over a cliff. The string Apr 14, 2018 · Two masses m 1 = 5kg and m 2 = 10kg, connected by an inextensible string over a frictionless pulley, are moving as shown in the figure. 6 (a) Block 1 is connected by a light string to block 2. A constant upward force F= 80. Two blocks, as shown in Figure, are connected by a string of negligible mass passing over a pulley of radius 0. The coefficient of friction between the table and m1 is 0. 5 while there is no friction between m2 and the table. 90 J Both masses have the same v. The system is pulled by a force \[F=10N,\] then tension \[{{T}_{1}}=\] [Orissa JEE 2002] 7) Two objects having masses m1 and m2 are connected to each other as shown in the figure and are released from rest. 27), the two blocks connected by a string passing over. objects. 
Theeffect of the string (based on our original calculation of a1 anda2) is to apply a force to block 1 and slow it down and apply anequal but opposite force on block 2 and speed it up. Three blocks of masses 2 kg, 3 kg and 5 kg are connected to each other with light string and are then placed on a frictionless surface as shown in the figure. We've assumed that these can be two different substances, so we've colored one red, the other blue. , its mass is negligible compared to that of the block) and inextensible (i. m/s2 (up the incline) Find the tension in the string. 0 N is applied to box A. a massless string as shown above, the tension at any point in the string is (A) W cosO 277" - (B) (C) wcoso (D) 2 cosO (E) COS O 2. The mass of block B is greater than that of block A. The setup for the experiment is shown in Figure 2. 00 kg, and θ = 58. For the system to be in equilibrium we have (A) tan = 1 + m 2M (B) tan = 1 + M 2m (C) tan = 1 + 2m M (D) tan = 1 + 2M m 22. 0-N The free-body diagram for the 10. 0 N ? May 08, 2020 · N-1. The inclines are frictionless. The strings remain taut at all times. Two boxes are connected to each other by a string as shown in the figure. To verify that resistances in series do indeed add, let us consider the loss of electrical power, called a voltage drop, in each resistor in Figure 2. What is the acceleration of the two masses? Start with three free-body diagrams, one for each mass and one for the pulley. (a) What is magnitude of the acceleration of the two teams? The two blocks in this problem are connected together and so have the same value (magnitude) for acceleration. A steady force F is applied at the midpoint of the string (x = 0) at a right angle to the initial The simplest example of a capacitor consists of two conducting plates of area, which are parallel to each other, and separated by a distance d, as shown in Figure 5. 0kg . 8 J=4. 250 16. What is the net force on the block of mass 2m? (g is the acceleration due to gravity) (a) 3 mg (b) 6 mg (c) zero (d) 2 mg Dec 04, 2018 · 3 are connected by strings of negligible mass which pass over massless and frictionless pulleys P 1 and P 2 as shown in the Figure. 00 kg mass as shown. The force diagram is as follows: In Equilibrium, the sum of the forces in the y-direction, F = 0 T 1y + T 2y = F w T 1y and T 2y are the adjacent sides of the triangle shown above so we need to use cos to find the tension T 1 or T . D) 1. 3 m. e Two boxes lt br gt Two blocks of mass M1 10 kg and m2 5 kg connected to each other by . The tension in the rope connecting the two boxes That is, there's two objects moving together and connected in some manner by a force. In the Figure 1, there are 5 devices connected to each other, hence In other words, for this problem you will need to make two free body diagrams and set up two sets of equations. (4 ed) 5. The strings are tied to the rod and form two sides of an equilateral triangle. Tension is always directed along the line of the rope or string, with no component perpendicular to it. In the arrangement shown in the figure, the acceleration of the mass is, (Ignore friction) 1) 2 F a Three blocks of masses 2kg 3kg 5 kg are connected to each other with a light string and are then placed on a frictionless surface as shown in figure the system is pulled by a F= 10 Newton Q: Two blocks of mass 5 kg and 9 kg are connected by a string of negligible mass that passes over a frictionless pulley. Block A has a mass of 3. 
They are constrained to move on a frictionless ring in a vertical plane as shown in figure. 2) In your diagram, each arrow represents the force due to some object other than Using free body diagrams, show that the tension in the segment connecting attached with a string of negligible mass over a frictionless, massless pulley. The tension in the upper string is 80 N. (Take g = 10 m/s2) 4. Oct 06, 2012 · Two objects are connected by a light string that passes over a frictionless pulley as shown in the figure below. The other end of the rope is attached to a second block. 00 kg, m2 = 5. 00 kg and 3. What is the magnitude of the force that box A exerts on box. A & C are maintained at a temperature of 95° C while the columns B & D are maintained at 5° C. The 10 - N box slides without friction on the horizontal table surface. (a) The dots below represent the two blocks. exerts a force due to its weight, which causes the system (two blocks and a string) to accelerate. Figure (a) Both the scales will read 10 kg. (a) Draw free-body diagrams of both objects. N Two blocks connected by a string are pulled across a horizontal surface by a force applied to one of the blocks, as shown. According to Ohm's law, the voltage drop, V, across a resistor when a current flows through it is calculated using the equation V = IR, where I equals the current in amps (A) and R is the resistance in ohms (Ω). The coefficient of friction of horizontal surface is 0. (3) The little fingers pick up first the far then the near crossed strings. The coefficient of friction between the block and the table is \mu= The pulley is frictionless. Two blocks of mass M 1 = 10 kg and m 2 = 5 kg connected to each other by a massless inextensible string of length 0. Draw free-body diagrams showing and Oct 29, 2020 · Using a samsung galaxy s9+ Had it paired to a Dual XDM16BT in my old car for months, no problem. What is the acceleration of each box? The diagram for the situation looks like this: The Connect SharePoint Framework components using dynamic data. Two strings of the same material are connected to the same oscillator, pulled tightly over two light, frictionless pulleys and then connected to two 5 kg hanging masses as shown in the diagram. When you apply a specific voltage (normally between 3V and 5V) to the input connections on the PT (shown with two red wires connected to it in the figure) the relay triggers and AC current passes through the power cord. It is connected via a massless string over a massless, frictionless pulley to a hanging block of mass 2. 3 the blocks are released from rest using energy methods find the speed of the upper block when it has moved 0. The mass element is small but is enlarged in the figure to make it visible. What is the minimum mass m that will stick and not slip? A)4. 0 kg) accelerates downwards at 3. 0-kg box are touching each other. 0cm/s . Therefore, I used the same symbol for the acceleration of each. These techniques also reinforce concepts that are useful in many other areas of physics. e M and m) moves down with an acceleration of g Solving problems which involve forces, friction, and Newton's Laws: A step-by-step guide. 0 N is applied to the 10. attached by strings to boxes of masses ml = 1. Determine the acceleration of the system and the tension, T, in the string. Each of the second team's members has an average mass of 73 kg and exerts an average force of 1365 N horizontally. There is no sideways force. m 2 of block 2 is greater than the mass . Box A has mass 24. 0 N, a = 12. 
May 22, 2019 · Two masses, m 1 and m 2, are connected by a cord and arranged as shown in the diagram with m 1 sliding along on a friction less surface and m 2 hanging from a light frictionless pulley. 0-kg object is shown at the right. Get rid of the old car put stereo in new car. 060kg)2 30N 5. 10 kg are connected by a massless string, as shown in the Figure (the figure shows m1 on top of a box and m2 hanging off the side of the box). But there's a shorter method. Two boxes are connected to each other as shown. Two boxes are placed next to each other on a smooth flat surface. 00 kg, m2 = 7. Jan 31, 2016 · In the figure, a 1. (c) Find the tension in the string. (c) The upper scale will read 10 kg and the lower zero. ℎ𝑦𝑝. 4. 15. Two blocks are connected by a rope, through a pulley as shown in this figure. What must be true Two blocks are connected by a massless rope as shown below. Wikilinks are visibly distinct from other text, and if an internal wikilink leads to a page that does not yet exist, it usually has a different specific visual appearance. The total current through the Question: Two objects (53. The inner block is connected to a central pole by another string as shown in the figure (refer to text book) with r1 = 0. [a = 1. box A. Now we have two differential equations for two mass (component of the system) and let's just combine the two equations into a system equations (simultaenous equations) as shown below. Boxes A and B are in contact on a horizontal, frictionless. The 4. Find the acceleration of the 4. The spring passes over a frictionless pulley connected rigidly to the egde of a stationary block A. Strings, pulleys, and inclines Consider a block of mass which is suspended from a fixed beam by means of a string, as shown in Fig. a) Find the magnitude of the acceleration of the two masses b) Find the tension in the string Two blocks are connected by a massless rope as shown below. (b) Both the scales will read 5 kg. There are For each object separately, sketch a free-body diagram, showing all the forces Two boxes are connected by a lightweight (massless!) cord & are resting on a smooth (frictionless!) table. This capability allows you to build rich experiences and compelling end-user solutions. Two 10-kilogram boxes are connected by a massless string that. 400. The blocks revolve about the post at the same frequency f (revolutions per second) on a frictionless horizontal surface at distances r A and r B from the post. The upper block is hung by another string. 00 kg crate lies on a smooth incline of angle 40. All strings can be treated as massless. The apparatus shown in the figure consists of four glass columns a connected by horizontal sections. The pulley is light (massless) and frictionless. 78. The two blocks are said to be coupled. 00-kg mass sliding on a frictionless table is connected by a string which runs over an ideal pulley to a mass M, from which is suspended a 2. The pulse is triangular and is not symmetric. The string is attached to the upper two corners of the painting. 31 (a) The figure represents the second mode of the string that satisfies the boundary conditions of a node at each end of the string. 0-V supply, with values shown in Fig. 70. The rod is suspended in a horizontal position by two vertical light inextensible strings, one at each end, as shown in Figure 2. Two identical blocks each of mass "M" are tied to ends of a string and the string is laid over a smooth fixed pulley. A uniform rope of weight 50 newtons hangs from a hook as shown above. 
the acceleration of the three boxes. 5 m/s2 6. Microscopic View A powerful microscope would see that the string was made up of atoms connected by molecular As shown in . A light string is attached to the cart and passes over a pulley at the end of the track and a second mass is attached to the end of this string. Take g = 10 ms-2. Feb 17, 2016 · 1) Two objects having masses m 1 and m 2 are connected to each other as shown in the figure and are 11) released from rest. 75 m. (a) What acceleration does each block experience? (b) If a taunt string is connected to the blocks (Fig. 2 m. 0m == Superposition Principle • When two Jun 01, 2011 · 06<br />Problem<br />1997<br />Two blocks of mass M1 = 10 kg and m2 = 5 kg connected to each other by a massless inextensible string of length 0. The rope is also applied at the other end of the rope, where it. 25 m, to a vertical, rotating rod. F v m/L = vm2 F L = (50m/s) (0. two boxes are connected to each other by a string as shown in the figure
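Several of the excerpts above reduce to the same textbook setup: two boxes joined by a light string over a massless, frictionless pulley, one box on a frictionless table and the other hanging. A minimal sketch of that standard calculation in Python, with illustrative masses rather than the values from any particular figure:

```python
# Two boxes joined by a light string over a massless, frictionless pulley:
# box 1 (mass m1) slides on a frictionless table, box 2 (mass m2) hangs.
# Newton's second law for each box: T = m1*a  and  m2*g - T = m2*a,
# so a = m2*g / (m1 + m2) and T = m1*m2*g / (m1 + m2).
g = 9.8              # m/s^2
m1, m2 = 10.0, 5.0   # kg (illustrative values only)
a = m2 * g / (m1 + m2)
T = m1 * a
print(f"acceleration = {a:.2f} m/s^2, tension = {T:.2f} N")
```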
|
CommonCrawl
|
Sequence of partial sums converges to sum of series
Say we have a series $a_0+a_1+a_2+\cdots+a_n+\cdots$ that has the partial sums $$S_0 = a_0,\qquad S_1 = a_0+a_1,\qquad \ldots,\qquad S_n = a_0+a_1+\cdots+a_n.$$
The series converges if $\lim_{n\to\infty} S_n = S$, where $S$ is the sum of the series, i.e. $S = \sum_{j=0}^\infty a_j$.
Why is the limit the sum of the series?
sequences-and-series limits
mavavilj
$\begingroup$ Actually this limit is the very definition of the sum of the series. $\endgroup$ – Bernard Sep 13 '15 at 13:57
$\begingroup$ Also for finite series? $\endgroup$ – mavavilj Sep 13 '15 at 13:57
$\begingroup$ There exist only finite sums. My terminology knows only of infinite series. $\endgroup$ – Bernard Sep 13 '15 at 13:59
By definition, indeed.
There is no ground to speak of the "sum" of a series, say $1 - 1 + 1 - 1 + \cdots$, for one may argue that $(1-1) + (1-1) + \cdots = 0$ and another may argue that $1 - (1-1) - (1-1) - \cdots = 1$, a blatant contradiction.
To talk about the sum of an infinite series in a meaningful way, a convention has been adopted, and that convention is precisely the definition you are asking about.
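Concretely, the partial sums of $1 - 1 + 1 - 1 + \cdots$ are
$$S_0 = 1,\quad S_1 = 0,\quad S_2 = 1,\quad S_3 = 0,\ \dots,$$
so $\lim_{n\to\infty} S_n$ does not exist and, under that definition, this series is simply assigned no sum.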
Megadeth
It's the definition of the "sum" of the series. It's a sensible definition in the sense that
If $S$ is the sum of the infinite series $\sum_n a_n$, then as you add up more and more of the terms $a_n$, the partial sums you get will be as close to $S$ as you want them to be.
That said, the "makes sense" is a justification, but not a "proof" that it is correct, since it is our decision to use that definition.
5xum
|
CommonCrawl
|
Intuition for Cohomology and Holes in a Space
I'm learning that the dimension of the $k^{th}$ De-Rham Cohomology Group (as a vector space over $\mathbb{R}$) of a space tells us the number of $k$-dimensional holes in the space. I always found this quite strange, given that the De-Rham Cohomology Group deals with differential forms, a very algebraic object, whereas holes are very geometric.
So my question is this: Is there an intuitive reason why differential forms on a space and holes in the space have anything to do with each other? Does the presence of holes force the differential form to "dodge" the hole, which in turn changes its properties in ways that we can detect? I know how to formally prove that forms detect holes; that's not what I'm looking for. I am looking for a deeper philosophical answer to the question: what is the intuitive relationship between differential forms and holes?
Thanks in advance. Appreciate the help!
differential-geometry algebraic-topology differential-forms
chaad
Do you know about the form $d\theta$ on $\mathbb{R}^2\setminus \{(0,0)\}$? This is the first example of dR cohomology detecting holes.
– Alekos Robotis
I see why, purely formally, the form $d \theta$ is closed but not exact on $\mathbb{R}^2 \setminus \{ (0,0) \}$. What I struggle to see, though, is an intuitive reason why holes give rise to forms that are closed but not exact. Or conversely: an intuitive reason why forms that are closed but not exact can detect holes.
– chaad
@Alekos Robotis Actually, the first example is $\frac{1}{x}$ on ${\bf R}\backslash \{0\}$.
– Peter Saveliev
In my opinion, this is a great question, because cohomology is actually not the right way to measure holes. The construction that measures holes is not cohomology, but just homology. The easiest example of a homology theory is simplicial homology, where you take a space which can be written as a union of simplices. You fix an ordering on the vertices for the sake of computation, and then you define the chain groups $C_k(X)$ to be the formal linear combinations of simplices in each dimension. The 'differential' decreases the dimension, and is simply the alternating sum of the boundary faces of that simplex. When a chain complex has a differential which decreases the degree rather than increases it, we tend to call it a 'boundary' operator, inspired by this theory.
This turns out to also define a chain complex, because the alternation forces the relation $$\partial^2 = 0$$ because if a simplex has vertices $0, ..., n$, then the boundary is $\sum_{i=0}^n (-1)^i(0,...,\hat{i}, ... n)$ where the hat denotes deletion. When you take the boundary twice, there are two ways to delete a pair of indices, and the difference between them will come with a difference in sign, so they'll cancel.
Now what do the homology groups of such a thing look like? Well, the things killed by the boundary operator are called cycles. The intuition for this is that if you have a loop and divide it into edges making a polygon, this is a simplicial complex, as described briefly above. Each vertex appears once as a head and once as a tail. So if I write my loop as a sum of these edges and take the boundary, everything cancels and you get 0.
On the other hand, the image of the boundary operator, well, these are just called boundaries, because they are the boundary of something. The quotient is the simplicial homology. It is exactly these that measure holes in a space. The loops which are not boundaries of something filling them in are the holes!
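As a small numerical aside (added here, not part of the original answer), one can see this on the hollow triangle: three vertices, three edges, and no 2-simplex filling it in. The ranks of the boundary matrices already recover one connected component and one 1-dimensional hole:

```python
import numpy as np

# Hollow triangle: vertices {0, 1, 2}, edges [0,1], [0,2], [1,2], and no 2-simplex.
# Boundary of an edge [i, j] (with i < j) is [j] - [i]; rows = vertices, columns = edges.
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])

n_vertices, n_edges = d1.shape
rank_d1 = np.linalg.matrix_rank(d1)
rank_d2 = 0                               # no 2-simplices, so the next boundary map is zero

dim_H0 = n_vertices - rank_d1             # dim ker(d0) - rank(d1), with d0 = 0
dim_H1 = (n_edges - rank_d1) - rank_d2    # dim ker(d1) - rank(d2)
print(dim_H0, dim_H1)                     # 1 1: one component, one loop bounding nothing
```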
The relationship between homology and cohomology in this way is spelled out over a beautiful collection of theorems forming what is, for me (as someone predominantly interested in geometry, where mostly I want these tools for computational purposes), the spine of algebraic topology. First, there are many other homology theories and cohomology theories. The most important theorem(s) relating them say that all the homology theories on reasonable spaces satisfying some very modest axioms compute the same algebraic invariant, so we are justified in just speaking of the homology or cohomology of a space.
On the other hand, homology and cohomology are also related to one another by some special theorems, such as Poincare duality and the Universal Coefficient Theorem.
The rough intuition for Poincare duality (and indeed, a fake proof was given by Poincare himself along these lines) was that there are two reasonable ways of formulating how to dualize a chain complex where the objects look like simplices. One is to replace the simplices with dual simplices, i.e. replace the vertices with top cells, the edges with second-to-top faces, and so on. The other way is the algebraic construction of a cohomology theory, where you form the dual module of every group in the chain complex, by taking $C^k := \hom(C_k, \mathbb{Z})$. It turns out that there is a dual pairing $C_k(M) \otimes C_{n-k}(M) \to \mathbb{Z}$ given by counting signed intersections, and this pairing can often induce an isomorphism between $k$ chains and $n-k$ cochains, since the boundary operation is essentially carrying all the same data as the incidence relation for the duals.
For this reason, when one is studying cohomology with differential forms, you are actually studying (by one of these theorems that says all cohomology theories that satisfy modest axioms are isomorphic!) the geometry of a dual triangulation of your space. This gives you intuition for why they should be similar, but can often be different. Especially when one is studying De Rham cohomology, because there, the coefficients are in the field $\mathbb{R}$, so the coefficients do not allow you to see 'half holes' such as in the projective space. In that space (if you haven't seen this example) there is a loop which does not bound a disk, but if you traverse the loop twice, now it does bound a disk. The general relationship between the different theories and their coefficients is the purpose of the UCT above. Changing the coefficients can change what your computations can 'see,' but this is another story.
Alfred Yerger
Wow! Thanks for taking the time to write up such a thorough answer. That definitely helps in understanding the similarities between homology and cohomology.
Gladly. The reason that we spend a lot of time emphasizing cohomology over homology is because cohomology ends up being a finer invariant. Even though it looks like they are similar, cohomology contains more data, because the separate cohomology groups can be combined into a ring. For differential forms, the wedge product tells you how to multiply forms. It's possible for spaces to have the same groups, but the product operation works out differently. This is the sorta thing that becomes manifestly clear the deeper you go into the subject.
– Alfred Yerger
The intuitive reason for why closed forms detect holes is the existence of Stokes' theorem. One of the consequences of Stokes is, for example, that for a closed $1$-form $\omega$ and a path $c$, the value of $\int_{c} \omega$ is invariant if we move $c$ around via a homotopy. If there is another path $c'$ which has a different value for $\int_{c'}\omega$, then we cannot move $c$ around and end at $c'$, thus "there is a hole" of some kind.
It is also arguably the formal reason, since the de Rham map \begin{align*} \Omega_n(X) &\to \mathrm{Hom}(C_n(X),\mathbb{R})\\ \omega &\mapsto (\sigma \mapsto \int_{\sigma}\omega) \end{align*} depends on Stokes to even be a chain map. The de Rham theorem then says it induces an isomorphism on the cohomology level, and since the right side is singular cohomology you recover the frequently used "holes" analogy.
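To make the example from the comments concrete (an illustration added here, not part of the original answer): on $\mathbb{R}^2\setminus\{(0,0)\}$ the $1$-form
$$\omega = \frac{-y\,dx + x\,dy}{x^{2}+y^{2}}$$
satisfies $d\omega = 0$, yet parametrizing the unit circle by $(\cos t, \sin t)$ gives
$$\oint_{S^{1}} \omega = \int_{0}^{2\pi} \bigl(\sin^{2}t + \cos^{2}t\bigr)\,dt = 2\pi \neq 0,$$
so by Stokes' theorem $\omega$ cannot be exact, and $S^1$ cannot be deformed to a point without leaving the punctured plane: the nonzero integral is exactly the form "seeing" the hole at the origin.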
Aloizio Macedo♦
That's a very nice intuitive way to think about it: the path $c$ can't deform to $c'$ because a hole is "in the way". Thanks for the reply!
|
CommonCrawl
|
Time complexity for a variant of edit distance
This question is about the following variant of edit distance. As usual, inserts, deletes and substitutions each cost 1, with one exception: substituting a given letter x by a letter y (i.e. changing an x into a y) only costs 1 the first time. Any further substitutions of x by y cost 0.
As simple examples:
A = apppple
B = attttle
It costs 1 (not 4) to transform A into B. This is because we change p to t four times, but we only charge for the first one.
A = apxpxple
B = atxyxtle
It costs 2 (not 3), as the p to t substitution only costs 1 even though we do it twice.
B = altttte
It costs 2: p -> l gives allllle, and then l -> t gives altttte.
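For intuition only (and not as a solver for the general problem): if the two strings have equal length and we restrict ourselves to direct, position-wise substitutions, the cost under this rule is just the number of distinct ordered mismatch pairs. A small sketch (the helper name is mine, not from the question) reproduces the first two example costs and shows why the third needs chaining:

```python
def direct_substitution_cost(a: str, b: str) -> int:
    """Cost of turning a into b position by position, charging each distinct
    ordered pair (x, y) with x != y exactly once."""
    assert len(a) == len(b)
    return len({(x, y) for x, y in zip(a, b) if x != y})

print(direct_substitution_cost("apppple", "attttle"))    # 1: only the pair (p, t) is charged
print(direct_substitution_cost("apxpxple", "atxyxtle"))  # 2: pairs (p, t) and (p, y)
print(direct_substitution_cost("apppple", "altttte"))    # 3: an upper bound only;
# chaining p -> l and then l -> t, as in the third example, achieves cost 2
```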
If we assume the total length of the input is n, what is the best time complexity?
I suspect the problem is NP-hard so I am not expecting a poly time solution.
Question previously asked at https://codegolf.stackexchange.com/questions/215148/edit-distance-where-a-substitution-only-costs-the-first-time
ds.algorithms edit-distance
Anush
Can you find a case where the sequence of operations found with the regular Levenshtein distance is not also optimal for this problem? – causative Nov 22 '20 at 6:11
Is the size of the alphabet constant? There's a straightforward dynamic programming algorithm whose running time is something like $O(n^2 2^{|\Sigma|^2})$, which is $O(n^2)$ if $|\Sigma|$ is a constant. – D.W. Nov 22 '20 at 6:34
@D.W. I should have said, we should assume the alphabet size can be as large as n. – Anush Nov 22 '20 at 7:06
My intuition says that it can be done in polynomial time, but I am too lazy to think of a way. – user21820 Nov 22 '20 at 7:40
Yep. I posted my answer :) – Mikhail Rudoy Dec 18 '20 at 11:07
Computing this type of edit distance is NP-complete, which I will prove below by reducing from the NP-complete problem Vertex Cover (given a graph $G$ and a number $k$, determine whether there exists a set of vertices $C$ with $|C|\le k$ such that each edge in $G$ has at least one vertex in $C$, aka a vertex cover of size at most $k$).
Helpful subproblem
Consider any sequence of edits between two strings (insertions/deletions/substitutions). This sequence of edits can be re-ordered so that the new sequence of edits has the same cost, but the substitutions happen after the insertions and deletions. We can separate the cost of the edits into two parts: an insertion/deletion cost and a substitution cost.
If you have already made a choice of what insertions/deletions to do starting at the source string, the insertion/deletion cost is already fixed, but there might be multiple different substitution strategies to get from that intermediate state to the target string. So one reasonable question is to ask what the optimal substitution strategy is at that point, and therefore what the optimal substitution cost is. Equivalently, this is asking for the edit distance using your distance metric but with only substitutions allowed.
So let's say we want to get from string $A = a_1a_2\ldots a_n$ to string $B = b_1b_2\ldots b_n$. Construct the directed graph $T_{A\to B}$, whose vertices are the characters used in these strings, and whose edges are the pairs $(u, v)$ such that $a_i = u \ne v = b_i$ for some $i$. In other words, $T_{A\to B}$ is the graph of the desired substitutions.
Then we have the following key fact: The optimal substitution cost is equal to the number of vertices in $T_{A\to B}$ minus the number of acyclic weakly connected components of that graph. This fact will be used in the analysis of the reduction. The rest of this section will prove it. If you are just interested in the NP-hardness proof, feel free to skip to the next section.
We want to know what substitutions to do, but each substitution has cost 1 no matter how many times we do it. In other words, we can look at every pair of characters $(u,v)$, and ask the question of whether we want to do the substitution of $u \to v$ at least once or not. This choice of which substitutions to do can be expressed as a directed graph, where we include edge $(u,v)$ if we plan to do the substitution of $u \to v$ at least once. Thus, the problem can be thought of as an optimization problem where we are looking for an optimal graph. What are we optimizing and what are our constraints? Well we are trying to minimize the substitution cost, so we are trying to choose a graph with a minimum possible number of edges. As for the constraints, each edge $(u,v)$ of $T_{A\to B}$ is a constraint saying that character $u$ must be converted into $v$ via a sequence of substitutions. In other words, the presence of edge $(u,v)$ in $T_{A\to B}$ means that we have the constraint that there must be a path from $u$ to $v$ in our graph.
Consider any weakly connected component of a candidate graph for this graph optimization problem. If the component is acyclic, then there must be at least $n-1$ edges where $n$ is the number of vertices in that component. On the other hand, if the component is cyclic, there must be at least $n$ edges. Therefore, the overall number of edges in a candidate graph has as a lower bound the number of vertices (which is fixed) minus the number of acyclic weakly connected components (which can vary among candidate graphs). If we could get an upper bound on the number of acyclic weakly connected components, we would have an overall lower bound on the number of edges (aka the substitution cost).
Now consider a weakly connected component of $T_{A\to B}$. If two vertices are in the same weakly connected component of $T_{A\to B}$, then they must also be in the same connected component of any candidate graph in our graph optimization problem (because edges in $T_{A\to B}$ correspond to paths in the candidate graph, thereby retaining connectivity). Therefore, each weakly connected component of our candidate graph must be a union of weakly connected components of $T_{A\to B}$. Thus, we see that the number of acyclic weakly connected components in $T_{A\to B}$ is an upper bound on the number of acyclic weakly connected components in our candidate graphs, which is exactly what we wanted. We have our lower bound on the substitution cost: it is at least the number of vertices in $T_{A\to B}$ minus the number of acyclic weakly connected components.
This is exactly the value that the key fact says is the optimal substitution cost, so all that's left is to demonstrate that this value can be achieved. We will do this as follows. Consider every acyclic weakly connected component of $T_{A\to B}$, one at a time. That component is a DAG, so we can find a topological sort. Then, we can include that topological sort as a path in our graph of substitutions (i.e. earliest element in the sort points to second earliest, etc...). This uses one fewer edge than the number of vertices in that component. It also satisfies the constraints for that component: if $u$ needs to be converted into $v$, then $(u,v)$ is an edge in the component; as a result $u$ must be before $v$ in the topological sort, and so the path of substitutions has as a subpath a path from $u$ to $v$. After doing this for all the acyclic weakly connected components, take all the other vertices, and arrange them in a cycle. This uses a number of edges equal to the number of vertices and satisfies the requirement since any of these vertices has a path to any other (by following the cycle). This graph satisfies all the requirements and has a total number of edges equal to the number of vertices minus the number of acyclic weakly connected components.
This concludes the proof of the key fact.
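A minimal sketch of this key fact (assuming, as in this section, that the two strings already have the same length so only substitutions are needed; the helper is illustrative and not part of the answer):

```python
from collections import defaultdict

def optimal_substitution_cost(a: str, b: str) -> int:
    """Key fact: number of vertices of T_{a->b} minus the number of its acyclic
    weakly connected components (a and b must have equal length)."""
    edges = {(x, y) for x, y in zip(a, b) if x != y}
    vertices = {v for e in edges for v in e}

    # Weakly connected components via union-find over the undirected edges.
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for x, y in edges:
        parent[find(x)] = find(y)

    # A component is acyclic iff its directed edges contain no directed cycle.
    succ = defaultdict(set)
    for x, y in edges:
        succ[x].add(y)
    colour = dict.fromkeys(vertices, 0)       # 0 = unseen, 1 = on stack, 2 = done

    def reaches_cycle(v):
        colour[v] = 1
        found = False
        for w in succ[v]:
            if colour[w] == 1:
                found = True
            elif colour[w] == 0 and reaches_cycle(w):
                found = True
        colour[v] = 2
        return found

    cyclic_components = set()
    for v in vertices:
        if colour[v] == 0 and reaches_cycle(v):
            cyclic_components.add(find(v))

    n_components = len({find(v) for v in vertices})
    return len(vertices) - (n_components - len(cyclic_components))

print(optimal_substitution_cost("apppple", "attttle"))  # 1
print(optimal_substitution_cost("apppple", "altttte"))  # 2: p -> l, then l -> t
```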
NP-hardness reduction
We wish to prove that computing the edit distance is NP-hard by reducing from Vertex Cover. That means our reduction starts with a given Vertex Cover instance: a graph $G$ and a number $k$. The reduction will output a pair of strings $(A, B)$ and a number $k'$ using polynomial time to compute these outputs. We will prove that the edit distance between $A$ and $B$ is at most $k'$ if and only if $G$ has a vertex cover of size at most $k$.
In particular, $k'$ will be $k + 4m$ where $m$ is the number of edges in $G$.
As for the strings $A$ and $B$, we will construct them out of several pieces $A = A_1A_2\ldots A_t, B = B_1B_2\ldots B_t$, where each pair $(A_i, B_i)$ is a gadget built for a specific purpose. The rest of this section will go over what kinds of gadgets we will use, how these gadgets will be combined together (i.e. how many of which gadget we will use and what characters will be used for the gadgets), and then a correctness proof showing that this reduction is correct (i.e. the edit distance between $A$ and $B$ is at most $k'$ if and only if $G$ has a vertex cover of size at most $k$).
The first kind of gadget we will use is an alignment gadget. This gadget consists of pair $(A_i, B_i)$ where $A_i$ consists of a large number of different characters that are not used anywhere else in $A$ or $B$ and $B_i = A_i$. The purpose of this gadget is to force alignment of these two sections of strings $A$ and $B$ via insertions/deletions. If these two sections don't align, then in the substitution step, there would need to be a number of different substitutions roughly equal to the number of characters in $A_i$. By making the number of characters large (much larger than $k'$), we can make it prohibitively expensive, in terms of substitution cost, to not align these sections of $A$ and $B$ during the insertion/deletion stage of the edits.
We will use alignment gadgets as separators between the other gadgets, which allows us to reason about each other gadget $(A_i, B_i)$ in isolation. That is, we will be forced, due to separator gadgets $(A_{i-1},B_{i-1})$ and $(A_{i+1},B_{i+1})$, to edit $A_i$ directly into $B_i$ (rather than possibly converting $A_i$ into some other part of string $B$ and converting some other part of $A$ into $B_i$).
The second gadget is a forced substitution gadget. This gadget simply consists of $(A_i, B_i)$ where $A_i$ and $B_i$ have the same (very large) length, $A_i$ consists entirely of copies of one character $u$ and $B_i$ consists of entirely copies of another character $v$. The duplication of these characters many times makes it so that even after using an insertion/deletion cost of up to $k'$, there is still at least one character $u$ that must be converted into a $v$. Since that conversion needs to happen anyway, the simplest way to solve this gadget is to just use a substitution cost of 1 to convert all the copies of $u$ into copies of $v$. However, it is possible to use some other substitution strategy to convert these characters (i.e. by converting to some third character before converting to $u$). All we really need to know here is that the insertion/deletion cost within this gadget can be as low as 0 and that the desired substitution $u \to v$ is forced, either directly or indirectly, by this gadget.
The final gadget is a choice gadget. Here $A_i = x_1ax_2$ and $B_i = bc$, where $x_1$ and $x_2$ are characters unique to this gadget (not used anywhere else), while $a$, $b$, and $c$ are characters potentially used elsewhere. Assuming this gadget is placed between two alignment gadgets, we need to convert $A_i$ into $B_i$. If this is done with an insertion/deletion cost of $1$, then we are left trying to convert either $x_1a$, $x_1x_2$, or $ax_2$ into $bc$. This can be accomplished with two substitutions, though the substitution cost can be anywhere between $0$ and $2$ depending on whether these substitutions can also be used elsewhere. Therefore, if using an insertion/deletion cost of $1$, the total cost in this gadget is at most $3$. On the other hand, the next smallest possible insertion/deletion cost in this gadget is $3$, which is already at least the cost of the three solutions listed above. Therefore, there is always an efficient solution of this gadget with only one deletion followed by substitutions. If we chose to delete $a$ and substitute $x_1x_2$ into $bc$, then both substitutions have cost $1$ since neither $x_1$ nor $x_2$ appear elsewhere in the strings; therefore, if we were to instead choose one of the other two possible deletions, the cost would be no worse. That is why this is a choice gadget: the remaining two choices involve either substituting $a$ into $b$ (when $ax_2 \to bc$) or into $c$ (when $x_1a \to bc$).
Arrangement of gadgets
As previously mentioned, we will use alignment gadgets as separators for all the other gadgets. In other words, every second gadget will be an alignment gadget. This leaves the question of which other gadgets should be used.
Consider every edge $e$ in $G$. We will use two characters $p_e$ and $q_e$ for this edge. For each such edge $e$, we will include two forced substitution gadgets forcing the substitution of $p_e$ into $q_e$ and the substitution of $q_e$ into $p_e$.
Next consider the vertices of $G$. If $v$ is a vertex, we will use $r_v$ as a character. For every edge $e = (u, v)$, we will create a choice gadget in which character $p_e$ must be substituted into either $r_u$ or $r_v$ (i.e. the choice gadget where $A_i = x_1p_ex_2$ and $B_i = r_ur_v$).
That concludes the actual construction of the reduction. Notably, these strings $A$ and $B$ and the value $k$ can easily be constructed from the vertex cover instance in polynomial time. What's left is to prove the correctness.
Suppose first that we have a vertex cover $C$ of $G$ with $|C| \le k$. Then we will prove that the edit distance between $A$ and $B$ is at most $k' = k + 4m$.
$A$ can be edited into $B$ as follows:
First, in the insertion/deletion step, delete one character from each choice gadget, for an insertion/deletion cost of $m$. In particular, for each edge $e = (u,v)$, we have the gadget $(A_i, B_i) = (x_1p_ex_2, r_ur_v)$ and we will delete either $x_1$ or $x_2$ as follows. Either $u$ or $v$ is in the vertex cover $C$. We will delete $x_1$ (thus requiring the substitution $p_e \to r_u$) if $u \in C$, and will delete $x_2$ (thus requiring the substitution $p_e \to r_v$) otherwise (in which case $v \in C$).
In the substitution step, we will do two things. First, we will look at each choice gadget, and we will convert either $x_1$ or $x_2$, whichever is still there, into the character it needs to turn into. This has a substitution cost of $m$. Second, we will take the characters $p_e$ and $q_e$ for every edge $e$ in $G$ and the characters $r_v$ for every vertex $v$ in $C$, and we will arrange these characters in a cycle, substituting each character for the next in the cycle. This has a substitution cost of $2m + |C|$, and allows any of these characters to turn into any other (by simply substituting along the cycle repeatedly).
This converts $A$ into $B$ with a total cost of $m+m+2m+|C| = 4m + |C| \le 4m + k = k'$. To confirm that this actually converts $A$ into $B$, let's consider every gadget. The alignment gadgets are successfully converted as long as the other gadgets are. The forced substitution gadgets are successfully converted since both $p_e$ and $q_e$ are in the cycle of substitutions. As for the choice gadgets, the necessary substitutions are either $x_1p_e \to r_ur_v$ (in a case where $v \in C$) or $p_ex_2 \to r_ur_v$ (when $u \in C$). The substitutions of $x_1$ or $x_2$ are explicitly handled. Therefore, we're left needing to substitute either $p_e \to r_v$ (when $v \in C$) or $p_e \to r_u$ (when $u \in C$). Note that in both cases, the character that $p_e$ needs to be converted into is in the cycle of substitutions, meaning that the above instructions successfully convert the choice gadgets as well.
This shows that if the answer to the vertex cover instance is "yes", then the edit distance between $A$ and $B$ is at most $k'$.
What about the reverse direction?
Let's say there is a way to edit $A$ into $B$ using at most cost $k'$. Suppose we have the solution with the least cost. When we analyzed the choice gadget, we saw that there is always an efficient (least cost) solution involving one deletion in that choice gadget, specifically deleting either $x_1$ or $x_2$ in the gadget. Thus we can assume WLOG that our least-cost solution has no insertions and exactly one deletion in each choice gadget and that the deletion is one of the two intended deletions. That's an insertion/deletion cost of $m$ in the choice gadgets alone. Therefore, the substitution cost must be at most $k' - m = 3m + k$.
Due to the alignment gadgets we know that the choice gadgets must be solved entirely by substitution after the one deletion in that gadget. Therefore, for each edge $e = (u,v)$, we either have a substitution of $x_1p_e \to r_ur_v$ or $p_ex_2 \to r_ur_v$. Define a set of vertices $C$ such that $v \in C$ if and only if there is a substitution $p_e \to r_v$ that needs to happen in some edge gadget. By the above, we see that $C$ must be a vertex cover (since each edge gadget contributes one of the endpoints of the edge to $C$).
We also know from the forced substitution gadgets that $p_e \to q_e$ and $q_e \to p_e$ are forced substitutions.
Let $A'$ be the state of the string after the insertion/deletion step and before the substitution step. At this point we will use the key fact from the first section to determine what the substitution cost must be. We know that there is a directed graph $T_{A'\to B}$ whose edges are the substitutions that are required to convert $A'$ into $B$. We know from the above that this graph includes the following edges:
$(p_e, q_e)$ and $(q_e, p_e)$ for every edge $e$ in $G$
Either $(x_1, r_u)$ and $(p_e, r_v)$, or $(x_2, r_v)$ and $(p_e, r_u)$ for each edge $e = (u,v)$ (where the $x_1$ and $x_2$ vertices are unique per edge $e$)
According to the key fact from the previous section, the substitution cost is equal to the number of vertices in this graph minus the number of acyclic weakly connected components. This value can never decrease if we add more vertices or edges, so computing this value for only those edges listed above is a lower bound on the substitution cost of our edit solution.
There are exactly $3m+n$ vertices listed above, where $n$ is the number of vertices in $G$ (one $p_e$ vertex per edge of $G$, one $q_e$ vertex per edge of $G$, one $x_1$ or $x_2$ per choice gadget--of which there are $m$, and one $r_v$ per vertex of $G$). What about the number of acyclic weakly connected components? Due to the cycle $p_e, q_e, p_e$, the connected components of vertices $p_e$ are never acyclic. As for each $r_v$, it either connects to some $p_e$, in the case that $v \in C$, or it does not connect to any $p_e$ if $v \not\in C$. In the former case, this vertex is not in an acyclic weakly connected component. In the latter case, the entire weakly connected component of $r_v$ is that vertex with some number of $x_1$ or $x_2$ vertices neighboring it. That is acyclic. As for the $x_1$ or $x_2$ vertices, each one neighbors an $r_v$ vertex, and its weakly connected component has therefore already been addressed one way or the other above. Thus, we see that each acyclic weakly connected component contains exactly one $r_v$ vertex and furthermore one where $v \not\in C$. Thus, the number of acyclic weakly connected components is equal to the number of vertices $v$ not in $C$. This equals $n - |C|$.
The substitution cost therefore has a lower bound of $(3m+n) - (n-|C|) = 3m+|C|$. But we saw before that the substitution cost must be at most $k' - m = 3m + k$. Therefore, we see that it must be the case that $|C| \le k$. Then $C$ is a vertex cover of size at most $k$, proving that the answer to the vertex cover instance is "yes".
We have shown via the two directions that $G$ has a vertex cover of size at most $k$ if and only if $A$ and $B$ have an edit distance of at most $k'$. This proves that the reduction is correct. Since we previously noted that it runs in polynomial time, this concludes the proof that computing edit distance (as you defined it) is NP-hard.
Mikhail Rudoy
Very nice! Thank you. At least my question wasn't too trivial :) – Anush Dec 18 '20 at 11:32
|
CommonCrawl
|
Homology of the Klein bottle using cellular homology
I am trying to calculate homology groups of the Klein bottle $K$ using cellular homology. $K$ has one $0$-cell, two $1$-cells and one $2$-cell:
$\require{AMScd}$ \begin{CD} x_0 @>{a}>> x_0 \\ @V{b}VV \circlearrowleft @A{b}AA \\ x_0 @>{a}>> x_0 \end{CD} (The circling arrow indicates an orientation on the 2-cell.)
So the cellular chain complex is of the form: \begin{equation} 0 \rightarrow \mathbb{Z} \xrightarrow[\text{}]{\delta_{2}} \mathbb{Z} \oplus \mathbb{Z}\xrightarrow[\text{}]{\delta_{1}} \mathbb{Z} \rightarrow 0\end{equation} where $\delta_1$ and $\delta_2$ are the boundary maps. I have trouble calculating those. In general, given an $i$-cell $\alpha$ with attaching map $\gamma_{\alpha}:S^{i-1} \rightarrow X^{(i-1)}$, and an $(i-1)$-cell $\beta$, we define $f_{\alpha,\beta}:S^{i-1} \rightarrow S^{i-1}$ to be the composition \begin{equation} S^{i-1} \xrightarrow[\text{}]{\gamma_{\alpha}} X^{(i-1)} \rightarrow X^{(i-1)}{/}X^{(i-2)}\xrightarrow[\text{}]{p_{\beta}} S_{\beta}^{i-1} \end{equation} I also know the Cellular Boundary Formula, so I think that I need to calculate the degrees of $f_{a,x_0}$ and $f_{b,x_0}$ to get $\delta_1$, and the degrees of $f_{ab,a}$ and $f_{ab,b}$ to get $\delta_2$. Can someone explain how to calculate the degrees? Is there a general strategy to quickly do so?
algebraic-topology
homology-cohomology
Eric Towers
billy192
For the 1-cell attaching maps the degree is easy: your attaching maps are constant because they are the maps $S^0 \to X^0 = \{x_0\}$ sending both endpoints of your $1$-cells to the $0$-cell. Thus both $\text{deg}f_{a,x_0}$ and $\text{deg}f_{b, x_0}$ are zero and thus $\delta_1 = 0$. It's worth noting that in general, if your $1$-skeleton ends up a wedge of circles the cellular boundary $\delta_1$ will always be zero, for this reason.
Now, I'll call the $2$-cell $e$, instead of $ab$ as you've done, so that when I refer to the $2$-cell it's a bit clearer (its attaching map will involve $a$'s and $b$'s so I don't want any confusion there).
$\delta_2$ also has a quick strategy for computation. The first step is usually to describe the attaching map for the $2$-cell in terms of the $1$-cells; in this case, we can say that the attaching map is $baba^{-1}$ (read off the edges in your polygon), corresponding to $\gamma: S^1 \to X^1 = S^1 \vee S^1$ that on the first quarter-circle of the domain traces the 1-cell $b$, on the second quarter-circle traces $a$, etc.
Now that we've done this, realize that $f_{e,a}$ is the attaching map restricted to $a$, so basically we're deleting $b$ from the formula for $\gamma$; this is the interpretation of the map $f_{\alpha,\beta}$ that you've described. This means that $\text{deg}f_{e,a}$ is the degree of the map described by $aa^{-1}$, which is constant. Similarly, $\text{deg}f_{e,b}$ is the degree of the map represented by $b^2$, which has degree $2$.
Therefore $\delta_1: \mathbb{Z} \oplus \mathbb{Z} \to \mathbb{Z}$ is the $0$ map and $\delta_2: \mathbb{Z} \to \mathbb{Z} \oplus \mathbb{Z}$ sends $1 \mapsto (2,0)$.
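For completeness (an added step, not spelled out in the original answer), the homology of the Klein bottle now follows directly from these boundary maps:
$$H_2(K)=\ker\delta_2 = 0,\qquad H_1(K)=\frac{\ker\delta_1}{\operatorname{im}\delta_2}=\frac{\mathbb{Z}\oplus\mathbb{Z}}{\langle(2,0)\rangle}\cong\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z},\qquad H_0(K)=\mathbb{Z}.$$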
kamills
Thank you for such a nice answer. It makes sense now!
– billy192
|
CommonCrawl
|
June 2012, 32(6): 2187-2205. doi: 10.3934/dcds.2012.32.2187
Existence of nontrivial solutions to Polyharmonic equations with subcritical and critical exponential growth
Nguyen Lam 1 and Guozhen Lu 1
Department of Mathematics, Wayne State University, Detroit, MI 48202, United States
Received April 2011 Revised June 2011 Published February 2012
The main purpose of this paper is to establish the existence of nontrivial solutions to semilinear polyharmonic equations with exponential growth at the subcritical or critical level. This growth condition is motivated by the Adams inequality [1] of Moser-Trudinger type. More precisely, we consider the semilinear elliptic equation \[ \left( -\Delta\right) ^{m}u=f(x,u), \] subject to the Dirichlet boundary condition $u=\nabla u=...=\nabla^{m-1}u=0$, on the bounded domains $\Omega\subset \mathbb{R}^{2m}$ when the nonlinear term $f$ satisfies exponential growth condition. We will study the above problem both in the case when $f$ satisfies the well-known Ambrosetti-Rabinowitz condition and in the case without the Ambrosetti-Rabinowitz condition. This is one of a series of works by the authors on nonlinear equations of Laplacian in $\mathbb{R}^2$ and $N-$Laplacian in $\mathbb{R}^N$ when the nonlinear term has the exponential growth and with a possible lack of the Ambrosetti-Rabinowitz condition (see [23], [24]).
Keywords: Polyharmonic operators, existence of nontrivial solutions, nonlinearity of exponential growth, Moser-Trudinger's inequality, Palais-Smale sequence, regularity of solutions, Adams' inequality, Ambrosetti-Rabinowitz condition.
Mathematics Subject Classification: Primary: 35J91, 35J3.
Citation: Nguyen Lam, Guozhen Lu. Existence of nontrivial solutions to Polyharmonic equations with subcritical and critical exponential growth. Discrete & Continuous Dynamical Systems - A, 2012, 32 (6) : 2187-2205. doi: 10.3934/dcds.2012.32.2187
David R. Adams, A sharp inequality of J. Moser for higher order derivatives, Ann. of Math. (2), 128 (1988), 385.
Adimurthi, Existence of positive solutions of the semilinear Dirichlet problem with critical growth for the n-Laplacian, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4), 17 (1990), 393.
Antonio Ambrosetti and Paul H. Rabinowitz, Dual variational methods in critical point theory and applications, J. Functional Analysis, 14 (1973), 349. doi: 10.1016/0022-1236(73)90051-7.
Gianni Arioli, Filippo Gazzola, Hans-Christoph Grunau and Enzo Mitidieri, A semilinear fourth order elliptic problem with exponential nonlinearity, SIAM J. Math. Anal., 36 (2005), 1226. doi: 10.1137/S0036141002418534.
Elvise Berchio, Filippo Gazzola and Enzo Mitidieri, Positivity preserving property for a class of biharmonic elliptic problems, J. Differential Equations, 229 (2006), 1. doi: 10.1016/j.jde.2006.04.003.
Elvise Berchio, Filippo Gazzola and Tobias Weth, Critical growth biharmonic elliptic problems under Steklov-type boundary conditions, Adv. Differential Equations, 12 (2007), 381.
Jiguang Bao, Nguyen Lam and Guozhen Lu, Existence and regularity of solutions to polyharmonic equations with critical exponential growth in the whole space, to appear.
Haim Brézis and Louis Nirenberg, Positive solutions of nonlinear elliptic equations involving critical Sobolev exponents, Comm. Pure Appl. Math., 36 (1983), 437. doi: 10.1002/cpa.3160360405.
Lennart Carleson and Sun-Yung A. Chang, On the existence of an extremal function for an inequality of J. Moser, Bull. Sci. Math. (2), 110 (1986), 113.
Sun-Yung A. Chang and Paul C. Yang, The inequality of Moser and Trudinger and applications to conformal geometry, Dedicated to the memory of Jurgen K. Moser, Comm. Pure Appl. Math., 56 (2003), 1135. doi: 10.1002/cpa.3029.
Giovanna Cerami, An existence criterion for the critical points on unbounded manifolds (Italian), Istit. Lombardo Accad. Sci. Lett. Rend. A, 112 (1978), 332.
Giovanna Cerami, On the existence of eigenvalues for a nonlinear boundary value problem (Italian), Ann. Mat. Pura Appl. (4), 124 (1980), 161. doi: 10.1007/BF01795391.
J. M. B. do Ó, Semilinear Dirichlet problems for the N-Laplacian in $\mathbb{R}^N$ with nonlinearities in the critical growth range, Differential Integral Equations, 9 (1996), 967.
D. E. Edmunds, D. Fortunato and E. Jannelli, Critical exponents, critical dimensions and the biharmonic operator, Arch. Rational Mech. Anal., 112 (1990), 269. doi: 10.1007/BF00381236.
Martin Flucher, Extremal functions for the Trudinger-Moser inequality in $2$ dimensions, Comment. Math. Helv., 67 (1992), 471. doi: 10.1007/BF02566514.
Luigi Fontana, Sharp borderline Sobolev inequalities on compact Riemannian manifolds, Comment. Math. Helv., 68 (1993), 415. doi: 10.1007/BF02565828.
D. G. de Figueiredo, O. H. Miyagaki and B. Ruf, Elliptic equations in $\mathbb{R}^2$ with nonlinearities in the critical growth range, Calc. Var. Partial Differential Equations, 3 (1995), 139.
Filippo Gazzola, Hans-Christoph Grunau and Marco Squassina, Existence and nonexistence results for critical growth biharmonic elliptic equations, Calc. Var. Partial Differential Equations, 18 (2003), 117.
Filippo Gazzola, Hans-Christoph Grunau and Guido Sweers, "Polyharmonic Boundary Value Problems. Positivity Preserving and Nonlinear Higher Order Elliptic Equations in Bounded Domains," Lecture Notes in Mathematics, 1991 (2010).
Hans-Christoph Grunau, Positive solutions to semilinear polyharmonic Dirichlet problems involving critical Sobolev exponents, Calc. Var. Partial Differential Equations, 3 (1995), 243.
Hans-Christoph Grunau and Guido Sweers, Classical solutions for some higher order semilinear elliptic equations under weak growth conditions, Nonlinear Anal., 28 (1997), 799. doi: 10.1016/0362-546X(95)00194-Z.
Omar Lakkis, Existence of solutions for a class of semilinear polyharmonic equations with critical exponential growth, Adv. Differential Equations, 4 (1999), 877.
Nguyen Lam and Guozhen Lu, Elliptic equations and systems with subcritical and critical exponential growth without the Ambrosetti-Rabinowitz condition, to appear.
Nguyen Lam and Guozhen Lu, $N$-Laplacian equations in $\mathbb{R}^N$ with subcritical and critical growth without the Ambrosetti-Rabinowitz condition, arXiv:1012.5489.
Nguyen Lam and Guozhen Lu, Existence and multiplicity of solutions to equations of N-Laplacian type with critical exponential growth in $\mathbb{R}^N$, Journal of Functional Analysis, 262 (2012), 1132. doi: 10.1016/j.jfa.2011.10.012.
M. Lazzo and P. G. Schmidt, Oscillatory radial solutions for subcritical biharmonic equations, J. Differential Equations, 247 (2009), 1479. doi: 10.1016/j.jde.2009.05.005.
Mark Leckband, Moser's inequality on the ball $B^n$ for functions with mean value zero, Comm. Pure Appl. Math., 58 (2005), 789. doi: 10.1002/cpa.20056.
Yuxiang Li, Moser-Trudinger inequality on compact Riemannian manifolds of dimension two, J. Partial Differential Equations, 14 (2001), 163.
Yuxiang Li, Extremal functions for the Moser-Trudinger inequalities on compact Riemannian manifolds, Sci. China Ser. A, 48 (2005), 618. doi: 10.1360/04ys0050.
Yuxiang Li and Cheikh B. Ndiaye, Extremal functions for Moser-Trudinger type inequality on compact closed $4$-manifolds, J. Geom. Anal., 17 (2007), 669. doi: 10.1007/BF02937433.
Yuxiang Li and Bernhard Ruf, A sharp Trudinger-Moser type inequality for unbounded domains in $\mathbb{R}^n$, Indiana Univ. Math. J., 57 (2008), 451. doi: 10.1512/iumj.2008.57.3137.
Kai-Chin Lin, Extremal functions for Moser's inequality, Trans. Amer. Math. Soc., 348 (1996), 2663. doi: 10.1090/S0002-9947-96-01541-3.
Guozhen Lu and Yunyan Yang, A sharpened Moser-Pohozaev-Trudinger inequality with mean value zero in $\mathbb{R}^2$, Nonlinear Anal., 70 (2009), 2992. doi: 10.1016/j.na.2008.12.022.
Guozhen Lu and Yunyan Yang, Adams' inequalities for bi-Laplacian and extremal functions in dimension four, Adv. Math., 220 (2009), 1135. doi: 10.1016/j.aim.2008.10.011.
Guozhen Lu and Yunyan Yang, Sharp constant and extremal function for the improved Moser-Trudinger inequality involving $L^p$ norm in two dimension, Discrete Contin. Dyn. Syst., 25 (2009), 963.
O. H. Miyagaki and M. A. S. Souto, Superlinear problems without Ambrosetti and Rabinowitz growth condition, J. Differential Equations, 245 (2008), 3628. doi: 10.1016/j.jde.2008.02.035.
Jurgen Moser, A sharp form of an inequality by N. Trudinger, Indiana Univ. Math. J., 20 (1971), 1077. doi: 10.1512/iumj.1971.20.20101.
S. I. Pohožaev, On the eigenfunctions of the equation $\Delta u+\lambda f(u)=0$ (Russian), Dokl. Akad. Nauk SSSR, 165 (1965), 36.
Patrizia Pucci and James Serrin, Critical exponents and critical dimensions for polyharmonic operators, J. Math. Pures Appl. (9), 69 (1990), 55.
Wolfgang Reichel and Tobias Weth, Existence of solutions to nonlinear, subcritical higher order elliptic Dirichlet problems, J. Differential Equations, 248 (2010), 1866. doi: 10.1016/j.jde.2009.09.012.
Bernard Ruf, A sharp Trudinger-Moser type inequality for unbounded domains in $\mathbb{R}^2$, J. Funct. Anal., 219 (2005), 340. doi: 10.1016/j.jfa.2004.06.013.
Neil S. Trudinger, On imbeddings into Orlicz spaces and some applications, J. Math. Mech., 17 (1967), 473.
Yunyan Yang, A sharp form of the Moser-Trudinger inequality on a compact Riemannian surface, Trans. Amer. Math. Soc., 359 (2007), 5761. doi: 10.1090/S0002-9947-07-04272-9.
Anouar Bahrouni. Trudinger-Moser type inequality and existence of solution for perturbed non-local elliptic operators with exponential nonlinearity. Communications on Pure & Applied Analysis, 2017, 16 (1) : 243-252. doi: 10.3934/cpaa.2017011
Vincenzo Ambrosio. Periodic solutions for a superlinear fractional problem without the Ambrosetti-Rabinowitz condition. Discrete & Continuous Dynamical Systems - A, 2017, 37 (5) : 2265-2284. doi: 10.3934/dcds.2017099
Eun Bee Choi, Yun-Ho Kim. Existence of nontrivial solutions for equations of $p(x)$-Laplace type without Ambrosetti and Rabinowitz condition. Conference Publications, 2015, 2015 (special) : 276-286. doi: 10.3934/proc.2015.0276
Changliang Zhou, Chunqin Zhou. Extremal functions of Moser-Trudinger inequality involving Finsler-Laplacian. Communications on Pure & Applied Analysis, 2018, 17 (6) : 2309-2328. doi: 10.3934/cpaa.2018110
Prosenjit Roy. On attainability of Moser-Trudinger inequality with logarithmic weights in higher dimensions. Discrete & Continuous Dynamical Systems - A, 2019, 39 (9) : 5207-5222. doi: 10.3934/dcds.2019212
Scott Nollet, Frederico Xavier. Global inversion via the Palais-Smale condition. Discrete & Continuous Dynamical Systems - A, 2002, 8 (1) : 17-28. doi: 10.3934/dcds.2002.8.17
Antonio Azzollini. On a functional satisfying a weak Palais-Smale condition. Discrete & Continuous Dynamical Systems - A, 2014, 34 (5) : 1829-1840. doi: 10.3934/dcds.2014.34.1829
Jun Wang, Junxiang Xu, Fubao Zhang. Homoclinic orbits for superlinear Hamiltonian systems without Ambrosetti-Rabinowitz growth condition. Discrete & Continuous Dynamical Systems - A, 2010, 27 (3) : 1241-1257. doi: 10.3934/dcds.2010.27.1241
Guozhen Lu, Yunyan Yang. Sharp constant and extremal function for the improved Moser-Trudinger inequality involving $L^p$ norm in two dimension. Discrete & Continuous Dynamical Systems - A, 2009, 25 (3) : 963-979. doi: 10.3934/dcds.2009.25.963
Xumin Wang. Singular Hardy-Trudinger-Moser inequality and the existence of extremals on the unit disc. Communications on Pure & Applied Analysis, 2019, 18 (5) : 2717-2733. doi: 10.3934/cpaa.2019121
A. Azzollini. Erratum to: "On a functional satisfying a weak Palais-Smale condition". Discrete & Continuous Dynamical Systems - A, 2014, 34 (11) : 4987-4987. doi: 10.3934/dcds.2014.34.4987
Nguyen Lam. Equivalence of sharp Trudinger-Moser-Adams Inequalities. Communications on Pure & Applied Analysis, 2017, 16 (3) : 973-998. doi: 10.3934/cpaa.2017047
Satoshi Hashimoto, Mitsuharu Ôtani. Existence of nontrivial solutions for some elliptic equations with supercritical nonlinearity in exterior domains. Discrete & Continuous Dynamical Systems - A, 2007, 19 (2) : 323-333. doi: 10.3934/dcds.2007.19.323
Tomasz Cieślak. Trudinger-Moser type inequality for radially symmetric functions in a ring and applications to Keller-Segel in a ring. Discrete & Continuous Dynamical Systems - B, 2013, 18 (10) : 2505-2512. doi: 10.3934/dcdsb.2013.18.2505
Kyril Tintarev. Is the Trudinger-Moser nonlinearity a true critical nonlinearity?. Conference Publications, 2011, 2011 (Special) : 1378-1384. doi: 10.3934/proc.2011.2011.1378
M. Eller. On boundary regularity of solutions to Maxwell's equations with a homogeneous conservative boundary condition. Discrete & Continuous Dynamical Systems - S, 2009, 2 (3) : 473-481. doi: 10.3934/dcdss.2009.2.473
Kanishka Perera, Marco Squassina. Bifurcation results for problems with fractional Trudinger-Moser nonlinearity. Discrete & Continuous Dynamical Systems - S, 2018, 11 (3) : 561-576. doi: 10.3934/dcdss.2018031
Leszek Gasiński, Nikolaos S. Papageorgiou. Three nontrivial solutions for periodic problems with the $p$-Laplacian and a $p$-superlinear nonlinearity. Communications on Pure & Applied Analysis, 2009, 8 (4) : 1421-1437. doi: 10.3934/cpaa.2009.8.1421
Felipe Riquelme. Ruelle's inequality in negative curvature. Discrete & Continuous Dynamical Systems - A, 2018, 38 (6) : 2809-2825. doi: 10.3934/dcds.2018119
S. S. Dragomir, C. E. M. Pearce. Jensen's inequality for quasiconvex functions. Numerical Algebra, Control & Optimization, 2012, 2 (2) : 279-291. doi: 10.3934/naco.2012.2.279
|
CommonCrawl
|
Workshop for Women in Computational Topology (WinCompTop)
Poster Session and Reception
Monday, August 15, 2016 - 5:30pm - 6:30pm
Persistent Homology for Pan-Genome Analysis
Brittany Terese Fasy (Montana State University)
Single Nucleotide Polymorphisms (SNPs), Insertions and Deletions (INDELs), and Structural Variations (SVs) are the basis of genetic variation among individuals and populations. Second and third generation high throughput-sequencing technologies have fundamentally changed our biological interpretation of genomes and notably have transformed analysis and characterization of the genome-wide variations present in a population or a particular environment. As a result of this revolution in the next generation sequencing technologies we now have a large volume of genome sequences of species that represent major phylogenetic clades. Having multiple, independent genomic assemblies from a species presents the opportunity to move away from a single reference per species, incorporating information from species across the phylogenomic range of the species into a pan-genomic reference that can better organize and query the underlying genetic variation. Tools have started to explore multiple genomes in bioinformatics analyses. Several tools have evolved to take advantage of information from multiple, closely related genomes (species, strains/lines) to perform bioinformatics analyses such as variant detection without the bias introduced from using a single reference. In this work, we address challenges and opportunities that arise from pan-genomics using graphical data structures. We consider the problem of computing the persistence of structures representing genomic variation from a graph/path data set. The particular application we are interested in is mining pan-genomic data sets.
Grid Presentation for Heegaard Floer Homology of a Double Branched Cover
Sayonita Ghosh Hajra (Hamline University)
Heegaard Floer homology is an invariant of a closed oriented 3-manifold. Because of its complex nature these homology groups are difficult to compute. Stipsicz gave a combinatorial description of a version of Heegaard Floer homology for a double branched cover of S^3. In this poster, we describe the algorithm and present code. As an example, we compute this homology group for a double branched cover of S^3 branched along the unknot.
Topological Data Analysis at PNNL
Emilie Purvine (Battelle Pacific Northwest National Laboratory)
Over the past three years the Pacific Northwest National Laboratory has been growing a portfolio in Topological Data Analysis and Modeling. This poster will lay out our portfolio in the area with the hopes of informing the community of our platform and building collaborations. Our current projects include:
-The use of persistent homology and HodgeRank to discover anomalies in time-evolving systems. For PH we form point clouds using statistics from a dynamic graph and look at when the barcodes of these point clouds differ significantly from that of a baseline point cloud. We use Wasserstein distance and other dissimilarities based on interval statistics. HodgeRank is used to discover rankings of sources and sinks in a directed graph. As the graph evolves these rankings may change and we consider anomalies to be when any node's rank differs significantly from its baseline rank. In particular we use these techniques to find attacks and instabilities in cyber systems.
-Sheaf theory for use in information integration. We model groups of interacting sensors as a topological space. The data that is returned by the sensors serves as the stalk space to define a sheaf. The cohomology of the sheaf identifies global sections - where all sensors are in agreement - and identifying loops in the base space may inform when some sensors need to be retasked. Included in this work is the measurement of local sections where sensors are partially in agreement, representation of uncertainty by relaxing sectional equality to produce approximate sections, and use of category theory to cast all stalks into vector spaces so that integration is more easily defined.
-A computational effort towards a robust scalable software suite for computational topology. We have found useful software in the community, but typically only one piece at a time, e.g. persistence is separate from sheaf theory which is separate from general homology. We hope to precipitate a community effort towards development of a suite of topological software tools which can be applied to small and large data sets alike.
Median Shapes
Altansuren Tumurbaatar (Washington State University)
We introduce new ideas for the average of a set of general shapes, which we represent as currents. Using the flat norm to measure the distance between currents, we present a mean and a median shape. In the setting of a finite simplicial complex, we demonstrate that the median shape can be found efficiently by solving a linear program.
Burn Time of a Medial Axis and its Applications
Erin Chambers (St. Louis University)
The medial axis plays a fundamental role in many shape matching and analysis tasks, but is widely known to be unstable to even small boundary perturbations. Significance measures to analyze and prune the medial axis of 2D and 3D shapes are well studied, but the majority of them in 3D are locally defined and are unable to maintain global topological properties when used for pruning. We introduce a global significance measure called the Burn Time, which generalizes the extended distance function (EDF) introduced in prior work. Using this function, we are able to generalize the classical notion of the erosion thickness measure over the medial axes of 2D shapes. We demonstrate the utility of these shape significance measures in extracting clean, shape-revealing and topology-preserving skeletons of 3D shapes, and discuss future directions and applications of this work.
This is based on joint work with Tao Ju, David Letscher, Kyle Sykes, and Yajie Yan, which appeared in SIGGRAPH 2016.
Homology of Generalized Generalized Configuration Spaces
Radmila Sazdanović (North Carolina State University)
The configuration space of n distinct points in a manifold is a well-studied object with lots of applications. Eastwood and Huggett relate homology of so-called graph configuration spaces to the chromatic polynomial of graphs. We describe a generalization of this approach from graphs to simplicial complexes. This construction yields, for each pair of a simplicial complex and a manifold, a simplicial chromatic polynomial that satisfies a version of deletion-contraction formula.
This is a joint work with A. Cooper and V. de Silva.
Analysis and Visualization of ALMA Data Cubes
Bei Wang (The University of Utah)
The availability of large data cubes produced by radio telescopes like the VLA and ALMA is leading to new data analysis challenges as current visualization tools are ill-prepared for the size and complexity of this data. Our project addresses this problem by using the notion of a contour tree from topological data analysis (TDA). The contour tree provides a mathematically robust technique with fine grain controls for reducing complexity and removing noise from data. Furthermore, to support scientific discovery, new visualizations are being designed to investigate these data and communicate their structures in a salient way: a process that relies on the direct input of astronomers.
Joint work with Paul Rosen (USF), Anil Seth (Utah Astronomy), Jeff Kern (NRAO), Betsy Mills (NRAO) and Chris Johnson (Utah)
Rips Filtrations for Quasi-metric Spaces (with Stability Results)
Katharine Turner (École Polytechnique Fédérale de Lausanne (EPFL))
Rips filtrations over a finite metric space and their corresponding persistent homology are prominent methods in Topological Data Analysis to summarize the shape of data. Crucial to their use is the stability result that says if $X$ and $Y$ are finite metric spaces then the (bottleneck) distance between persistence diagrams, barcodes or persistence modules constructed by the Rips filtration is bounded by $2d_{GH}(X,Y)$ (where $d_{GH}$ is the Gromov-Hausdorff distance). Using the asymmetry of the distance function, we give four different constructions analogous to the Rips filtration that capture different information about quasi-metric spaces. The first method is a one-parameter family of objects where, for a quasi-metric space $X$ and $a\in [0,1]$, we have a filtration of simplicial complexes $\{\mathcal{R}^a(X)_t\}_{t\in [0,\infty)}$ where $\mathcal{R}^a(X)_t$ is the clique complex containing the edge $[x,y]$ whenever $a\min \{d(x,y), d(y,x) \}+ (1-a)\max \{d(x,y), d(y,x)\}\leq t$. The second method is to construct a filtration $\{\mathcal{R}^{dir}(X)_t\}$ of ordered tuple complexes where tuple $(x_0, x_1, \ldots, x_p)\in \mathcal{R}^{dir}(X)_t$ if $d(x_i, x_j)\leq t$ for all $i\leq j$. Both our first two methods agree with the normal Rips filtration when applied to a metric space. The third and fourth methods use the associated filtration of directed graphs $\{D(X)_t\}$ where $x\to y$ is included in $D(X)_t$ when $d(x,y)\leq t$. Our third method builds persistence modules using the connected components of the graphs $D(X)_t$. Our fourth method uses the directed graphs $D_t$ to create a filtration of posets (where $x\leq y$ if there is a path from $x$ to $y$) and corresponding persistence modules using poset topology.
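As a small illustration of the first construction (a sketch of the edge-inclusion rule only, not code from the author), the edges of $\mathcal{R}^a(X)_t$ can be read off a finite, generally asymmetric, distance matrix:

```python
import numpy as np

def rips_a_edges(D, a, t):
    """Edges [x, y] with a*min(d(x,y), d(y,x)) + (1-a)*max(d(x,y), d(y,x)) <= t,
    for a quasi-metric given as an (asymmetric) distance matrix D."""
    lo, hi = np.minimum(D, D.T), np.maximum(D, D.T)
    score = a * lo + (1 - a) * hi
    n = D.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n) if score[i, j] <= t]

# Toy quasi-metric: going from point 0 to point 1 is cheap, coming back is expensive.
D = np.array([[0.0, 1.0, 3.0],
              [3.0, 0.0, 2.0],
              [4.0, 2.0, 0.0]])
print(rips_a_edges(D, a=1.0, t=1.5))  # [(0, 1)]  (symmetrise with the min)
print(rips_a_edges(D, a=0.0, t=1.5))  # []        (symmetrise with the max)
```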
Safer Roads Tomorrow Through Analyzing Today's Accidents*
Maia Grudzien (Montana State University)
The Bozeman Daily Chronicle quoted the city's head engineer as stating, "Even with property owners paying more to help Bozeman's street grid keep up with growth, the rate of development is out-pacing the city's ability to upgrade increasingly clogged intersections," the week of July 27, 2016. As infrastructure is strained by the growing population in Bozeman, the state of Montana, and nationwide, it falls into disrepair more quickly. The need for more efficient roadways is creating shorter design timelines, but the safety of the roadways must remain a top priority. This project has been looking at understanding accident-prone areas in Montana cities and towns by collecting data and mapping it throughout the region. Areas are then sorted by factors that could include density, clusters, city regions (i.e., sporting event complexes, shopping centers), etc. The goal of this project is to provide examples to engineers and city planners of safe and accident-prone roads and intersections that can be used to better build much needed infrastructure.
*This research is funded by NSF CCF grant 1618605, and the Montana State University USP program
Persistent Homology on Grassmann Manifolds for Analysis of Hyperspectral Movies
Lori Ziegelmeier (Macalester College)
We present an application of persistent homology to the detection of chemical plumes in hyperspectral movies of Long-Wavelength Infrared data which capture the release of a quantity of chemical into the air. Regions of interest within the hyperspectral data cubes are used to produce points on the real Grassmann manifold $G(k, n)$ (whose points parameterize the k-dimensional subspaces of $\mathbb{R}^n)$, contrasting our approach with the more standard framework in Euclidean space. An advantage of this approach is that it allows a sequence of time slices in a hyperspectral movie to be collapsed to a sequence of points in such a way that some of the key structure within and between the slices is encoded by the points on the Grassmann manifold. This motivates the search for topological structure, associated with the evolution of the frames of a hyperspectral movie, within the corresponding points on the Grassmann manifold. The proposed framework affords the processing of large data sets while retaining valuable discriminative information. In this paper, we discuss how embedding our data in the Grassmann manifold, together with topological data analysis, captures dynamical events that occur as the chemical plume is released and evolves.
Failing to understand how p-value corresponds to significance of evidence against null hypothesis?
I'm trying to understand the $p$-value, defined as $p = P(D\ge d\ |\ H_0)$, where $D$ is a discrepancy statistic, $d$ is the observed discrepancy, and $H_0$ is the null hypothesis.
As I understand it, $D$ gives a measure of how "inconsistent" the observed data is with the null hypothesis. $D = 0$ corresponds to the "best evidence" in support of the null hypothesis, whereas larger values of $D$ indicate that the data is less consistent with $H_0$.
So, in my textbook, we have the following table:
$p>0.10$ - No evidence against $H_0$.
$0.05 < p \le 0.10$ - Weak evidence against $H_0$.
$0.01 < p \le 0.05$ - Evidence against $H_0$.
$0.001 < p \le 0.01$ - Strong evidence against $H_0$.
$p \le 0.001$ - Very strong evidence against $H_0$.
My confusion with this correlation between $p$ and the strength of the evidence is that the $p$ value also depends on the observed data $d$.
To give some more context, the $p$-value is the probability that, given we assume the null hypothesis to be true, we observe a discrepancy greater than the initially observed discrepancy.
Edit: As @NuclearWang pointed out, these are all backwards for some reason. I'm not sure why.
Under my interpretation, if $p$ is small and $d$ is small, that's evidence supporting $H_0$, since the probability of even a moderately high discrepancy is very low, meaning discrepancies will generally be near $0$. (This is the opposite from the above list, where if $p$ is small then that's evidence against $H_0$)
Under my interpretation, if $p$ is large and $d$ is large, that's evidence against $H_0$, since if $p$ is large and $d$ is large then we still have a very high probability of discrepancies that are very far from 0, which is very inconsistent with $H_0$. (This is the opposite from the list above, where if $p$ is large then there is no evidence against $H_0$)
If $p$ is large and $d$ is small, then that's evidence against $H_0$ (sorta), since it means the discrepancies are more concentrated away from the origin. But, this could also be confusing because $p$ naturally gets closer to $1$ as $d$ gets closer to $0$ (that is, if we conducted an experiment where our initial data gave a discrepancy of $0$), meaning we could also use this as a lack of evidence against $H_0$.
However, what if $p$ is small and $d$ is large? If $d$ is large, then the probability of getting a discrepancy larger than $d$ is small regardless of $H_0$, since there are just fewer values that $d$ can take on, so of course $p$ is small. This isn't evidence for or against $H_0$.
I feel like it would be more worthwhile to analyze $p$ as a distribution of the sampling data. For example, we could take $p(\mathbf Y) = P(D > D(\mathbf Y)\ |\ H_0)$, and then determine if $p$ is more concentrated near its tails, or near 0 or whatnot. For example, $p$ would look linear if the probability of getting any discrepancy was equal (that is, $D$ is uniformly distributed), which is strong evidence against $H_0$, right?
Edit: I just realized that there's probably a problem with my "$p(\mathbf Y)$" above, and that's that we're assuming we know the distribution of $\mathbf Y$ beforehand, when instead that's what we're testing (I think...). So the statement $P(D > D(\mathbf Y)\ |\ H_0)$ is kind of meaningless.
So, all in all, am I misinterpreting something? Are my thoughts and ideas ok or are they way off?
hypothesis-testing p-value
$\begingroup$ I think you might have "small" and "large" p-values backwards. Small p-values (more significant, closer to 0) are always evidence against the null hypothesis H0. Not sure what you mean about the "values that d can take", d is an observed variable that's fixed by your data. $\endgroup$ – Nuclear Wang Feb 27 '18 at 20:52
$\begingroup$ @NuclearWang Huh, I do have them backwards. Now I'm even more confused, I thought my interpretation for the first three was alright, but I guess they're all wrong. I know that $d$ is an observed variable, but what if we were to run the experiment and we "accidentally" got a very large value of $d$ even when thats statistically very unlikely, and then we got a very small $p$ value. We would conclude that it's "very strong evidence against $H_0$" when it's not really, right? Is the fault in my interpretation of "very strong evidence against"? $\endgroup$ – user3002473 Feb 27 '18 at 20:56
$\begingroup$ Take a look at stats.stackexchange.com/questions/31. $\endgroup$ – whuber♦ Feb 27 '18 at 21:03
$\begingroup$ p-values are only small when $d$ is big relative to what you would expect under $H_0$. So it's not clear what you mean by considering cases when the test statistic is small but the p-value is also small. The p-value is a function of the test statistic. $\endgroup$ – Jonny Lomond Feb 27 '18 at 21:12
The p-value depends on the data because it is a summary of the data: a summary of the strength, according to the statistical model, of the discrepancy between the data and what would be expected when the model parameter(s) are set to the value(s) corresponding to the null hypothesis. When the p-value is small, an interesting discrepancy is indicated.
The discrepancy may be due to the null hypothesis being far from the correct value of the parameter OR due to the model being badly matched to the real-world data generating process. Textbooks do not usually (ever?) tell you that last bit, but it is really important.
Notice that once the null hypothesis is appropriately linked to a model parameter, it becomes logical to think about the evidence in the data as a function of values of the model parameter. That is what you have proposed with your $p(\mathbf Y)=P(D>D(\mathbf Y)\ |\ H_0)$, and it is an excellent idea.
The function that some feel is the most appropriate expression of the evidence concerning the values of the parameter is a likelihood function, but others have suggested various alternatives including a p-value function. Likelihood functions are rarely discussed in textbooks—an alarming shortcoming, in my opinion—and so you will need a different resource. I don't recommend the Wikipedia page unless you are mathematically adept, so see if your library has a book called Likelihood by Edwards or Statistical Evidence: a Likelihood Paradigm by Royall.
Michael Lew
$\begingroup$ Oh! I think you just made me understand. So, for example, if $p$ is small, then this is strong evidence against $H_0$ because it's very unlikely that we would have gotten an initial reading of $d$ at all! So $H_0$ is probably a false assumption. So, with your help, I think I've pinpointed where the fault in my understanding is. It's not so much about the distribution of all discrepancies, but more so about the probability of observing the initial data $d$ at all, given $H_0$. How is that interpretation? $\endgroup$ – user3002473 Feb 27 '18 at 21:29
$\begingroup$ I'm glad to have helped with your confusion, and your interpretation is pretty good. However, you seem to have missed the caveat about the possibility that the model is inappropriate, and I have to point out that a statement about the probability of the null being false should be based on a Bayesian analysis because it depends on the probability of the null being true independent of the evidence (that's the 'prior'). It's all much more complicated than beginners (and textbook writers) might think. $\endgroup$ – Michael Lew Feb 28 '18 at 0:58
where D is a discrepancy statistic, d is the observed discrepancy. As I understand it, D gives a measure of how "inconsistent" the observed data is with the null hypothesis.
No, d is the discrepancy statistic and a measure of inconsistency. D is just a dummy variable used for calculating p.
D=0 corresponds for the "best evidence" to support the null hypothesis, whereas larger and values of D indicate that the data is less consistent with H0.
d=0 is best evidence for the null in a two-tailed test, but for a one-tailed test, the best evidence for the null is as far away from the tail as possible. In this case, since you're testing the right tail, the best evidence for the null would be $-\infty$.
That table's not quite right.
I can understand how, if p is small and d is small, that's evidence supporting H0, since the probability of even a moderately high discrepancy is very low, meaning discrepancies will generally be near 0.
This sort of hypothesis testing either rejects or fails to reject the null hypothesis. One is on shaky grounds claiming that low p provides "evidence" against the null, and even shakier ground claiming that a high p provides evidence for the null.
I can understand how, if p is large and d is large, that's evidence against H0, since if p is large and d is large then we still have a very high probability of discrepancies that are very far from 0, which is very inconsistent with H0.
As @Nuclear Wang said, you have it backwards.
[taken from comments]
what if we were to run the experiment and we "accidentally" got a very large value of d even when thats statistically very unlikely, and then we got a very small p value. We would conclude that it's "very strong evidence against H0" when it's not really, right?
The p value is a measure of how likely "accidentally" getting a large d is. You can get a large d either from the null hypothesis being false, or from being "unlucky". The smaller the p is, the less chance you have of being "unlucky", and therefore the more confident you can be that you got it from the hypothesis being false.
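To make that concrete, here is a minimal Monte Carlo sketch (an illustration only, assuming a one-sided z-test so that the discrepancy statistic $D$ is standard normal under $H_0$; the observed $d$ is a hypothetical value). The estimated $p$ is just the fraction of simulated discrepancies that are at least as large as the observed $d$, so a larger observed $d$ forces a smaller $p$.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Standard normal draw via the Box-Muller transform. */
static double std_normal(void)
{
    const double PI = 3.14159265358979323846;
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
}

int main(void)
{
    const int n_sims = 200000;  /* replications of the experiment under H0 */
    const double d = 2.0;       /* hypothetical observed discrepancy */
    int exceed = 0;

    srand(42);
    for (int i = 0; i < n_sims; i++) {
        if (std_normal() >= d)  /* D is N(0,1) when H0 is true */
            exceed++;
    }
    /* Estimated p = P(D >= d | H0); for d = 2.0 this is roughly 0.023,
       so the larger the observed d, the smaller the p-value. */
    printf("estimated p = %.4f\n", (double)exceed / n_sims);
    return 0;
}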
In Bayesian terms, E is evidence for H if P(E|H)>P(E|~H). So to say whether E is evidence for H, one has to have both the probability of E if the null hypothesis is true and the probability of E if it is false, and with this sort of hypothesis testing the latter is not as well defined. Note, however, that whether something is "evidence" depends on its conditional probabilities. It does not depend on whether the hypothesis is actually true. A high d would be evidence against H0. If you later found out that H0 was true all along, that doesn't mean that the d wasn't evidence against H0. Evidence is about what probabilistic conclusions you can derive from the evidence at hand, not what's true from an omniscient perspective. If we had an omniscient perspective, we wouldn't need to be doing any of this in the first place.
For example, we could take p(Y)=P(D>D(Y) | H0)
What does that mean? What's D? What's D(Y)?
For example, p would look linear if the probability of getting any discrepancy was equal (that is, D is uniformly distributed), which is strong evidence against H0, right
If p is a function, then it does not depend on the observed data (and here I am making a distinction between a function varying and the value of a function varying), so it can't possibly be evidence for or against H0.
Acccumulation
$\begingroup$ My textbook defines $D$ as, quote, "a function of the data $Y$ that measures the 'agreement' between $Y$ and the null hypothesis." It also defines $d = D(\mathbf y)$ to be the observed discrepancy. Also, what do you mean by "that table's not quite right?" It's not right in that its not giving the truest picture? $\endgroup$ – user3002473 Feb 27 '18 at 21:52
$\begingroup$ Sorry, your last sentence does not make sense. P-values depend on what value is set as the null hypothesis and so they can be expressed as a function of the value set to be the null hypothesis. P-values are calculated from the data (usually via the test statistic) and so they are dependent on the data. $\endgroup$ – Michael Lew Feb 28 '18 at 0:54
$\begingroup$ @Michael Lew Your comment utterly fails to articulate anything that contradicts my last sentence. You seem to first be confusing p() and P-value, and second just completely ignoring the part of my sentence in parentheses. $\endgroup$ – Acccumulation Feb 28 '18 at 15:40
$\begingroup$ I don't see it. Would some editing help? $\endgroup$ – Michael Lew Feb 28 '18 at 20:21
A 13-digit ISBN, 978-3-16-148410-0, as represented by an EAN-13 bar code
Introduced: 1970
Managing organisation: International ISBN Agency
No. of digits: 13 (formerly 10)
Check digit: Weighted sum
The International Standard Book Number (ISBN) is a unique[lower-alpha 1][lower-alpha 2] numeric commercial book identifier. Publishers purchase ISBNs from an affiliate of the International ISBN Agency.[1]
An ISBN is assigned to each edition and variation (except reprintings) of a book. For example, an e-book, a paperback and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country.
The initial ISBN configuration of recognition was generated in 1967 based upon the 9-digit Standard Book Numbering (SBN) created in 1966. The 10-digit ISBN format was developed by the International Organization for Standardization (ISO) and was published in 1970 as international standard ISO 2108 (the SBN code can be converted to a ten digit ISBN by prefixing it with a zero).
Privately published books sometimes appear without an ISBN. The International ISBN agency sometimes assigns such books ISBNs on its own initiative.[2]
Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines; and the International Standard Music Number (ISMN) covers for musical scores.
The Standard Book Numbering (SBN) code is a 9-digit commercial book identifier system created by Gordon Foster, Emeritus Professor of Statistics at Trinity College, Dublin,[3] for the booksellers and stationers WHSmith and others in 1965.[4] The ISBN configuration of recognition was generated in 1967 in the United Kingdom by David Whitaker[5] (regarded as the "Father of the ISBN"[6]) and in 1968 in the US by Emery Koltay[5] (who later became director of the U.S. ISBN agency R.R. Bowker).[6][7][8]
The 10-digit ISBN format was developed by the International Organization for Standardization (ISO) and was published in 1970 as international standard ISO 2108.[4][5] The United Kingdom continued to use the 9-digit SBN code until 1974. ISO has appointed the International ISBN Agency as the registration authority for ISBN worldwide and the ISBN Standard is developed under the control of ISO Technical Committee 46/Subcommittee 9 TC 46/SC 9. The ISO on-line facility only refers back to 1978.[9]
An SBN may be converted to an ISBN by prefixing the digit "0". For example, the second edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has "SBN 340 01381 8" – 340 indicating the publisher, 01381 their serial number, and 8 being the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated.
Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with "Bookland" European Article Number EAN-13s.[10]
An ISBN is assigned to each edition and variation (except reprintings) of a book. For example, an ebook, a paperback, and a hardcover edition of the same book would each have a different ISBN.[11] The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before 2007. An International Standard Book Number consists of 4 parts (if it is a 10 digit ISBN) or 5 parts (for a 13 digit ISBN):
for a 13-digit ISBN, a prefix element – a GS1 prefix: so far 978 or 979 have been made available by GS1,[12]
the registration group element (language-sharing country group, individual country or territory),[13]
the registrant element,
the publication element,[12] and
a checksum character or check digit.[12]
A 13-digit ISBN can be separated into its parts (prefix element, registration group, registrant, publication and check digit), and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts (registration group, registrant, publication and check digit) of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits.[14]
How ISBNs are issued
ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory regardless of the publication language. The ranges of ISBNs assigned to any particular country are based on the publishing profile of the country concerned, and so the ranges will vary depending on the number of books and the number, type, and size of publishers that are active. Some ISBN registration agencies are based in national libraries or within ministries of culture and thus may receive direct funding from government to support their services. In other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the stated purpose of encouraging Canadian culture.[15] In the United Kingdom, United States, and some other countries, where the service is provided by non-government-funded organisations, the issuing of ISBNs requires payment of a fee.
Australia: ISBNs are issued by the commercial library services agency Thorpe-Bowker,[16] and prices range from $42 for a single ISBN (plus a $55 registration fee for new publishers) to $2,890 for a block of 1,000 ISBNs. Access is immediate when requested via their website.[17]
Brazil: National Library of Brazil, a government agency, is responsible for issuing ISBNs, and there is a cost of R$16 [18]
Canada: Library and Archives Canada, a government agency, is responsible for issuing ISBNs, and there is no cost. Works in French are issued an ISBN by the Bibliothèque et Archives nationales du Québec.
Colombia: Cámara Colombiana del Libro, a NGO, is responsible for issuing ISBNs. Cost of issuing an ISBN is about USD 20.
Hong Kong: The Books Registration Office (BRO), under the Hong Kong Public Libraries, issues ISBNs in Hong Kong. There is no fee.[19]
India: The Raja Rammohun Roy National Agency for ISBN (Book Promotion and Copyright Division), under Department of Higher Education, a constituent of the Ministry of Human Resource Development, is responsible for registration of Indian publishers, authors, universities, institutions, and government departments that are responsible for publishing books.[20] There is no fee associated in getting ISBN in India.[21]
Italy: The privately held company EDISER srl, owned by Associazione Italiana Editori (Italian Publishers Association) is responsible for issuing ISBNs.[22] The original national prefix 978-88 is reserved for publishing companies, starting at €49 for a ten-codes block[23] while a new prefix 979-12 is dedicated to self-publishing authors, at a fixed price of €25 for a single code.
Maldives: The National Bureau of Classification (NBC) is responsible for ISBN registrations for publishers who are publishing in the Maldives.[citation needed]
Malta: The National Book Council (Maltese: Kunsill Nazzjonali tal-Ktieb) issues ISBN registrations in Malta.[24][25][26]
Morocco: The National Library of Morocco is responsible for ISBN registrations for publishing in Morocco and Moroccan-occupied portion of Western Sahara.
New Zealand: The National Library of New Zealand is responsible for ISBN registrations for publishers who are publishing in New Zealand.[27]
Pakistan: The National Library of Pakistan is responsible for ISBN registrations for Pakistani publishers, authors, universities, institutions, and government departments that are responsible for publishing books.
Philippines: The National Library of the Philippines is responsible for ISBN registrations for Philippine publishers, authors, universities, institutions, and government departments that are responsible for publishing books. As of 2017, a fee of ₱120.00 per title was charged for the issuance of an ISBN.[28]
South Africa: The National Library of South Africa is responsible for ISBN issuance for South African publishing institutions and authors.
United Kingdom and Republic of Ireland: The privately held company Nielsen Book Services Ltd, part of Nielsen Holdings N.V., is responsible for issuing ISBNs in blocks of 10, 100 or 1000. Prices start from £120 (plus VAT) for the smallest block on a standard turnaround of ten days.[29]
United States: In the United States, the privately held company R.R. Bowker issues ISBNs.[5] There is a charge that varies depending upon the number of ISBNs purchased, with prices starting at $125 for a single number. Access is immediate when requested via their website.[30]
Publishers and authors in other countries obtain ISBNs from their respective national ISBN registration agency. A directory of ISBN agencies is available on the International ISBN Agency website.
Registration group identifier
The registration group identifier is a 1- to 5-digit number that is valid within a single prefix element (i.e. one of 978 or 979).[12] Registration group identifiers have primarily been allocated within the 978 prefix element.[31] The single-digit group identifiers within the 978 prefix element are: 0 or 1 for English-speaking countries; 2 for French-speaking countries; 3 for German-speaking countries; 4 for Japan; 5 for Russian-speaking countries; and 7 for People's Republic of China. An example 5-digit group identifier is 99936, for Bhutan. The allocated group IDs are: 0–5, 600–621, 7, 80–94, 950–989, 9926–9989, and 99901–99976.[32] Books published in rare languages typically have longer group identifiers.[33]
Within the 979 prefix element, the registration group identifier 0 is reserved for compatibility with International Standard Music Numbers (ISMNs), but such material is not actually assigned an ISBN.[12] The registration group identifiers within prefix element 979 that have been assigned are 10 for France, 11 for the Republic of Korea, and 12 for Italy.[34]
The original 9-digit standard book number (SBN) had no registration group identifier, but prefixing a zero (0) to a 9-digit SBN creates a valid 10-digit ISBN.
Registrant element
The national ISBN agency assigns the registrant element (cf. Category:ISBN agencies) and an accompanying series of ISBNs within that registrant element to the publisher; the publisher then allocates one of the ISBNs to each of its books. In most countries, a book publisher is not required by law to assign an ISBN; however, most bookstores only handle ISBN bearing publications.[citation needed]
A listing of more than 900,000 assigned publisher codes is published, and can be ordered in book form (€1399, US$1959). The web site of the ISBN agency does not offer any free method of looking up publisher codes.[35] Partial lists have been compiled (from library catalogs) for the English-language groups: identifier 0 and identifier 1.
Publishers receive blocks of ISBNs, with larger blocks allotted to publishers expecting to need them; a small publisher may receive ISBNs of one or more digits for the registration group identifier, several digits for the registrant, and a single digit for the publication element. Once that block of ISBNs is used, the publisher may receive another block of ISBNs, with a different registrant element. Consequently, a publisher may have different allotted registrant elements. There also may be more than one registration group identifier used in a country. This might occur once all the registrant elements from a particular registration group have been allocated to publishers.
By using variable block lengths, registration agencies are able to customise the allocations of ISBNs that they make to publishers. For example, a large publisher may be given a block of ISBNs where fewer digits are allocated for the registrant element and many digits are allocated for the publication element; likewise, countries publishing many titles have few allocated digits for the registration group identifier and many for the registrant and publication elements.[36] Here are some sample ISBN-10 codes, illustrating block length variations.
ISBN            Country or area            Publisher
99921-58-10-7   Qatar                      NCCAH, Doha
9971-5-0210-0   Singapore                  World Scientific
960-425-059-0   Greece                     Sigma Publications
80-902734-1-6   Czech Republic; Slovakia   Taita Publishers
85-359-0277-5   Brazil                     Companhia das Letras
1-84356-028-3   English-speaking area      Simon Wallenberg Press
0-684-84328-5   English-speaking area      Scribner
0-8044-2957-X   English-speaking area      Frederick Ungar
0-85131-041-9   English-speaking area      J. A. Allen & Co.
0-943396-04-2   English-speaking area      Willmann–Bell
0-9752298-0-X   English-speaking area      KT Publishing
Pattern for English language ISBNs
English-language registration group elements are 0 and 1 (2 of more than 220 registration group elements). These two registration group elements are divided into registrant elements in a systematic pattern, which allows their length to be determined, as follows:[37]
Publication element length | 0 – Registration group element (from, to, registrants) | 1 – Registration group element (from, to, registrants) | Total registrants
6 digits | 0-00-xxxxxx-x to 0-19-xxxxxx-x (20) | 1-00-xxxxxx-x to 1-09-xxxxxx-x (10) | 30
5 digits | 0-200-xxxxx-x to 0-699-xxxxx-x (500) | 1-100-xxxxx-x to 1-399-xxxxx-x (300) | 800
4 digits | 0-7000-xxxx-x to 0-8499-xxxx-x (1,500) | 1-4000-xxxx-x to 1-5499-xxxx-x (1,500) | 3,000
3 digits | 0-85000-xxx-x to 0-89999-xxx-x (5,000) | 1-55000-xxx-x to 1-86979-xxx-x (31,980) | 36,980
2 digits | 0-900000-xx-x to 0-949999-xx-x (50,000) | 1-869800-xx-x to 1-998999-xx-x (129,200) | 179,200
1 digit | 0-9500000-x-x to 0-9999999-x-x (500,000) | 1-9990000-x-x to 1-9999999-x-x (10,000) | 510,000
Check digits
A check digit is a form of redundancy check used for error detection, the decimal equivalent of a binary check bit. It consists of a single digit computed from the other digits in the number. The method for the ten-digit code is an extension of that for SBNs; the two systems are compatible, and an SBN prefixed with "0" will give the same check digit as without it. The check digit is base eleven, and can be 0-9 or X. The system for thirteen-digit codes is not compatible and will, in general, give a different check digit from the corresponding 10-digit ISBN, and it does not provide the same protection against transposition. This is because the thirteen-digit code was required to be compatible with the EAN format, and hence could not contain an "X".
ISBN-10 check digits
The 2001 edition of the official manual of the International ISBN Agency says that the ISBN-10 check digit[38] – which is the last digit of the ten-digit ISBN – must range from 0 to 10 (the symbol X is used for 10), and must be such that the sum of all the ten digits, each multiplied by its (integer) weight, descending from 10 to 1, is a multiple of 11.
For example, for an ISBN-10 of 0-306-40615-2:
$ \begin{align} s &= (0\times 10) + (3\times 9) + (0\times 8) + (6\times 7) + (4\times 6) + (0\times 5) + (6\times 4) + (1\times 3) + (5\times 2) + (2\times 1) \\ &= 0 + 27 + 0 + 42 + 24 + 0 + 24 + 3 + 10 + 2\\ &= 132 = 12\times 11 \end{align} $
Formally, using modular arithmetic, we can say:
$ (10x_1+9x_2+8x_3+7x_4+6x_5+5x_6+4x_7+3x_8+2x_9+x_{10})\equiv 0 \pmod{11}. $
It is also true for ISBN-10's that the sum of all the ten digits, each multiplied by its weight in ascending order from 1 to 10, is a multiple of 11. For this example:
$ \begin{align} s &= (0\times 1) + (3\times 2) + (0\times 3) + (6\times 4) + (4\times 5) + (0\times 6) + (6\times 7) + (1\times 8) + (5\times 9) + (2\times 10) \\ &= 0 + 6 + 0 + 24 + 20 + 0 + 42 + 8 + 45 + 20\\ &= 165 = 15\times 11 \end{align} $
Formally, we can say:
$ (x_1 + 2x_2 + 3x_3 + 4x_4 + 5x_5 + 6x_6 + 7x_7 + 8x_8 + 9x_9 + 10x_{10})\equiv 0 \pmod{11}. $
The two most common errors in handling an ISBN (e.g., typing or writing it) are a single altered digit or the transposition of adjacent digits. It can be proved that all possible valid ISBN-10's have at least two digits different from each other. It can also be proved that there are no pairs of valid ISBN-10's with eight identical digits and two transposed digits. (These are true only because the ISBN is less than 11 digits long, and because 11 is a prime number.) The ISBN check digit method therefore ensures that it will always be possible to detect these two most common types of error, i.e. if either of these types of error has occurred, the result will never be a valid ISBN – the sum of the digits multiplied by their weights will never be a multiple of 11. However, if the error occurs in the publishing house and goes undetected, the book will be issued with an invalid ISBN.[39]
In contrast, it is possible for other types of error, such as two altered non-transposed digits, or three altered digits, to result in a valid ISBN (although it is still unlikely).
ISBN-10 check digit calculation
Each of the first nine digits of the ten-digit ISBN (excluding the check digit itself) is multiplied by its (integer) weight, descending from 10 to 2, and the sum of these nine products is found. The value of the check digit is simply the one number between 0 and 10 which, when added to this sum, makes the total a multiple of 11.
For example, the check digit for an ISBN-10 of 0-306-40615-? is calculated as follows:
$ \begin{align} s &= (0\times 10)+(3\times 9)+(0\times 8)+(6\times 7)+(4\times 6)+(0\times 5)+(6\times 4)+(1\times 3)+(5\times 2)\\ &= 130 \end{align} $
Adding 2 to 130 gives a multiple of 11 (132 = 12 x 11) − this is the only number between 0 and 10 which does so. Therefore, the check digit has to be 2, and the complete sequence is ISBN 0-306-40615-2. The value $ x_{10} $ required to satisfy this condition might be 10; if so, an 'X' should be used.
Alternatively, modular arithmetic is convenient for calculating the check digit using modulus 11. The remainder of this sum when it is divided by 11 (i.e. its value modulo 11), is computed. This remainder plus the check digit must equal either 0 or 11. Therefore, the check digit is (11 minus the remainder of the sum of the products modulo 11) modulo 11. Taking the remainder modulo 11 a second time accounts for the possibility that the first remainder is 0. Without the second modulo operation the calculation could end up with 11 – 0 = 11 which is invalid. (Strictly speaking the first "modulo 11" is unneeded, but it may be considered to simplify the calculation.)
For example, the check digit for the ISBN-10 of 0-306-40615-? is calculated as follows:
$ \begin{align} s &= (11 - ( ((0\times 10)+(3\times 9)+(0\times 8)+(6\times 7)+(4\times 6)+(0\times 5)+(6\times 4)+(1\times 3)+(5\times 2) ) \,\bmod\, 11 ) \,\bmod\, 11\\ &= (11 - (0 + 27 + 0 + 42 + 24 + 0 + 24 + 3 + 10 ) \,\bmod\, 11) \,\bmod\, 11\\ &= (11-(130 \,\bmod\, 11))\,\bmod\, 11 \\ &= (11-(9))\,\bmod\, 11 \\ &= (2)\,\bmod\, 11 \\ &= 2 \end{align} $
Thus the check digit is 2.
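As a compact illustration, a small C sketch of this rule (the helper name is hypothetical and not part of the ISBN standard) returns the check character directly, writing 'X' when the required value is 10:

/* Hypothetical helper: compute the ISBN-10 check character from the
   first nine digits, using weights 10 down to 2 as described above. */
char isbn10_check_char(const int digits[9])
{
    int s = 0;
    for (int i = 0; i < 9; i++)
        s += (10 - i) * digits[i];      /* weights 10, 9, ..., 2 */
    int r = (11 - (s % 11)) % 11;       /* value completing a multiple of 11 */
    return (r == 10) ? 'X' : (char)('0' + r);
}

For the nine digits of 0-306-40615 this returns '2', matching the worked example.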
It is possible to avoid the multiplications in a software implementation by using two accumulators. Repeatedly adding t into s computes the necessary multiples:
// Returns ISBN error syndrome, zero for a valid ISBN, non-zero for an invalid one.
// digits[i] must be between 0 and 10.
int CheckISBN(int const digits[10])
{
    int i, s = 0, t = 0;

    for (i = 0; i < 10; i++) {
        t += digits[i];      /* t holds the running sum of the digits */
        s += t;              /* adding t each pass yields the weighted sum */
    }
    return s % 11;
}
The modular reduction can be done once at the end, as shown above (in which case s could hold a value as large as 496, for the invalid ISBN 99999-999-9-X), or s and t could be reduced by a conditional subtract after each addition.
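A hypothetical driver for CheckISBN (the digit arrays below are just 0-306-40615-2 and a copy with one altered digit) shows the syndrome in use:

#include <stdio.h>

int CheckISBN(int const digits[10]);    /* defined above */

int main(void)
{
    int good[10] = {0, 3, 0, 6, 4, 0, 6, 1, 5, 2};   /* valid: 0-306-40615-2 */
    int bad[10]  = {0, 3, 0, 6, 4, 0, 6, 1, 5, 3};   /* last digit altered */

    printf("%d %d\n", CheckISBN(good), CheckISBN(bad));   /* prints "0 1" */
    return 0;
}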
The 2005 edition of the International ISBN Agency's official manual[40] describes how the 13-digit ISBN check digit is calculated. The ISBN-13 check digit, which is the last digit of the ISBN, must range from 0 to 9 and must be such that the sum of all the thirteen digits, each multiplied by its (integer) weight, alternating between 1 and 3, is a multiple of 10.
$ (x_1 + 3x_2 + x_3 + 3x_4 + x_5 + 3x_6 + x_7 + 3x_8 + x_9 + 3x_{10} + x_{11} + 3x_{12} + x_{13} ) \equiv 0 \pmod{10}. $
The calculation of an ISBN-13 check digit begins with the first 12 digits of the thirteen-digit ISBN (thus excluding the check digit itself). Each digit, from left to right, is alternately multiplied by 1 or 3, then those products are summed modulo 10 to give a value ranging from 0 to 9. Subtracted from 10, that leaves a result from 1 to 10. A zero (0) replaces a ten (10), so, in all cases, a single check digit results.
For example, the ISBN-13 check digit of 978-0-306-40615-? is calculated as follows:
s = 9×1 + 7×3 + 8×1 + 0×3 + 3×1 + 0×3 + 6×1 + 4×3 + 0×1 + 6×3 + 1×1 + 5×3
= 9 + 21 + 8 + 0 + 3 + 0 + 6 + 12 + 0 + 18 + 1 + 15
93 / 10 = 9 remainder 3
10 – 3 = 7
Thus, the check digit is 7, and the complete sequence is ISBN 978-0-306-40615-7.
In general, the ISBN-13 check digit is calculated as follows.
$ r = \big(10 - \big(x_1 + 3x_2 + x_3 + 3x_4 + \cdots + x_{11} + 3x_{12}\big) \,\bmod\, 10\big). $
$ x_{13} = \begin{cases} r &\text{ ; } r < 10 \\ 0 &\text{ ; } r = 10 . \end{cases} $
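The same rule as a short C sketch (the helper name is hypothetical):

/* Hypothetical helper: compute the ISBN-13 check digit (0-9) from the
   first twelve digits, using the alternating 1, 3, 1, 3, ... weights. */
int isbn13_check_digit(const int digits[12])
{
    int s = 0;
    for (int i = 0; i < 12; i++)
        s += ((i % 2 == 0) ? 1 : 3) * digits[i];
    return (10 - (s % 10)) % 10;     /* the final modulo turns 10 into 0 */
}

For the twelve digits of 978-0-306-40615 this returns 7, as in the example above.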
This check system – similar to the UPC check digit formula – does not catch all errors of adjacent digit transposition. Specifically, if the difference between two adjacent digits is 5, the check digit will not catch their transposition. For instance, the above example allows this situation with the 6 followed by a 1. The correct order contributes 3×6+1×1 = 19 to the sum; while, if the digits are transposed (1 followed by a 6), the contribution of those two digits will be 3×1+1×6 = 9. However, 19 and 9 are congruent modulo 10, and so produce the same, final result: both ISBNs will have a check digit of 7. The ISBN-10 formula uses the prime modulus 11 which avoids this blind spot, but requires more than the digits 0-9 to express the check digit.
Additionally, if the sum of the 2nd, 4th, 6th, 8th, 10th, and 12th digits is tripled then added to the remaining digits (1st, 3rd, 5th, 7th, 9th, 11th, and 13th), the total will always be divisible by 10 (i.e., end in 0).
ISBN-10 to ISBN-13 conversion
The conversion is quite simple as one only needs to prefix "978" to the existing number and calculate the new checksum using the ISBN-13 algorithm.
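A minimal C sketch of that conversion (reusing the hypothetical isbn13_check_digit helper sketched earlier) drops the old check digit and recomputes the new one:

/* Hypothetical helper: convert a 10-digit ISBN (as digit values) to a
   13-digit ISBN by prefixing 978 and recomputing the check digit. */
void isbn10_to_isbn13(const int isbn10[10], int isbn13[13])
{
    isbn13[0] = 9; isbn13[1] = 7; isbn13[2] = 8;     /* the "978" prefix */
    for (int i = 0; i < 9; i++)
        isbn13[3 + i] = isbn10[i];                   /* old check digit is discarded */
    isbn13[12] = isbn13_check_digit(isbn13);         /* new EAN-13 style check digit */
}

Applied to 0-306-40615-2 this yields 978-0-306-40615-7, consistent with the check-digit examples above.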
Errors in usage
Publishers and libraries have varied policies about the use of the ISBN check digit. Publishers sometimes fail to check the correspondence of a book title and its ISBN before publishing it; that failure causes book identification problems for libraries, booksellers, and readers.[41] For example, ISBN 0-590-76484-5 is shared by two books – Ninja gaiden®: a novel based on the best-selling game by Tecmo (1990) and Wacky laws (1997), both published by Scholastic.
Most libraries and booksellers display the book record for an invalid ISBN issued by the publisher. The Library of Congress catalogue contains books published with invalid ISBNs, which it usually tags with the phrase "Cancelled ISBN".[42] However, book-ordering systems such as Amazon.com will not search for a book if an invalid ISBN is entered to its search engine.[citation needed]
OCLC often indexes by invalid ISBNs, if the book is indexed in that way by a member library.
eISBN
Only the term "ISBN" should be used; the terms "eISBN" and "e-ISBN" have historically been sources of confusion and should be avoided. If a book exists in one or more digital (e-book) formats, each of those formats must have its own ISBN. In other words, each of the three separate EPUB, Amazon Kindle, and PDF formats of a particular book will have its own specific ISBN. They should not share the ISBN of the paper version, and there is no generic "eISBN" which encompasses all the e-book formats for a title.[43]
EAN format used in barcodes, and upgrading
Currently the barcodes on a book's back cover (or inside a mass-market paperback book's front cover) are EAN-13; they may have a separate barcode encoding five digits for the currency and the recommended retail price.[44] For 10 digit ISBNs, the number "978", the Bookland "country code", is prefixed to the ISBN in the barcode data, and the check digit is recalculated according to the EAN13 formula (modulo 10, 1x and 3x weighting on alternate digits).
Partly because of an expected shortage in certain ISBN categories, the International Organization for Standardization (ISO) decided to migrate to a thirteen-digit ISBN (ISBN-13). The process began 1 January 2005 and was planned to conclude 1 January 2007.[45] As of 2011, all the 13-digit ISBNs began with 978. As the 978 ISBN supply is exhausted, the 979 prefix was introduced. Part of the 979 prefix is reserved for use with the Musicland code for musical scores with an ISMN. 10 digit ISMN codes differed visually as they began with an "M" letter; the bar code represents the "M" as a zero (0), and for checksum purposes it counted as a 3. All ISMNs are now 13 digits commencing 979-0; 979-1 to 979-9 will be used by ISBN.
Publisher identification code numbers are unlikely to be the same in the 978 and 979 ISBNs, likewise, there is no guarantee that language area code numbers will be the same. Moreover, the ten-digit ISBN check digit generally is not the same as the thirteen-digit ISBN check digit. Because the GTIN-13 is part of the Global Trade Item Number (GTIN) system (that includes the GTIN-14, the GTIN-12, and the GTIN-8), the 13-digit ISBN falls within the 14-digit data field range.[46]
Barcode format compatibility is maintained, because (aside from the group breaks) the ISBN-13 barcode format is identical to the EAN barcode format of existing 10-digit ISBNs. So, migration to an EAN-based system allows booksellers the use of a single numbering system for both books and non-book products that is compatible with existing ISBN based data, with only minimal changes to information technology systems. Hence, many booksellers (e.g., Barnes & Noble) migrated to EAN barcodes as early as March 2005. Although many American and Canadian booksellers were able to read EAN-13 barcodes before 2005, most general retailers could not read them. The upgrading of the UPC barcode system to full EAN-13, in 2005, eased migration to the ISBN-13 in North America.
ASIN (Amazon Standard Identification Number)
CODEN (serial publication identifier currently used by libraries; replaced by the ISSN for new works)
ESTC (English Short Title Catalogue)
ETTN (Electronic Textbook Track Number)
ISAN (International Standard Audiovisual Number)
ISMN (International Standard Music Number)
ISWC (International Standard Musical Work Code)
ISRC (International Standard Recording Code)
ISSN (International Standard Serial Number)
ISTC (International Standard Text Code)
ISWN (International Standard Wine Number)
LCCN (Library of Congress Control Number)
List of group-0 ISBN publisher codes
OCLC number (Online Computer Library Center number[47])
BICI (Book Item and Component Identifier)
SICI (Serial Item and Contribution Identifier)
Special:Booksources, Wikipedia's ISBN search page
VD 16 (Verzeichnis der im deutschen Sprachbereich erschienenen Drucke des 16. Jahrhunderts)(in English: Bibliography of Books Printed in the German Speaking Countries of the Sixteenth Century)
VD 17 (Verzeichnis der im deutschen Sprachraum erschienenen Drucke des 17. Jahrhunderts)(in English: Bibliography of Books Printed in the German Speaking Countries of the Seventeenth Century)
↑ Occasionally, publishers erroneously assign an ISBN to more than one title—the first edition of The Ultimate Alphabet and The Ultimate Alphabet Workbook have the same ISBN, 0-8050-0076-3. Conversely, books are published with several ISBNs: A German second-language edition of Emil und die Detektive has the ISBNs 87-23-90157-8 (Denmark), 0-8219-1069-8 (United States), 91-21-15628-X (Sweden), 0-85048-548-7 (United Kingdom) and 3-12-675495-3 (Germany).
↑ In some cases, books sold only as sets share ISBNs. For example, the Vance Integral Edition used only two ISBNs for 44 books.
↑ "The International ISBN Agency". https://www.isbn-international.org/. Retrieved 20 February 2018.
↑ Bradley, Philip (1992). "Book numbering: The importance of the ISBN". http://www.theindexer.org/files/18-1/18-1_025.pdf. (245KB). The Indexer. 18 (1): 25–26.
↑ Foster, Gordon (1966). "INTERNATIONAL STANDARD BOOK NUMBERING (ISBN) SYSTEM original 1966 report". informaticsdevelopmentinstitute.net. Archived from the original on 30 April 2011. https://web.archive.org/web/20110430024722/http://www.informaticsdevelopmentinstitute.net/isbn.html. Retrieved 20 April 2014.
↑ 4.0 4.1 "ISBN History". isbn.org. 20 April 2014. Archived from the original on 20 April 2014. http://www.isbn.org/ISBN_history. Retrieved 20 April 2014.
↑ 5.0 5.1 5.2 5.3 (in Maltese) Manwal ghall-Utenti tal-ISBN (6th ed.). Malta: Kunsill Nazzjonali tal-Ktieb. 2016. p. 5. ISBN 978-99957-889-4-0. Archived from the original on 17 August 2016. https://web.archive.org/web/20160817083617/http://ktieb.org.mt/wp-content/uploads/2016/02/Manwal-ghall-Utenti-tal-ISBN-Maltese.pdf.
↑ 6.0 6.1 (PDF) Information Standards Quarterly, 8, ISO, July 1996, p. 12, http://www.niso.org/apps/group_public/download.php/6294/ISQ_vol8_no3_July1996.pdf
↑ US ISBN Agency. "Bowker.com – Products". Commerce.bowker.com. http://commerce.bowker.com/standards/home/isbn/about_information_standards.asp. Retrieved 2015-06-11.
↑ Gregory, Daniel. "ISBN". PrintRS. Archived from the original on 16 May 2016. http://arquivo.pt/wayback/20160516011735/http%3A//www.printrs.com/isbn.htm. Retrieved 2015-06-11.
↑ (PDF) ISO 2108:1978, ISO, http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=6897
↑ TC 46/SC 9, Frequently Asked Questions about the new ISBN standard from ISO, CA: LAC‐BAC, archived from the original on 10 June 2007, https://web.archive.org/web/20070610160919/http://www.lac-bac.gc.ca/iso/tc46sc9/isbn.htm
↑ "See paragraph 5.2 of ISBN Users' Manual International edition (2012)". https://www.isbn-international.org/sites/default/files/ISBN%20Manual%202012%20-corr.pdf. (548 KB)
↑ 12.0 12.1 12.2 12.3 12.4 International ISBN Agency (2012). ISBN Users' manual (Sixth International ed.). pp. 7, 23. ISBN 978-92-95055-02-5. Archived from the original on 29 April 2014. https://www.isbn-international.org/sites/default/files/ISBN%20Manual%202012%20-corr.pdf. Retrieved 29 April 2014.
↑ Some books have several codes in the first block: e.g. A. M. Yaglom's Correlation Theory..., published by Springer Verlag, has two ISBNs, 0-387-96331-6 and 3-540-96331-6. Though Springer's 387 and 540 codes are different for English (0) and German (3); the same item number 96331 produces the same check digit: 6. Springer uses 431 as their publisher code for Japanese (4) and 4-431-96331-? would also have check digit ? = 6. Other Springer books in English have publisher code 817, and 0-817-96331-? would also get check digit ? = 6. This suggests special considerations were made for assigning Springer's publisher codes, as random assignments of different publisher codes would not lead the same item number to get the same check digit every time. Finding publisher codes for English and German, say, with this effect amounts to solving a linear equation in modular arithmetic.
↑ The International ISBN agency's ISBN User's Manual says: "The ten-digit number is divided into four parts of variable length, which must be separated clearly, by hyphens or spaces" although omission of separators is permitted for internal data processing. If present, hyphens must be correctly placed; see ISBN hyphenation definition. The actual definition for hyphenation contains more than 220 different registration group elements with each one broken down into a few to several ranges for the length of the registrant element (more than 1,000 total). The document defining the ranges, listed by agency, is 29 pages.
↑ Canada, Library and Archives. "ISBN Canada". http://www.bac-lac.gc.ca/eng/services/isbn-canada/Pages/isbn-canada.aspx. Retrieved 2016-01-19.
↑ "About the U.S. ISBN Agency". https://www.myidentifiers.com.au/isbn/US_isbn_agency.
↑ "Bowker -- ISBN". Thorpe-Bowker. 5 Jan 2009. https://www.myidentifiers.com.au/isbn/main. Retrieved 29 March 2012.
↑ "TABELA DE PREÇOS DOS SERVIÇOS". Biblioteca Nacional, Government of Brazil. http://www.isbn.bn.br/website/tabela-de-precos. Retrieved 8 September 2015.
↑ "Introduction to Books Registration". Hong Kong Public Libraries. https://www.hkpl.gov.hk/en/about-us/services/book-registration/introduction.html. Retrieved 12 January 2017.
↑ "Union HRD Minister Smt. Smriti Zubin Irani Launches ISBN Portal". http://pib.nic.in/newsite/PrintRelease.aspx?relid=138686.
↑ "How to get ISBN in India". http://www.24by7publishing.com/isbn-for-self-publishers---independent-authors-in-india.html.
↑ "ISBN – Chi siamo e contatti" (in italian). EDISER srl. http://www.isbn.it/LAGENZIA.aspx. Retrieved 3 January 2015.
↑ "ISBN – Tariffe Servizi ISBN" (in italian). EDISER srl. http://www.isbn.it/TARIFFE.aspx. Retrieved 3 January 2015.
↑ "ISBN". Kunsill Nazzjonali tal-Ktieb. 2016. Archived from the original on 23 October 2016. https://web.archive.org/web/20161023123140/http://ktieb.org.mt/?page_id=22.
↑ (in Maltese) Manwal ghall-Utenti tal-ISBN (6th ed.). Malta: Kunsill Nazzjonali tal-Ktieb. 2016. pp. 1–40. ISBN 978-99957-889-4-0. Archived from the original on 17 August 2016. https://web.archive.org/web/20160817083617/http://ktieb.org.mt/wp-content/uploads/2016/02/Manwal-ghall-Utenti-tal-ISBN-Maltese.pdf.
↑ "Gazzetta tal-Gvern ta' Malta". Government Gazette. 23 January 2015. p. 582. Archived from the original on 23 November 2016. https://web.archive.org/web/20161123220053/https://govcms.gov.mt/en/Government/Government%20Gazette/Documents/2015/01/Government%20Gazette%20-%2023%20January.pdf.
↑ "ISBNs, ISSNs, and ISMNs". New Zealand Government. http://natlib.govt.nz/publishers-and-authors/isbns-issns-and-ismns. Retrieved 19 January 2016.
↑ "International Standard Book Number". National Library of the Philippines. http://web.nlp.gov.ph/nlp/?q=node/645. Retrieved December 25, 2017.
↑ "Nielsen UK ISBN Agency". Nielsen UK ISBN Agency. http://www.isbn.nielsenbook.co.uk/controller.php?page=123. Retrieved 2 January 2015.
↑ "Bowker -- ISBN". RR Bowker. 8 March 2013. http://www.bowker.com/en-US/products/servident_isbn.shtml. Retrieved 8 March 2013.
↑ "ISBN Ranges". isbn-international.org. 29 April 2014. Select the format you desire and click on the Generate button. Archived from the original on 29 April 2014. https://www.isbn-international.org/range_file_generation#. Retrieved 29 April 2014.
↑ See a complete list of group identifiers. ISBN.org sometimes calls them group numbers. Their table of identifiers now refers to ISBN prefix ranges, which must be assumed to be group identifier ranges.
↑ Hailman, Jack Parker (2008). Coding and redundancy: man-made and animal-evolved signals. Harvard University Press. p. 209. ISBN 978-0-674-02795-4.
↑ International ISBN Agency (5 December 2014). "International ISBN Agency – Range Message (pdf sorted by prefix)". isbn-international.org. p. 29. https://www.isbn-international.org/export_rangemessagebyprefix.pdf. Retrieved 15 December 2014.
↑ See Publisher's International ISBN Directory
↑ Splane, Lily (2002). The Book Book: A Complete Guide to Creating a Book on Your Computer. Anaphase II Publishing. pp. 37. ISBN 978-0-945962-14-4.
↑ "ISBN Ranges". isbn-international.org. International ISBN Agency. 15 September 2014. https://www.isbn-international.org/range_file_generation#. Retrieved 15 September 2014.
↑ "ISBN Users' Manual – 4. Structure of ISBN". Isbn.org. Archived from the original on 22 May 2013. https://web.archive.org/web/20130522043458/http://www.isbn.org/standards/home/isbn/international/html/usm4.htm. Retrieved 2013-05-27.
↑ For example, I'saka: a sketch grammar of a language of north-central New Guinea. Pacific Linguistics. ISBN "0-85883-554-4".
↑ "ISBN Users' Manual International edition (2012)". Archived from the original on 29 April 2014. https://web.archive.org/web/20140429101742/https://www.isbn-international.org/sites/default/files/ISBN%20Manual%202012%20-corr.pdf. (284 KB)
↑ Lorimer, Rowland; Shoichet, Jillian; Maxwell, John W. (2005). Book Publishing I. CCSP Press. pp. 299. ISBN 978-0-9738727-0-5.
↑ 020 – International Standard Book Number (R) – MARC 21 Bibliographic – Full. Library of Congress.
↑ "The Myth of the eISBN: Why Every eBook Edition Needs a Unique Number – Publishing services for self publishing authors and businesses" (in en-US). Publishing services for self publishing authors and businesses. 2013-06-28. http://www.sellbox.com/myth-eisbn-every-ebook-edition-needs-unique-number/.
↑ Frequently asked questions, US: ISBN, 12 March 2014, http://www.isbn-us.com/blog/2014/03/12/isbn-information-frequently-asked-questions/ — including a detailed description of the EAN-13 format.
↑ "ISBN", ISO TC49SC9 (FAQ), CA: Collections, http://www.collectionscanada.ca/iso/tc46sc9/isbn.htm
↑ "Are You Ready for ISBN-13?", Standards, ISBN, http://www.isbn.org/standards/home/isbn/transition.asp
↑ "xISBN (Web service)". Xisbn.worldcat.org. http://xisbn.worldcat.org/xisbnadmin/xoclcnum/index.htm. Retrieved 2013-05-27.
International ISBN Agency—coordinates and supervises the worldwide use of the ISBN system
Numerical List of Group Identifiers—List of language/region prefixes
Free conversion tool: ISBN-10 to ISBN-13 & ISBN-13 to ISBN-10 from the ISBN agency. Also shows correct hyphenation & verifies if ISBNs are valid or not.
"Implementation guidelines". Archived from the original on 12 September 2004. https://web.archive.org/web/20040912203458/http://www.isbn-international.org/en/download/implementation-guidelines-04.pdfnone (51.0 KB) for the 13-digit ISBN code
Books at the Open Directory Project
"Are You Ready for ISBN-13?". R. R. Bowker LLC. http://www.isbn.org/standards/home/isbn/transition.asp.
RFC 3187—Using International Standard Book Numbers as Uniform Resource Names (URN)
friedman.1.data
First Friedman Dataset and a variation
Generate X and Y values from the 10-dim "first" Friedman data set used to validate the Multivariate Adaptive Regression Splines (MARS) model, and a variation involving boolean indicators. This test function has three non-linear and interacting variables, along with two linear ones, and five which are irrelevant. The version with indicators has parts of the response turned on based on the setting of the indicators.
friedman.1.data(n = 100)
fried.bool(n = 100)
n: Number of samples desired
In the original formulation, as implemented by friedman.1.data, the function has 10-dim inputs X drawn from Unif(0,1), and responses are \(N(m(X),1)\) where \(m(\mathbf{x}) = E[f(\mathbf{x})]\) and $$m(\mathbf{x}) = 10\sin(\pi x_1 x_2) + 20(x_3-0.5)^2 + 10x_4 + 5x_5$$
The variation fried.bool uses indicators \(I\in \{1,2,3,4\}\). The function also has 10-dim inputs X with columns distributed as Unif(0,1), and responses are \(N(m(\mathbf{x},I), 1)\) where \(m(\mathbf{x},I) = E[f(\mathbf{x},I)]\) and $$m(\mathbf{x},I) = f_1(\mathbf{x})_{[I=1]} + f_2(\mathbf{x})_{[I=2]} + f_3(\mathbf{x})_{[I=3]} + m([x_{10},\cdots,x_1])_{[I=4]}$$ where $$f_1(\mathbf{x}) = 10\sin(\pi x_1 x_2), \; f_2(\mathbf{x}) = 20(x_3-0.5)^2, \; \mbox{and } f_3(\mathbf{x}) = 10x_4 + 5x_5.$$
The indicator I is coded in binary in the output data frame as: c(0,0,0) for I=1, c(0,0,1) for I=2, c(0,1,0) for I=3, and c(1,0,0) for I=4.
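As a language-neutral illustration, a stand-alone C sketch of the original data-generating process (not the package's R code; the Box-Muller helper and the fixed seed are assumptions of this sketch) is:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define PI 3.14159265358979323846

static double runif01(void) { return (rand() + 1.0) / (RAND_MAX + 2.0); }

/* N(0,1) draw via the Box-Muller transform. */
static double rnorm01(void)
{
    return sqrt(-2.0 * log(runif01())) * cos(2.0 * PI * runif01());
}

int main(void)
{
    const int n = 100;          /* number of samples desired */
    double x[10];

    srand(1);
    printf("Ytrue,Y\n");
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < 10; j++)
            x[j] = runif01();   /* X.1 ... X.10 ~ Unif(0,1); the last five are irrelevant */
        double m = 10.0 * sin(PI * x[0] * x[1])            /* 10 sin(pi x1 x2) */
                 + 20.0 * (x[2] - 0.5) * (x[2] - 0.5)      /* 20 (x3 - 0.5)^2 */
                 + 10.0 * x[3] + 5.0 * x[4];               /* 10 x4 + 5 x5 */
        printf("%f,%f\n", m, m + rnorm01());               /* Ytrue, then noisy Y */
    }
    return 0;
}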
Output is a data.frame with columns
X.1, …, X.10: the 10-d randomly sampled inputs
I.1, …, I.3: boolean version of the indicators, provided only for fried.bool, as described above
Y: sample responses (with N(0,1) noise)
Ytrue: true responses (without noise)
An example using the original version of the data (friedman.1.data) is contained in the first package vignette: vignette("tgp"). The boolean version fried.bool is used in the second vignette, vignette("tgp2").
Gramacy, R. B. (2007). tgp: An R Package for Bayesian Nonstationary, Semiparametric Nonlinear Regression and Design by Treed Gaussian Process Models. Journal of Statistical Software, 19(9). https://www.jstatsoft.org/v19/i09
Robert B. Gramacy, Matthew Taddy (2010). Categorical Inputs, Sensitivity Analysis, Optimization and Importance Tempering with tgp Version 2, an R Package for Treed Gaussian Process Models. Journal of Statistical Software, 33(6), 1--48. https://www.jstatsoft.org/v33/i06/.
Friedman, J. H. (1991). Multivariate adaptive regression splines. "Annals of Statistics", 19, No. 1, 1--67.
Gramacy, R. B., Lee, H. K. H. (2008). Bayesian treed Gaussian process models with an application to computer modeling. Journal of the American Statistical Association, 103(483), pp. 1119-1130. Also available as ArXiv article 0710.4536 https://arxiv.org/abs/0710.4536
Chipman, H., George, E., & McCulloch, R. (2002). Bayesian treed models. Machine Learning, 48, 303--324.
https://bobby.gramacy.com/r_packages/tgp/
bgpllm, btlm, blm, bgp, btgpllm, bgp
fried.bool
Chebyshevskii Sbornik
Chebyshevskii Sb., 2018, Volume 19, Issue 3, Pages 257–269 (Mi cheb693)
On algebra and arithmetic of binomial and Gaussian coefficients
U. M. Pachev
Kabardino-Balkar State University
Abstract: In this paper we consider questions relating to the algebraic and arithmetic properties of binomial, polynomial, and Gaussian coefficients.
For the central binomial coefficients $\binom{2p}{p}$ and $\binom{2p-1}{p-1}$, a new congruence property is obtained modulo $p^3 \cdot ( 2p-1 )$, a modulus which is not a prime power, where $p$ and $2p-1$ are both prime numbers. The proof uses Wolstenholme's theorem, which states that for $p \geqslant 5$ these coefficients are congruent to 2 and 1, respectively, modulo $p^3$.
In the part relating to the Gaussian coefficients $\binom{n}{k}_q$, the algebraic and arithmetic properties of these numbers are investigated. Using the algebraic interpretation of the Gaussian coefficients, it is established that the number of $k$-dimensional subspaces of an $n$-dimensional vector space over a finite field of $q$ elements equals the number of its $(n-k)$-dimensional subspaces, and that the number $q$ on which the Gaussian coefficient is based must be a power of the prime that is the characteristic of this finite field.
Lower and upper bounds, sufficiently close to the exact value, are obtained for the sum $\sum_{k = 0}^{n} \binom{n}{k}_q$ of all Gaussian coefficients (a formula for the exact value of this sum has not yet been established), together with an asymptotic formula as $q \to \infty$. In view of the absence of a convenient generating function for Gaussian coefficients, we use the original definition of the Gaussian coefficient $\binom{n}{k}_q$ and assume that $q>1$.
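For reference, the Gaussian coefficient referred to here admits the standard closed form
$$\binom{n}{k}_q = \frac{(q^{n}-1)(q^{n-1}-1)\cdots(q^{n-k+1}-1)}{(q^{k}-1)(q^{k-1}-1)\cdots(q-1)},$$
which reduces to the ordinary binomial coefficient $\binom{n}{k}$ in the limit $q \to 1$.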
In the study of the arithmetic properties of divisibility and congruence of Gaussian coefficients, the notion of a primitive root with respect to a given modulus is used. Conditions for the divisibility of the Gaussian coefficients $\binom{p}{k}_q$ and $\binom{p^2}{k}_q$ by a prime number $p$ are obtained, and the sum of all these coefficients is computed modulo the prime $p$.
In the final part, some unsolved problems in number theory are presented, connected with binomial and Gaussian coefficients, which may be of interest for further research.
Keywords: central binomial coefficients, Wolstenholme's theorem, Gaussian coefficient, sum of Gaussian coefficients, divisibility by a prime number, congruences modulo a prime, primitive roots for a given modulus.
DOI: https://doi.org/10.22405/2226-8383-2018-19-3-257-269
UDC: 511.17+519.114
Received: 30.07.2018
Accepted: 15.10.2018
Citation: U. M. Pachev, "On algebra and arithmetic of binomial and Gaussian coefficients", Chebyshevskii Sb., 19:3 (2018), 257–269
Predicting response to pembrolizumab in metastatic melanoma by a new personalization algorithm
Neta Tsur1,
Yuri Kogan1,2,
Evgenia Avizov-Khodak3,
Désirée Vaeth4,
Nils Vogler4,
Jochen Utikal5,6,
Michal Lotem3 &
Zvia Agur1,2
At present, immune checkpoint inhibitors, such as pembrolizumab, are widely used in the therapy of advanced non-resectable melanoma, as they induce more durable responses than other available treatments. However, the overall response rate does not exceed 50% and, considering the high costs and low life expectancy of nonresponding patients, there is a need to select potential responders before therapy. Our aim was to develop a new personalization algorithm which could be beneficial in the clinical setting for predicting time to disease progression under pembrolizumab treatment.
We developed a simple mathematical model for the interactions of an advanced melanoma tumor with both the immune system and the immunotherapy drug, pembrolizumab. We implemented the model in an algorithm which, in conjunction with clinical pretreatment data, enables prediction of the personal patient response to the drug. To develop the algorithm, we retrospectively collected clinical data of 54 patients with advanced melanoma, who had been treated by pembrolizumab, and correlated personal pretreatment measurements to the mathematical model parameters. Using the algorithm together with the longitudinal tumor burden of each patient, we identified the personal mathematical models, and simulated them to predict the patient's time to progression. We validated the prediction capacity of the algorithm by the Leave-One-Out cross-validation methodology.
Among the analyzed clinical parameters, the baseline tumor load, the Breslow tumor thickness, and the status of nodular melanoma were significantly correlated with the activation rate of CD8+ T cells and the net tumor growth rate. Using the measurements of these correlates to personalize the mathematical model, we predicted the time to progression of individual patients (Cohen's κ = 0.489). Comparison of the predicted and the clinical time to progression in patients progressing during the follow-up period showed moderate accuracy (R2 = 0.505).
Our results show for the first time that a relatively simple mathematical mechanistic model, implemented in a personalization algorithm, can be personalized by clinical data, evaluated before immunotherapy onset. The algorithm, currently yielding moderately accurate predictions of individual patients' response to pembrolizumab, can be improved by training on a larger number of patients. Algorithm validation by an independent clinical dataset will enable its use as a tool for treatment personalization.
Advanced melanoma is the most deadly skin cancer, with a total of 91,279 new cases and 9320 deaths expected in 2018 in the United States alone [1]. While early-detected melanoma is mostly curable [2, 3], advanced metastatic melanoma is life-threatening. Over the past 10 years, increased biological understanding and access to innovative therapeutic modalities have transformed advanced melanoma into a new oncological model for treating solid cancers [4]. In particular, immune checkpoint blockers (ICB) have shown major success in the treatment of advanced melanoma [5, 6]. The monoclonal antibody ipilimumab, blocking the cytotoxic T-lymphocyte antigen 4 (CTLA-4), was the first checkpoint blocker approved for the treatment of advanced melanoma, showing an objective response rate of 6–11% [7, 8]. This approval was followed by those of pembrolizumab and nivolumab, two monoclonal antibody drugs which block the programmed cell death 1 (PD-1) receptor and show response rates of 30–40% [9, 10]. More recently, a highly toxic combination of ipilimumab and nivolumab was also approved for the treatment of advanced melanoma, with a resulting response rate of about 60% [11, 12]. In spite of the relatively high response rate of this treatment combination, PD-1 monotherapy, such as the one with pembrolizumab, still remains a pivotal treatment for patients with advanced melanoma, due to its relatively low toxicity and acceptable response rate. Moreover, results of the phase Ib KEYNOTE-001 trial show that a high proportion of patients with metastatic melanoma who had achieved complete response on pembrolizumab maintained their complete response for prolonged durations after treatment discontinuation [13]. As ICBs become widely available, the ability to forecast the duration of individual response becomes critical. How to predict the patient's response, and adjust treatment plans accordingly, is a major challenge in current immunotherapy practice [14].
Response rates would be improved and many treatment complications would be prevented if one could identify good responders before therapy. Indeed, several biomarkers for response to pembrolizumab have been analyzed, and the expression of programmed death-ligand 1 (PD-L1) on tumor and immune cells emerged as an acceptable response predictor [15]. Yet, the significant fraction of PD-L1-negative patients who benefit from pembrolizumab suggests that PD-L1 cannot serve as a reliable response biomarker on its own [16]. In another endeavor, response scales were developed based on several clinical factors, including localization of metastases, various blood measures, age, and gender. These scoring systems enable stratification of the patient cohort according to the overall response rate and the probability of surviving a year from treatment initiation [17, 18]. In other studies, certain immune signatures in the tumor tissue [19, 20] and blood [21] were also associated with response. However, the utility of these methodologies has yet to be validated [21].
Acknowledging the urgent need for reliable response predictors, mathematical modelers have joined the efforts to develop tools for predicting personal response to immunotherapy [22]. For example, Kogan et al. [23] proposed a general algorithm for personalizing prostate cancer immunotherapy during the treatment, for predicting future response. To this end the authors constructed personalized mathematical models and validated their prediction accuracy retrospectively, by accruing data from a clinical trial of a prostate cancer vaccine. This was done using a new methodology of iterative real-time in-treatment evaluation of patient-specific parameters. Another algorithm for predicting response to cancer therapy is put forward in Elishmereni et al. [24], addressing hormonal treatment of patients with prostate cancer. Here too, the authors developed personalized mathematical models, describing the dynamic pattern of Prostate Specific Antigen. By inputting the personal clinical PSA levels during the first months of treatment, the authors created personal models, and correctly predicted the time to biochemical failure under androgen deprivation therapy in 19 out of 21 (90%) patients with hormone-sensitive prostate cancer.
In the above described algorithms, prediction is made possible only by inputting personal clinical measurements collected during the first months of therapy. While this approach may still be of significant benefit in the design of clinical trials or in the clinics [25, 26], most physicians would prefer to forecast the patient's response to the drug before treatment onset.
This is the primary goal set in the present work: to develop an algorithm which could be of benefit in the current clinical practice. This will be achieved, first and foremost, by predicting the patient response to therapy before its administration, and secondly, by inputting data that are routinely collected in the clinics, e.g., describing disease progression by the sum of diameters (SOD), as prescribed by the Response Evaluation Criteria In Solid Tumors 1.1 (RECIST 1.1). Most importantly, our goal is to generate instructive output information for the physician's decision-making process, e.g., aligning the prediction of disease progression with its effective confirmation by computed tomography (CT) or magnetic resonance imaging (MRI).
At the core of our computational algorithm lies a mathematical mechanistic model for the interactive dynamics of the disease, the cellular immune arm and the drug. By inputting clinical and molecular measurements of the patient's parameters before treatment, the algorithm enables personalizing the model and simulating it to predict the time to disease progression (TTP) of the individual patient under pembrolizumab. Such predictions are expected to assist the treating oncologists in planning the therapy program of the patient.
In this section we describe the mathematical mechanistic model we have developed, the model personalization method, the clinical data used for model calibration, and their application for the development of the personalization algorithm.
Mathematical mechanistic model
The mechanistic model we have developed is deliberately simple (skeletal), taking into account only the main interactions between the melanoma tumor, the cellular immune system, and the immunotherapeutic drug, pembrolizumab (Fig. 1). Model simplification, incorporating only the bare bones of the system, makes it easier to isolate the effect of each chosen variable and to achieve our stated goal, while retaining the fidelity of the description.
A schematic representation of the model for the main interactions between the melanoma cancer, the cellular immune system, and the immune checkpoint blocker pembrolizumab. The model is based on the following assumptions: tumor cells stimulate antigen-presenting cells, APCs, depending on the tumor immunogenicity; functional APCs activate effector CD8+ T cells, which may eliminate tumor cells; tumor infiltrating lymphocytes, TILs, become exhausted, independently of the tumor cells elimination; tumor growth is determined by its net growth rate and by the rate of its destruction by Effector TILs; immunotherapy extends the activation of effector TILs, and reduces their exhaustion. Annotated ellipses represent the dynamic variables of the model, while arrows represent the interaction between them (see legends in the box)
The model equations for the dynamics of APCs (\(A_{pc}\)), T lymphocytes (\(T_{il}\)) and cancer cells (\(M_{el}\)) are given below here, while the definitions and estimated values of the model parameters are summarized in Table 1:
$$\frac{dA_{pc}}{dt} = \alpha_{im} \cdot \frac{M_{el}}{M_{el} + b} - \mu_{APC} \cdot A_{pc},$$
(1a)
$$\frac{dT_{il}}{dt} = a_{pem} \cdot \alpha_{eff} \cdot A_{pc} - b_{pem} \cdot \mu_{eff} \cdot T_{il},$$
(1b)
$$\frac{dM_{el}}{dt} = \gamma_{mel} \cdot M_{el} - \upsilon_{mel} \cdot \frac{T_{il} \cdot M_{el}}{M_{el} + g}.$$
(1c)
Table 1 Model parameters
Numerical analyses and simulations were performed using the stiff ODE solver ode15s of Matlab R2016a (The MathWorks, UK). From the initial time of the simulation (t = 0) to the time of treatment initiation (t = t1), the model in Tsur et al. [39] was simulated, and from t1 until the end of the simulation period, the model in Eq. (1) was simulated. The effect of pembrolizumab on the immune system and the tumor was implemented here through the parameters \(a_{pem}\) and \(b_{pem}\).
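As a concrete illustration, Eq. (1) can be integrated with any stiff ODE solver. The study used Matlab's ode15s; the following minimal Python/scipy sketch uses placeholder parameter values standing in for the Table 1 estimates (which are not reproduced here), so it illustrates the structure of the system rather than the fitted model:

import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameter values; the actual estimates are those listed in Table 1.
p = dict(alpha_im=1.0, b=1.0, mu_APC=0.1,          # APC stimulation and death
         alpha_eff=1.0, mu_eff=0.1,                # TIL activation and exhaustion
         gamma_mel=np.log(2) / 60.0,               # net tumor growth rate from a 60-day doubling time
         v_mel=1.0, g=1.0,                         # TIL killing rate and saturation constant
         a_pem=1.5, b_pem=0.5)                     # personal pembrolizumab-effect parameters

def rhs(t, y):
    A_pc, T_il, M_el = y
    dA = p["alpha_im"] * M_el / (M_el + p["b"]) - p["mu_APC"] * A_pc            # Eq. (1a)
    dT = p["a_pem"] * p["alpha_eff"] * A_pc - p["b_pem"] * p["mu_eff"] * T_il   # Eq. (1b)
    dM = p["gamma_mel"] * M_el - p["v_mel"] * T_il * M_el / (M_el + p["g"])     # Eq. (1c)
    return [dA, dT, dM]

# Integrate one year of treatment with a stiff method (BDF stands in for Matlab's ode15s).
sol = solve_ivp(rhs, (0, 365), y0=[0.0, 0.0, 50.0], method="BDF", dense_output=True)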
The study population included 54 patients with advanced melanoma, who had been treated, or were still being treated, with pembrolizumab as a single agent between 09/01/2013 and 03/03/2017, at Hadassah Medical Center (HMC; 33 patients) and the University Medical Center Mannheim (UMM; 21 patients). Recruitment to the retrospective study was subject to compliance with the inclusion/exclusion criteria listed hereafter. Thirty-five of the patients in our dataset did not have a documented progression during their follow-up period and were therefore censored, as will be described below.
Inclusion criteria:
Gender: female, male.
Age: 18 years and older at the start of treatment.
Histologically confirmed unresectable Stage III or Stage IV melanoma, as per AJCC staging system.
Prior radiotherapy or other oncological treatments must have been completed at least 2 weeks prior to drug administration.
Measurable disease by CT, or Positron Emission Tomography CT (PET-CT), or MRI, per Response Evaluation Criteria In Solid Tumors (RECIST 1.1) [40].
Patient has at least one quantitative measurement of at least one target lesion (primary tumor or metastasis) before treatment.
Patient has at least one quantitative measurement of at least one target lesion (primary tumor or metastasis) during or after the treatment.
Patient has at least one recorded visit to the treating oncologist before treatment.
Patient has at least one recorded visit to the treating oncologist during or after the treatment.
Treatment as per Standard of care for melanoma.
Exclusion criteria:
History of another malignancy within the previous 2 years, except for adequately treated Stage I or II cancer currently in complete remission, or any other cancer that has been in complete remission for at least 2 years.
Ocular melanoma.
The data collected from the medical records of the patients included demographics, information about the diagnosis and primary tumor, staging, applied oncological treatments, detailed information about administration of pembrolizumab (specific protocols), imaging data and blood measures, including relative lymphocyte counts. Baseline information and follow-up duration of the patients are summarized in Table 2.
Table 2 Characteristics of the patient cohort, and baseline information
Imaging data
Baseline and follow-up CT and MRI scans were retrospectively reviewed by radiologists at HMC and UMM. The time interval between consecutive scans was around 3 months. In each scan the maximal and perpendicular diameters of each morphologically detectable lesion in the x–y plane were evaluated, using the GE Centricity PACS software of GE Healthcare at HMC, and a dedicated post-processing software (Syngo.Via, Siemens Healthineers, Erlangen, Germany) at UMM. We documented the organ each lesion was found in, and noted new lesions appearing in follow-up scans.
Response evaluation and identification of target lesions were made based on the RECIST 1.1 guidelines. A target lesion is defined by its size, having a minimum diameter of 10 mm for a non-nodal lesion, or a minimum diameter of 15 mm in the case of a lymph node. In accordance with RECIST 1.1, we selected up to two target lesions per organ and a maximum of five in total. We summed up the target-lesion diameters to obtain the SOD at each tumor size assessment for every patient. At each time of clinical tumor size evaluation we assigned to the patient one of the RECIST 1.1-defined response types, as specified in [40].
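A minimal sketch of how the SOD and a RECIST 1.1 response category can be derived from the measured target-lesion diameters (the helper names are illustrative; only the target-lesion rules quoted above are modelled, and new-lesion and non-target rules are omitted):

def sod(diameters_mm):
    # Sum of diameters of the selected target lesions at one assessment.
    return sum(diameters_mm)

def recist_category(sod_now, sod_baseline, sod_nadir):
    # RECIST 1.1 response category from target-lesion SOD only.
    if sod_now == 0:
        return "CR"                                            # complete response
    if sod_now >= 1.2 * sod_nadir and sod_now - sod_nadir >= 5:
        return "PD"                                            # >= 20% (and >= 5 mm) increase over the nadir
    if sod_now <= 0.7 * sod_baseline:
        return "PR"                                            # >= 30% shrinkage from baseline
    return "SD"                                                # stable disease otherwise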
Development of the personalization algorithm
Selection of the personal model parameters
In order to personalize our mathematical model we first selected the model parameters which are expected to significantly affect the response and to vary among patients. We chose to personalize, that is, to adjust the values within a certain range, the following two parameters: (i) effect of pembrolizumab on the activation of CD8+ T cells (\(a_{pem}\)), (ii) tumor growth rate (\(\gamma_{mel}\)). The choice to personalize these two parameters was based on our theoretical analysis of the mathematical model described in Eq. (1), showing that changes in the maximum effect of pembrolizumab on the activation of CD8+ T cells, \(a_{pem}\), affect the balance between tumor growth rate and the efficacy of the immune system. We inferred that this parameter varies among patients. Furthermore, stability analysis of the mathematical model shows that in an untreated host, the net growth rate of tumor cells, \(\gamma_{mel}\), is the parameter having the largest effect on the tumor dynamics [39]. For this reason, we consider this parameter as an individual parameter as well.
For personalizing the mathematical model, we set the range of \(a_{pem}\) values to allow different tumor dynamics, as a result of the therapy. Moreover, we estimated the range of \(\gamma_{mel}\) from the doubling time (\(\Delta t\)) of human melanoma metastases: \(\gamma_{mel} = \ln(2)/\Delta t\), which was estimated by Carlson [34] and Joseph et al. [41], as described in detail in Table 1. In order to improve parameter identifiability, we dichotomized \(\gamma_{mel}\) to be equal to either the minimum or the median of its range. The ranges of the personalization parameters are summarized in Table 3. For the first iteration of the fitting algorithm we chose the initial guess of each personalization parameter as the median of its range. As mentioned above, all other parameters were fixed to their values reported in Table 1.
Table 3 Ranges of the personalization parameters for identifying the patient-specific model parameters
Creating the personal models
To fit the model to data from the training set, we minimized the sum of squared errors of the observed and simulated tumor size, using the 'fmincon' function in Matlab. The goodness of fit was determined by calculating the coefficient of determination, R-squared, for the fitted versus clinically measured tumor sizes of all the patients in the dataset. Subsequently, we determined the functions that enable personalization of the mathematical model, by considering several clinically measured factors whose values were available, for the majority of the patients in this study, at a minimum of one time point before treatment onset or at an early stage of the treatment (Table 4). Some of these factors, including lactate dehydrogenase (LDH) levels, relative counts of blood lymphocytes (LY%), and baseline SOD, are known to be associated with the response to pembrolizumab [17, 42]. The relationships between the other clinical variables considered for the personalization functions, and outcome under pembrolizumab, were examined by correlation analysis. We used four standard statistical methods to analyze the relationships between personal clinical data and model parameters: Pearson coefficient, receiver operating characteristic (ROC) analysis, confusion table and Cohen's kappa (κ). The obtained relationships were the basis for the formulation of the personalization functions.
Table 4 Availability of measurements in the recruited patient dataset
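For illustration, the per-patient fit described above can be phrased as a bound-constrained least-squares problem. The sketch below uses scipy in place of Matlab's fmincon and reuses rhs and the parameter dictionary p from the ODE sketch earlier in the Methods; the observation times, SOD values and bounds are illustrative placeholders for the patient data and the Table 3 ranges:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def sse(theta, t_obs, sod_obs):
    # Sum of squared errors between the observed SOD and the model-simulated tumor size.
    # Reuses rhs and the parameter dict p defined in the ODE sketch above.
    p["a_pem"], p["gamma_mel"] = theta
    sim = solve_ivp(rhs, (0, t_obs[-1]), y0=[0.0, 0.0, sod_obs[0]],
                    method="BDF", t_eval=t_obs)
    return np.sum((sim.y[2] - sod_obs) ** 2)

t_obs = np.array([0.0, 90.0, 180.0, 270.0])           # days of imaging assessments
sod_obs = np.array([50.0, 42.0, 35.0, 30.0])           # illustrative SOD series (mm)
fit = minimize(sse, x0=[1.0, np.log(2) / 60.0], args=(t_obs, sod_obs),
               bounds=[(0.1, 10.0), (1e-3, 2e-2)], method="L-BFGS-B")
a_pem_fit, gamma_mel_fit = fit.x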
To overcome variations in the clinical and molecular values that are due to differences in the measurement and calibration techniques, used in each medical center, we normalized each measured value (\(X\)), relative to its given range, between \(X_{min}\) and \(X_{max}\), as specified for each of the two subsets. The normalized covariate value (\(\hat{X}\)) is
$$\hat{X} = \frac{X - X_{min}}{X_{max} - X_{min}}.$$
From the individual model fits we obtained the personal model parameters for all patients in the training set. From the clinical record files of each patient we retrieved the relevant clinical measurements for all patients at baseline, and around the time of the first follow-up imaging assessment. For estimation of \(a_{pem}\) from the clinical/molecular measurements we used the k-Nearest Neighbors (k-NN) algorithm. The number of nearest neighbors, k, was taken to be the integer part of the square root of the total number of patients (N = 54), i.e., k = 7. In case of missing data for a clinical/molecular measurement, we replaced the missing value by the average value for this clinical factor, obtained from the data of the rest of the patients. Missing values of binary clinical/molecular factors were set to 0. We validated the resulting personalization functions by the Leave-One-Out cross validation (LOO CV) method. In order to evaluate the personal \(\gamma_{mel}\) values from the clinical measurements, we trained a classification tree, using the LOO CV, as above. After predicting the parameter values of each patient we simulated the personalized models (using the ode15s solver of Matlab), derived the simulated tumor size at the days of the imaging assessments, and evaluated TTP based on RECIST 1.1 [40].
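A minimal sketch of this personalization machinery (min-max normalization of a covariate, k-NN regression for ln(a_pem), and Leave-One-Out cross-validation) using scikit-learn; the covariate values and the "fitted" ln(a_pem) values below are synthetic placeholders, not patient data:

import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import LeaveOneOut
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(1)
baseline_sod = rng.uniform(10, 200, size=(54, 1))                      # one covariate per patient
ln_a_pem = 1.0 - 0.01 * baseline_sod[:, 0] + rng.normal(0, 0.2, 54)    # synthetic fitted ln(a_pem)

X = MinMaxScaler().fit_transform(baseline_sod)        # (X - X_min) / (X_max - X_min)
k = int(np.sqrt(len(X)))                               # k = 7 for N = 54

preds = np.empty(len(X))
for train, test in LeaveOneOut().split(X):             # Leave-One-Out cross-validation
    knn = KNeighborsRegressor(n_neighbors=k).fit(X[train], ln_a_pem[train])
    preds[test] = knn.predict(X[test])

r2 = 1 - np.sum((ln_a_pem - preds) ** 2) / np.sum((ln_a_pem - ln_a_pem.mean()) ** 2)
# gamma_mel would analogously be classified with a decision tree (e.g., sklearn.tree.DecisionTreeClassifier).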
Analysis of the TTP results
To evaluate the quality of TTP prediction, we compared the predicted versus the clinically observed TTP in three time intervals, including 0–90 days, 90–150 days and 150–365 days, from pembrolizumab initiation. We also took into account the number of patients for whom no disease progression was indicated during their follow-up period. As was mentioned above, from the practical point of view, the resolution of TTP predictions should be as coarse as the planned CT/MRI scanning schedule.
We categorized our predictions according to these time intervals and generated a confusion table. To calculate the corresponding value of Cohen's kappa (κ), we applied the multidimensional formula of Warrens [43], which uses the proportion \(p_{1}\) of patients whose simulated TTP time interval (\(t_{s}\)) matched the reference one (\(t_{r}\)). The proportion \(p_{1}\) is the ratio between the number of these patients, denoted \(N_{TTP}(t_{s} = t_{r})\), and the total number of patients in the cohort (\(N = 54\)):
$$p_{1} \equiv \sum_{t = 1}^{4} \frac{N_{TTP}\left( t_{s} = t_{r} = t \right)}{N}.$$
The chance-agreement proportion, \(p_{2}\), is calculated from the numbers of simulated and observed disease progression incidences in each time interval, denoted \(N_{TTP}(t_{s})\) and \(N_{TTP}(t_{r})\), respectively:
$$p_{2} \equiv \sum_{t = 1}^{4} \frac{N_{TTP}\left( t_{s} = t \right)}{N} \cdot \frac{N_{TTP}\left( t_{r} = t \right)}{N}.$$
The multidimensional Cohen's kappa (\(\kappa\)) is
$$\kappa = \frac{p_{1} - p_{2}}{1 - p_{2}}.$$
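A minimal sketch of this kappa computation from a 4 x 4 confusion table of TTP intervals (0–90, 90–150, 150–365 days, and no progression during follow-up); the counts are illustrative placeholders, not the study's Table 9:

import numpy as np

confusion = np.array([[5, 2, 1, 1],      # rows: observed interval, columns: predicted interval
                      [1, 4, 2, 1],
                      [0, 1, 3, 2],
                      [2, 1, 3, 25]])

N = confusion.sum()                                                   # 54 patients in this example
p1 = np.trace(confusion) / N                                          # observed agreement
p2 = np.sum(confusion.sum(axis=0) * confusion.sum(axis=1)) / N**2     # chance agreement
kappa = (p1 - p2) / (1 - p2)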
The data in the confusion table can be categorized into six different outcomes, as follows:
Progressive disease was clinically evidenced by imaging assessments, as well as predicted by the algorithm, at the same time interval (\(t_{r} = t_{s}\)).
The algorithm's simulated TTP was predicted to precede the observed TTP (\(t_{s} < t_{r}\)).
The algorithm's simulated TTP was predicted to occur later than the observed TTP (\(t_{s} > t_{r}\)).
Progressive disease was not clinically observed, but was predicted by the algorithm (\(t_{r} = 4;t_{s} = 1,2,3\)).
Progressive disease was clinically observed, but was not predicted by the algorithm (\(t_{r} = 1,2,3;t_{s} = 4\)).
Progressive disease was neither observed nor predicted by the algorithm, during the follow-up period (\(t_{r} = t_{s} = 4\)).
This section is divided into two parts. The first part describes the personalization algorithm and its development, while the second part shows the predictions of the personal TTP of the patients, obtained by using the personalization algorithm.
The personalization algorithm
First, we outline the personalization algorithm we have developed for predicting response to pembrolizumab in a patient with advanced melanoma:
Input personal baseline data.
Tumor burden from imaging scans.
Primary tumor information.
Construct a personalized model by inputting the personal clinical data in the algorithm's personalization functions and calculating the values of the personal parameters.
Input the calculated personal parameters in the personal model.
Simulate the personal model and extract the predicted tumor size, periodically, at predetermined time intervals, for example, every 3 months.
Determine disease state for each predicted tumor size, in conjunction with previously predicted tumor sizes and RECIST 1.1 criteria.
Determine the personal timing of progressive disease and the personal TTP.
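Putting the last steps of the outline together, a minimal sketch of how the predicted TTP can be read off the simulated tumor sizes at the scheduled assessment times, using the target-lesion progression rule of RECIST 1.1 (the helper name and the example numbers are illustrative):

def predicted_ttp(assessment_days, simulated_sod, baseline_sod, no_progression=float("inf")):
    # Return the first assessment day at which the simulated SOD meets the RECIST 1.1
    # progression criterion (>= 20% and >= 5 mm increase over the nadir).
    nadir = baseline_sod
    for day, sod_t in zip(assessment_days, simulated_sod):
        if sod_t >= 1.2 * nadir and sod_t - nadir >= 5:
            return day                                 # personal time to progression (days)
        nadir = min(nadir, sod_t)                      # nadir = smallest SOD recorded so far
    return no_progression                              # censored: no progression predicted

ttp = predicted_ttp([90, 180, 270, 365], [52.0, 45.0, 58.0, 70.0], baseline_sod=50.0)   # -> 270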
Algorithm development: retrieving personal model parameters and evaluating TTP in the training set
The development of the above algorithm is described hereafter.
In the first stage of algorithm development we verified that the clinical information we have is sufficient for the training of the algorithm. We found that all four response categories of RECIST 1.1 are represented in the collected clinical information of our patient cohort during the follow-up period: (i) complete response of the target lesions (e.g., Fig. 2a); (ii) shrinkage of the target lesions by more than 30% from baseline size (e.g., Fig. 2b); (iii) progression of the target lesions, indicated by an increase of ≥ 20% relative to the nadir (e.g., Fig. 2c); (iv) stability in the size of the target lesions, not meeting the aforementioned conditions of shrinkage or progression (e.g., Fig. 2d). This information ensures that the training of the algorithm will be comprehensive. In the next stage we employed the longitudinal tumor size evaluations for retrieving the personal model parameters. This was done by fitting the model to the SOD time series, calculated from the clinical data (Fig. 2).
Representative fitting results for patients whose target lesions completely disappeared under treatment with pembrolizumab (a), shrank by more than 30% from baseline size (b), increased by over 20% relative to the nadir measurement (c), or remained stable, as determined when the conditions for disease progression, partial response, and complete response were not met (d). The ranges of the personalization parameters used for the simulation are specified in Table 3. SOD sum of diameters
In order to estimate the goodness of fit of the models, we compared the clinically observed and the fitted tumor sizes of all the patients in the cohort (Fig. 3). Comparison of the absolute and log-scaled sizes yielded R2 = 0.94 and R2 = 0.96, respectively.
Fitting results of the model-simulated tumor size in the patients' cohort (N = 54) against the clinically observed tumor sizes. a Each point shows the fitted versus the clinically measured sum of diameters (SOD) of a patient, at the time an imaging assessment took place in the clinic. On the reference line the observed SOD equals the fitted value. The personalization parameter ranges used for the simulation are specified in Table 3, and the values of the other model parameters are summarized in Table 1. Numerical analyses and simulations were performed using the stiff ODE solver ode15s of Matlab R2016a (The MathWorks, UK). From the initial time of the simulation (t = 0) to the time of treatment initiation (t = t1), the model in Tsur et al. [39] was simulated, and from t1 until the end of the simulation period, the model in Eq. (1) was simulated. The effect of pembrolizumab on the immune system and tumor was implemented here through the parameters \(a_{pem}\) and \(b_{pem}\). b Fitted versus observed SOD on a log scale. Values of 0 were excluded from the dataset for the calculation of R-squared
We then compared the TTP derived from the fitting results to the clinically observed TTP, by counting the number of disease progression events in each one of four categories of time intervals, as described in the "Methods" section, and summarized in Table 5.
Table 5 Model-simulated versus clinically measured TTP
From the histogram of the fitted \(a_{pem}\) values, we learned that the distribution of this parameter in the patient population is approximately log-normal (Fig. 4). This implies that lower values are more frequently encountered than large ones. Thus, in order to reduce the bias in the prediction of this parameter, we applied a logarithmic transformation to the values of \(a_{pem}\).
Histogram of \(a_{pem}\) values, obtained from fitting of the mathematical model to the clinically observed tumor size. The initial range of \(a_{pem}\) for the fit is defined in Table 3. a Absolute values of \(a_{pem}\). b Transformed values of \(\ln(a_{pem})\)
Predictions of the personal models
As summarized in Tables 6 and 7, we found that the value of \(a_{pem}\) correlates most strongly with the baseline SOD (Table 6), and the value of \(\gamma_{mel}\) correlates most strongly with the Breslow thickness and the status of nodular melanoma (Table 7).
Table 6 Pearson correlations between the values of the model parameter \(\ln(a_{pem})\) and the clinical personal measures
Table 7 (a) Correlations between \(\gamma_{mel}\) and clinical measures, with multiple potential values. (b) Correlations between \(\gamma_{mel}\) and binary covariates
We calculated the R2 value of the parameter values derived by the k-NN algorithm versus the fitted ones, in order to estimate the goodness of fit. The value of R2 = 0.47, obtained for the baseline SOD, refers to the results of the LOO CV. The plot of the fitted \(\ln(a_{pem})\) value for each patient versus the k-NN algorithm-derived value is shown in Fig. 5.
Patient-specific values of \(\ln(a_{pem})\), as obtained from fitting the mathematical model to the data of each patient in the training set, versus the values of \(\ln(a_{pem})\) estimated by a Leave-One-Out cross-validation (LOO CV) of the k-NN algorithm. Each point represents the parameter values of one patient and the reference line satisfies equality between the fitted and regression-derived parameter values (see "Methods" section)
Using all the clinical factors listed in Table 4 to train and optimize a classification tree, and validating the classification by LOO CV, we found that the tree which most correctly classified the values of \(\gamma_{mel}\) was obtained from the data of Breslow thickness and status of nodular melanoma. The comparison between the classified and fitted values of \(\gamma_{mel}\), with the use of these two covariates, is summarized in Table 8.
Table 8 Confusion table for the classification of the model parameter, \(\gamma_{mel}\)
Based on the above results we constructed the personalization functions, estimating the values of \(a_{pem}\) and \(\gamma_{mel}\) of each patient in the validation set, from the baseline SOD for \(a_{pem}\), and Breslow tumor thickness, and status of nodular melanoma for \(\gamma_{mel}\) (Tables 6, 7). We completed the personalization algorithm by implementing in it the personalization functions.
Prediction of the TTP using the personalization algorithm
We compared the predictions obtained by the personalization algorithm with the clinically measured tumor sizes in all patients. We evaluated the goodness of fit of the algorithm-predicted and clinically-measured tumor size, as shown in Fig. 6.
Comparison between the sum of diameters (SOD), derived by the personalization algorithm, and the value measured from imaging assessments, at each clinically measured time point, for all patients, presented on a normal scale (a), and on a log scale (b). The reference line marks equality between the fitted and predicted SOD values. Values of 0 were excluded from the dataset for calculation of R-squared
From the predicted tumor size dynamics, we also predicted the TTP, and compared it to the value estimated according to the clinically assessed progression. Results are shown in Table 9. The evaluation of the Cohen's kappa, κ = 0.489, suggests a moderate agreement between the prediction and clinical data.
Table 9 Personal predictions of TTP
For the patients who had progressive disease according to our personalization algorithm, we compared the predicted TTP to the clinically observed one (Fig. 7). The results show moderate agreement of the predictions with the clinical observations (R2 = 0.505).
Comparison between the predicted time to progression (TTP) and the measured clinical TTP, including only the cases in which disease progression was determined clinically, as well as by the personalization algorithm. Points on the reference line satisfy equality between the observed and computationally derived TTP
Treatment with ICB has proven successful, as it produces a significant clinical benefit in a subset of patients. However, identification of the potentially responsive patients before treatment initiation still remains a challenge, and the availability of personal response predictors has been pointed out as an unmet clinical need [44,45,46,47]. Here we showed that the personalization algorithm we developed can serve as a virtual response predictor in the clinic, along with clinical information about baseline tumor size, Breslow thickness, and the status of nodular melanoma. Taking into account the low life expectancy of untreated patients with advanced melanoma, and the involved side effects and high immunotherapy costs [48], the ability to pre-select patients for these treatments can significantly improve the quality of life of the patients.
The personalization algorithm we developed enables predictions of the time to progression, as defined by RECIST 1.1. Nowadays, the first response assessment in the clinic takes place at around 3 months into the treatment. As many patients progress within these first 3 months [49,50,51], the algorithm predicting the TTP before treatment can save several months of administration of an incompatible and expensive drug. For patients who benefit from the treatment, the algorithm provides information on the duration of the response.
Prediction of the type and duration of response is a unique addition of this study to the knowledge gained from previously developed biomarkers for ICB. Several markers in the tumor microenvironment and peripheral blood are associated with response to ICB in patients with malignant melanoma [52]. However, there is no way to quantify the relationships between the biomarker levels and the expected response, as yet. For example, elevation of the baseline LDH level is associated with shorter overall survival (OS) of patients with malignant melanoma under anti-PD-1 treatments [53]. However, the survival time of individual patients cannot be predicted by this marker. In our study, clinical disease progression was observed in all patients who had an elevated LDH level before treatment onset and more than 10% increase of the LDH level on the first CT scan (11 out of 29, 38%). In contrast, disease progression occurred in only 4 out of 18 patients who initially had elevated LDH levels, but less than 10% LDH change from baseline on the first CT scan. Therefore, the change from baseline of LDH levels can serve to predict disease progression within the first year of ICB initiation, but for many patients, the prediction does not considerably precede the detection of progression by imaging scans. Another study reports that an increase in tumor burden of less than 20% from baseline, during 3 months into treatment with pembrolizumab, is associated with longer OS of patients with advanced melanoma [54]. However, we note the difficulty in using early increase in tumor load as a response predictor, as this increase can be detected only a while after the initiation of treatment, when patients may have already experienced disease progression. The ability to predict ICB treatment outcomes before treatment, by use of our suggested personalization algorithm, can be a significant contribution to the currently available methodologies for response evaluation.
Our results show that the Breslow thickness, the baseline tumor burden, and the status of nodular melanoma can serve as markers for TTP prediction under pembrolizumab, when integrated and processed by our personalization algorithm. We found that different values of Breslow thickness and status of nodular melanoma are associated with different rates of tumor growth. Breslow thickness has been known as a prognostic biomarker for melanoma [55, 56], and here we show that it has a predictive power. Using the baseline tumor burden as a potential biomarker is supported by Joseph et al. [57], who analyzed the relationships between baseline tumor burden and overall survival of 583 patients with advanced melanoma under pembrolizumab. In addition, the peripheral blood from patients with advanced melanoma has been analyzed, showing that response to pembrolizumab is associated with the ratio between the baseline tumor burden and the reinvigoration of effector CD8+ T cells [42].
Using a small patient cohort (54 patients) for its training, our personalization algorithm yields moderately accurate predictions. We believe that by increasing the size of the training set we will significantly improve the performance of the regression and classification we employed for identification of the individual model parameters. Yet, considering the limited clinical information and the simple mathematical model implemented at the core of the algorithm, the results are encouraging.
One of the major problems in medical biomathematics is its failure to propose algorithms that can be of aid in the medical practice. Specifically, the two significant hurdles to mathematical models of cancer growth becoming clinically useful, are that in most of the models the required input information cannot be extracted in a straightforward manner from data that are routinely collected in the clinics, and that in most cases, the output information is not instructive for the physician's decision-making process. Wishing to overcome these shortcomings, we developed our algorithm and tested it using data that are routinely collected in the clinics, namely, the sum of diameters (SOD) or sum of the longest diameters (SLD), as prescribed by the RECIST 1.1. In our case, we could increase the physical and mechanistic realism of the description of tumor growth by asking the radiologists to measure, with little additional effort, more informative tumor size parameters than SOD. But the current standards in the field involve longitudinal measurement of SOD, and as our goal commands, we wish to adjust our tools to the reality in the field, rather than developing an idealized solution.
By the same token, our discretization policy, inevitably, entails loss of information. Treating oncologists do not evaluate the patient's disease progression status continuously, but rather, every 2–4 months, using the costly imaging technology (CT/MRI). As stated above, our goal was to generate clinically relevant output. For that it would be sufficient to align the prediction of disease progression with the time of its effective substantiation by imaging. For this reason, the resolution of TTP predictions is as coarse as the planned CT/MRI scanning schedule. Still, it would be of a significant help to the doctor to know whether the patient is expected to progress early, or will have moderately long TTP, or a very long TTP, as evaluated by RECIST1.1. The discrete categories of TTP used in this study roughly correspond to these possibilities of response duration.
As one can note, most of the recruited patients are non-progressing (censored). Our approach is to use their longitudinal lesion sizes for model training and validation, so that they have the same weight as the progressing patients in the major part of the work. We then sorted the censored patients as a separate category, checking whether the model had not falsely predicted progression for them during the follow-up period. The alternative way for taking account of censored patients is to construct the survival curves, e.g., by the Kaplan–Meier method, and to use log-rank tests or Cox regression for analysis. The latter methodology would be more suitable if we wished to compare two different populations, and to compare between individuals over the whole patient group.
Model simplicity is a prerequisite for generating a beneficial algorithm, since it requires to evaluate only a small number of personal parameters. A more complex model would entail the evaluation of a relatively large number of clinical measurements in the patients for determining the personal models. It should be borne in mind, also, that our evaluation of disease progression was not required to be more sensitive than that of RECIST 1.1, which takes into account only significant changes in tumor load. Our simple model is well suited for the estimation of similarly rough changes in disease progression.
One of the limitations of the personalization algorithm developed here is that it uses the RECIST 1.1 criteria, which include the appearance of new lesions. This option was not evaluated in our algorithm and we determined disease progression only by the change in size of the target lesions between following imaging scans. Inspecting the clinical patient data, we noted that in about 50% of those in whom new lesions were detected, treatment by pembrolizumab was continued after detection, practically implying that often clinicians do not consider the new lesion criterion as progressive disease. This finding is in line with the recent understanding that formation of new lesions under immunotherapy does not necessarily indicate actual progressive disease [58, 59]. Indeed, in the recently developed immune-related RECIST (irRECIST) criteria, pertinent to immunotherapy, appearance of new lesions is not a criterion for progressive disease [54]. The indicated response is then "unconfirmed progressive disease", and validation is required in another imaging scan, at least 4 weeks later. Adaptation of our algorithm to the irRECIST criteria will be made upon clinical validation of these criteria as part of the clinical follow-up routine.
Future recommendations for improving the predictive power of our personalization algorithm include training by a larger dataset, as well as validation of the algorithm by clinical data from an independent dataset. Following improvements in the prediction accuracy, our algorithm can be used as a tool in selecting personal treatment. In addition, our innovative methodology can be adapted to other available immunotherapies, including anti-CTLA-4, anti-PD-1 combination, or other immunotherapies when becoming clinically available. Taken together, this study demonstrates that using computational algorithms for predicting the response to immunotherapy in patients with metastatic melanoma is feasible in the clinical realm.
Our results suggest that personalization of a mathematical mechanistic model by various clinical and molecular pretreatment measurements, can serve for predicting TTP in the clinical setting. Using the developed algorithm to predict the TTP before immunotherapy application can guide the physician decision-making, save several months of administration of an incompatible drug, and significantly improve the quality of life of the patients. Following validation by a new dataset of pembrolizumab-treated patients with advanced melanoma, our algorithm will serve as a tool in the decision-making process of treating physicians. In the future, our algorithm can be adapted to other available therapies, by adjustment of the mathematical mechanistic model, using pertinent clinical data.
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
ICB:
immune checkpoint blockers
CTLA-4:
cytotoxic T-lymphocyte antigen 4
PD-1:
programmed cell death 1
PD-L1:
programmed death-ligand 1
ADT:
androgen deprivation therapy
TTP:
time to disease progression
APCs:
antigen-presenting cells
TILs:
effector CD8+ tumor infiltrating lymphocytes
HMC:
Hadassah Medical Center
UMM:
University Medical Center Mannheim
PET-CT:
Positron Emission Tomography CT
RECIST 1.1:
Response Evaluation Criteria In Solid Tumors
SOD:
sum of diameters
LDH:
lactate dehydrogenase
LY%:
relative counts of blood lymphocytes
ROC:
receiver operating characteristic
k-NN:
K-Nearest Neighbors
LOO CV:
Leave-One-Out cross validation
OS:
overall survival
irRECIST:
immune-related RECIST
Siegel RL, Miller KD, Jemal A. Cancer statistics, 2018. CA Cancer J Clin. 2018;68:7–30.
Friedman RJ, Rigel DS, Kopf AW. Early detection of malignant melanoma: the role of physician examination and self-examination of the skin. CA Cancer J Clin. 1985;35:130–51.
Terushkin V, Halpern AC. Melanoma early detection. Hematol/Oncol Clin. 2009;23:481–500.
Schadendorf D, van Akkooi AC, Berking C, Griewank KG, Gutzmer R, Hauschild A, Stang A, Roesch A, Ugurel S. Melanoma. Lancet. 2018;392:971–84.
Ott PA, Hodi FS, Robert C. CTLA-4 and PD-1/PD-L1 blockade: new immunotherapeutic modalities with durable clinical benefit in melanoma patients. Clin Cancer Res. 2013;19:5300–9.
Pardoll DM. The blockade of immune checkpoints in cancer immunotherapy. Nat Rev Cancer. 2012;12:252–64.
Hodi FS, O'Day SJ, McDermott DF, Weber RW, Sosman JA, Haanen JB, Gonzalez R, Robert C, Schadendorf D, Hassel JC. Improved survival with ipilimumab in patients with metastatic melanoma. N Engl J Med. 2010;363:711–23.
Wolchok JD, Neyns B, Linette G, Negrier S, Lutzky J, Thomas L, Waterfield W, Schadendorf D, Smylie M, Guthrie T. Ipilimumab monotherapy in patients with pretreated advanced melanoma: a randomised, double-blind, multicentre, phase 2, dose-ranging study. Lancet Oncol. 2010;11:155–64.
Robert C, Schachter J, Long GV, Arance A, Grob JJ, Mortier L, Daud A, Carlino MS, McNeil C, Lotem M. Pembrolizumab versus ipilimumab in advanced melanoma. N Engl J Med. 2015;372(26):2521–32.
Schachter J, Ribas A, Long GV, Arance A, Grob J-J, Mortier L, Daud A, Carlino MS, McNeil C, Lotem M. Pembrolizumab versus ipilimumab for advanced melanoma: final overall survival results of a multicentre, randomised, open-label phase 3 study (KEYNOTE-006). Lancet. 2017;390:1853–62.
Larkin J, Chiarion-Sileni V, Gonzalez R, Grob JJ, Cowey CL, Lao CD, Schadendorf D, Dummer R, Smylie M, Rutkowski P. Combined nivolumab and ipilimumab or monotherapy in untreated melanoma. N Engl J Med. 2015;373:23–34.
Wolchok JD, Chiarion-Sileni V, Gonzalez R, Rutkowski P, Grob J-J, Cowey CL, Lao CD, Wagstaff J, Schadendorf D, Ferrucci PF. Overall survival with combined nivolumab and ipilimumab in advanced melanoma. N Engl J Med. 2017;377:1345–56.
Robert C, Ribas A, Hamid O, Daud A, Wolchok JD, Joshua AM, Hwu W-J, Weber JS, Gangadhar TC, Joseph RW. Durable complete response after discontinuation of pembrolizumab in patients with metastatic melanoma. J Clin Oncol. 2017;36(17):1668–74.
Wang Q, Gao J, Wu X. Pseudoprogression and hyperprogression after checkpoint blockade. Int Immunopharmacol. 2018;58:125–35.
Fusi A, Festino L, Botti G, Masucci G, Melero I, Lorigan P, Ascierto PA. PD-L1 expression as a potential predictive biomarker. Lancet Oncol. 2015;16:1285–7.
Sunshine J, Taube JM. PD-1/PD-L1 inhibitors. Curr Opin Pharmacol. 2015;23:32–8.
Weide B, Martens A, Hassel JC, Berking C, Postow MA, Bisschop K, Simeone E, Mangana J, Schilling B, Di Giacomo A-M. Baseline biomarkers for outcome of melanoma patients treated with pembrolizumab. Clin Cancer Res. 2016;22(22):5487–96.
Nosrati A, Tsai KK, Goldinger SM, Tumeh P, Grimes B, Loo K, Algazi AP, Nguyen-Kim TDL, Levesque M, Dummer R. Evaluation of clinicopathological factors in PD-1 response: derivation and validation of a prediction scale for response to PD-1 monotherapy. Br J Cancer. 2017;116:1141.
Dronca RS, Liu X, Harrington SM, Chen L, Cao S, Kottschade LA, McWilliams RR, Block MS, Nevala WK, Thompson MA. T cell Bim levels reflect responses to anti-PD-1 cancer therapy. JCI Insight. 2016;1:e86014.
Chen P-L, Roh W, Reuben A, Cooper ZA, Spencer CN, Prieto PA, Miller JP, Bassett RL, Gopalakrishnan V, Wani K. Analysis of immune signatures in longitudinal tumor samples yields insight into biomarkers of response and mechanisms of resistance to immune checkpoint blockade. Cancer Discov. 2016;6:827–37.
Jacquelot N, Roberti M, Enot D, Rusakiewicz S, Ternès N, Jegou S, Woods D, Sodré A, Hansen M, Meirow Y. Predictors of responses to immune checkpoint blockade in advanced melanoma. Nat Commun. 2017;8:592.
Agur Z, Halevi-Tobias K, Kogan Y, Shlagman O. Employing dynamical computational models for personalizing cancer immunotherapy. Expert Opin Biol Ther. 2016;16:1373–85.
Kogan Y, Halevi-Tobias K, Elishmereni M, Vuk-Pavlović S, Agur Z. Reconsidering the paradigm of cancer immunotherapy by computationally aided real-time personalization. Cancer Res. 2012;72:2218–27.
Elishmereni M, Kheifetz Y, Shukrun I, Bevan GH, Nandy D, McKenzie KM, Kohli M, Agur Z. Predicting time to castration resistance in hormone sensitive prostate cancer by a personalization algorithm based on a mechanistic model integrating patient data. Prostate. 2016;76:48–57.
Agur Z, Vuk-Pavlovic S. Mathematical modeling in immunotherapy of cancer: personalizing clinical trials. Mol Ther. 2012;20:1–2.
Agur Z, Vuk-Pavlovic S. Personalizing immunotherapy: balancing predictability and precision. Oncoimmunology. 2012;1:1169–71.
Barrio MM, Abes R, Colombo M, Pizzurro G, Boix C, Roberti MP, Gelize E, Rodriguez-Zubieta M, Mordoh J, Teillaud J-L. Human macrophages and dendritic cells can equally present MART-1 antigen to CD8+ T cells after phagocytosis of gamma-irradiated melanoma cells. PLoS ONE. 2012;7:e40311.
Von Euw EM, Barrio MM, Furman D, Bianchini M, Levy EM, Yee C, Li Y, Wainstok R, Mordoh J. Monocyte-derived dendritic cells loaded with a mixture of apoptotic/necrotic melanoma cells efficiently cross-present gp100 and MART-1 antigens to specific CD8+ T lymphocytes. J Transl Med. 2007;5:19.
Lee T-H, Cho Y-H, Lee M-G. Larger numbers of immature dendritic cells augment an anti-tumor effect against established murine melanoma cells. Biotechnol Lett. 2007;29:351–7.
de Pillis L, Gallegos A, Radunskaya A. A model of dendritic cell therapy for melanoma. Front Oncol. 2013;3:56.
Ludewig B, Krebs P, Junt T, Metters H, Ford NJ, Anderson RM, Bocharov G. Determining control parameters for dendritic cell-cytotoxic T lymphocyte interaction. Eur J Immunol. 2004;34:2407–18.
Bossi G, Gerry AB, Paston SJ, Sutton DH, Hassan NJ, Jakobsen BK. Examining the presentation of tumor-associated antigens on peptide-pulsed T2 cells. Oncoimmunology. 2013;2:e26840.
Taylor GP, Hall SE, Navarrete S, Michie CA, Davis R, Witkover AD, Rossor M, Nowak MA, Rudge P, Matutes E, et al. Effect of lamivudine on human T-cell leukemia virus type 1 (HTLV-1) DNA copy number, T-cell phenotype, and anti-tax cytotoxic T-cell frequency in patients with HTLV-1-associated myelopathy. J Virol. 1999;73:10289–95.
Carlson JA. Tumor doubling time of cutaneous melanoma and its metastasis. Am J Dermatopathol. 2003;25:291–9.
We thank Marina Kleiman (Optimata Ltd.) for participating in the clinical trial design and execution; Christoffer Gebhardt, and Mirko Gries (UMM) for sharing clinical knowledge; Christoffer Gebhardt, Mirko Gries, Beate Eisenecker, Christianne Schmidt, Carmen Weiler, and Yvonne Nowak (UMM), Tamar Hamburger and Hani Steinberg (HMC) for assisting in data collection.
This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Grant Agreement No. 642295 (MEL-PLEX).
Evgenia Avizov-Khodak
Present address: Radiology Department, Maccabi Healthcare Services, Yigal Alon Street 96, Tel Aviv, Israel
Désirée Vaeth
Present address: Netzwerk Radiologie, Kantonsspital St. Gallen, Rorschacher Strasse 95, 9007, St. Gallen, Switzerland
Optimata Ltd., Hate'ena St. 10, POB 282, 6099100, Bene-Ataroth, Israel
Neta Tsur, Yuri Kogan & Zvia Agur
Institute for Medical BioMathematics (IMBM), Hate'ena St. 10, 6099100, Bene-Ataroth, Israel
Yuri Kogan & Zvia Agur
Hadassah Hebrew University Medical Center, Kiryat Hadassah, PO Box 12000, 91120, Jerusalem, Israel
Evgenia Avizov-Khodak & Michal Lotem
Institute of Clinical Radiology and Nuclear Medicine, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Heidelberg, Germany
Désirée Vaeth & Nils Vogler
Medical Faculty Mannheim of Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
Jochen Utikal
German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
Neta Tsur
Yuri Kogan
Nils Vogler
Michal Lotem
Zvia Agur
NT developed the mathematical model described in this study, collected the clinical data at UMM and HMC, analyzed the data, developed the algorithm, and was a major contributor to the writing of the manuscript. ZA supervised the project, planned the algorithm, participated in its development, and wrote the paper. YK participated in supervising the project, planning and developing the algorithm, and revised the paper. EAK performed the tumor size assessments from imaging scans of the patients at HMC. DV and NV performed the tumor size assessments from imaging scans of the patients at UMM. JU and ML assisted in obtaining permission to access the patient data at UMM and HMC, respectively. In addition, both of them provided general support during the clinical data collection and contributed their clinical knowledge to the data analysis. All authors read and approved the final manuscript.
Correspondence to Zvia Agur.
The data collection in this study was retrospective. It was approved and registered as a retrospective clinical trial (ClinicalTrials.gov identifier: NCT02581228), according to the requirements of the Helsinki Committee at HMC (0403-15-HMO), and upon signing a secrecy declaration prior to data retrieval at UMM.
ZA holds 15% shares in Optimata. Other authors declare that they have no competing interests.
Tsur, N., Kogan, Y., Avizov-Khodak, E. et al. Predicting response to pembrolizumab in metastatic melanoma by a new personalization algorithm. J Transl Med 17, 338 (2019). https://doi.org/10.1186/s12967-019-2081-2
Immune checkpoint blocker
Prediction algorithm
Effector CD8+ T Lymphocytes
T cell exhaustion
Advanced melanoma
Immunobiology and immunotherapy
Differences in pulse rate variability with measurement site
Emi Yuda1 na1,
Kento Yamamoto2 na1,
Yutaka Yoshida3 &
Junichiro Hayano ORCID: orcid.org/0000-0002-5340-63254
Journal of Physiological Anthropology volume 39, Article number: 4 (2020) Cite this article
Recently, attempts have been made to use pulse rate variability (PRV) as a surrogate for heart rate variability (HRV). PRV, however, may be caused by fluctuations in the left ventricular pre-ejection period and pulse transit time in addition to HRV. We examined whether PRV differs not only from HRV but also depending on the measurement site.
In five healthy subjects, pulse waves were measured simultaneously on both wrists and both forearms together with a single-lead electrocardiogram (ECG) in the supine and sitting positions. Although the average pulse interval showed no significant difference from the average R-R interval in either position, PRV showed greater power for the low-frequency (LF) and high-frequency (HF) components and lower LF/HF than HRV. The deviations of PRV from HRV in the supine and sitting positions were 13.2% and 7.9% for LF power, 24.5% and 18.3% for HF power, and − 15.0% and − 30.2% for LF/HF, respectively. While the average pulse interval showed 0.8% and 0.5% inter-site variations among the four sites in the supine and sitting positions, respectively, the inter-site variations in PRV were 4.0% and 3.6% for LF power, 3.8% and 4.7% for HF power, and 18.0% and 17.5% for LF/HF, respectively.
These suggest that PRV shows not only systemic differences from HRV but also considerable inter-site variations.
With the spread of wearable pulse wave sensors in recent years, pulse wave signals are used not only to measure pulse rate but also to analyze pulse rate variability (PRV) as a surrogate for heart rate variability (HRV) [1, 2]. Under certain conditions, such as during sleep [3] and at rest [1, 2], the validity of PRV as a surrogate of HRV has been supported by some studies. However, there are many reports that the amplitude of the respiratory component of PRV obtained from the peripheral pulse wave is larger than that of HRV obtained from the electrocardiogram (ECG), and that the difference increases especially in the standing position [4, 5].
In contrast to HRV, which reflects variations in the ventricular myocardial electrical excitation cycle, PRV reflects variations in the intervals of the pressure waves generated by ventricular contractions and conducted through the arterial wall to the measurement site. Therefore, PRV includes, in addition to HRV, the beat-to-beat variations in the pre-ejection period and pulse transit time, which are affected by respiration and autonomic neural activity.
However, there are two questions here. Question 1 is whether the effects of these mechanisms on PRV are simply to modify (amplify or attenuate) certain components existing in HRV or to generate variability not existing in HRV. Question 2 is whether these mechanisms result in only systemic differences between PRV and HRV, or also in inter-site variations among PRVs measured at different sites.
Regarding Question 1, there is an interesting report from Constant et al. [6]. In 10 children with an implanted cardiac pacemaker, they recorded the finger pulse wave during ventricular pacing at a fixed rate of 80 bpm and performed PRV spectral analysis. Although the ECG R-R interval was constant and there was virtually no short-term HRV, a clear spectrum of respiratory fluctuation components was observed in PRV. Their study shows that respiration can "generate" respiratory variability in PRV even without HRV. Regarding Question 2, there have been only a few studies. Nilsson et al. [7] recorded pulse wave signals at the forearm, finger, forehead, wrist, and shoulder to determine suitable locations for monitoring heartbeat and breathing. They found the best coherence with heartbeat at the finger and that with respiration at the forearm, but they did not analyze the relationship between PRV and HRV or site-to-site differences among PRVs. Because pulse wave velocity decreases as arterial diameter decreases, slight differences in local vasculature can cause inter-site differences not only in pulse transit time but also in its variations. In the present study, we measured pulse waves at both wrists and both forearms simultaneously together with ECG and examined whether PRV differs not only from HRV but also depending on the measurement site.
Subjects were instructed not to move during the measurement to avoid any movement of the body affecting the sensing of the pulse wave and causing noise in the signal. Compliance with this instruction was confirmed with a built-in 3-axis acceleration sensor in the PPG sensor. At least during the periods from which the recorded data were used for analysis, the sensors did not detect any movement.
Figure 1 shows PRV measures obtained from pulse waves at the four different sites and HRV measures obtained from ECG in five healthy subjects. Although the mean pulse interval and mean R-R interval did not differ significantly, the LF and HF power were greater and LF/HF was smaller in PRV than in HRV in both the supine and sitting positions (Table 1).
Mean interbeat intervals and variability measures of ECG R-R and pulse intervals at four different sites in five subjects. Different colors represent different subjects, and data of each individual subject are connected by lines for visibility. HF high-frequency component, LF low-frequency component, Lf left forearm, Lw left wrist, Rf right forearm, Rw right wrist
Table 1 Comparison of pulse rate variability (PRV) and heart rate variability (HRV)
Table 2 shows the coefficient of deviation (CD) of PRV measures from the corresponding measures of R-R interval and the coefficient of variation (CV) of PRV measures among four different sites. Mean pulse interval showed small CDs (1.2 ± 2.6% and 0.1 ± 1.4% in the supine and sitting positions, respectively) and small CVs (0.8 ± 1.1% and 0.5 ± 0.3%). In contrast, the power of PRV frequency components showed large deviations from those of HRV (13.2 ± 14.4% and 7.9 ± 10.8% for LF power and 24.5 ± 30.6% and 18.3 ± 15.3% for HF power on average over four sites). The CDs of the HF power were greater than those of the LF power (P = 0.002 and < 0.0001). LF/HF of PRV showed even larger negative deviations from that of HRV (− 15.0 ± 38.7% and − 33.0 ± 29.6%). Similarly, mean pulse interval showed a small inter-site difference (CV, 0.8 ± 1.1% and 0.5 ± 0.3% in the supine and sitting positions). In contrast, the LF and HF power showed large CVs, and the CVs of LF/HF were even larger.
Table 2 Deviation from HRV measures and site-to-site variations of PRV measures
Table 3 shows the significance of the effects of laterality (left and right), position (forearm and wrist), posture (supine and sitting), and their interactions on the CDs, and the effects of posture on the CVs of PRV. Although the CD of the mean pulse interval was greater and the CD of LF/HF was smaller in the supine position than in the sitting position, the CDs of the other PRV measures showed no significant difference with laterality, position, or posture. The CVs of the PRV measures showed no significant difference with posture.
Table 3 Effects of laterality, position, and posture on CD and CV of PRV measures
To answer the question of whether the measures of PRV show only systemic differences from those of HRV or whether they also show differences among measurement sites, we measured the pulse wave at four different sites simultaneously and compared PRV and HRV measures. We observed that the mean pulse interval showed only 1.2% and 0.1% deviations from the mean R-R interval and 0.8% and 0.5% inter-site variations in the supine and sitting positions, respectively. In contrast, the power of PRV showed 13.2% and 7.9% deviations for the LF component and 24.5% and 18.3% deviations for the HF component from those of HRV in the respective positions. LF/HF showed even greater deviations of − 15.0% and − 33.0%. PRV measures also showed inter-site variations of 4.0% and 3.6% for the LF power, 3.8% and 4.7% for the HF power, and 18.0% and 17.5% for LF/HF in the supine and sitting positions. These observations suggest that there may be not only systemic differences between PRV and HRV measures, but also local differences in PRV measures depending on the pulse wave measurement site.
Theoretically, PRV is considered to be caused by fluctuations in cardiac autonomic nervous activity reflected in HRV, which are superimposed and modified by fluctuations in the pre-ejection period and pulse transit time due to mechanical and neurohumoral regulatory effects on the left ventricle and arterial system [1, 2, 5, 6]. PRV is observed even in patients with fixed-rate ventricular pacing without HRV [6], and the difference between PRV and HRV is known to be amplified in the standing position compared with the supine position [1, 2, 5]. On the other hand, it has been unclear whether these factors generate only systemic differences between PRV and HRV or whether they also cause site-to-site differences in PRV and, if so, what the magnitude of their contributions is. Our observations indicated that systemic factors may cause a 10–25% difference in the LF and HF power, and that local factors may cause a 4–5% variation among PRVs measured at the forearms and wrists of the left and right arms.
The mechanism by which the difference between PRV and HRV occurs requires consideration not only of physiological factors but also of technical issues. The pulse wave signal is a smooth curve without distinct fiducial points like ECG R waves. Thus, there is a technical limit to measuring the pulse intervals with the same accuracy as the R-R intervals. To overcome this problem, we did not measure the pulse intervals directly, but instead used the method of pulse frequency demodulation (PFDM) [3], which extracts the instantaneous pulse interval as a continuous function. In a previous study using simulation data, we demonstrated that PFDM faithfully detects changes in the pulse interval even when it changes greatly, and that it is robust to changes in pulse height and baseline fluctuations of the signal [3]. In addition, the extracted pulse interval function exhibits an almost flat frequency characteristic, in which the transfer gain in the range of 0 to 0.43 Hz falls between 0.97 and 1.02. Furthermore, since PFDM does not depend on the detection of peaks, the pulse interval estimation accuracy does not depend on the sampling frequency; the accuracy for a pulse wave signal sampled at 20 Hz is equivalent to that of R-R intervals from an ECG sampled at 125 Hz. PFDM does not provide pulse intervals of individual beats but directly estimates a continuous pulse interval function. This may appear to be a disadvantage of PFDM. In time series analysis such as spectral analysis, however, the beat-to-beat pulse interval time series is interpolated into a continuous function and resampled at equal time intervals; PFDM does not need this step. Therefore, this can be said to be an advantage rather than a disadvantage of PFDM.
Another technical issue of PRV is body motion artifact. Body movement may cause noise due to poor adhesion of the sensor to the skin or penetration of external light and may also cause artifacts in the pulse wave signal due to inertial movement of the peripheral blood volume. There are several studies on technologies against motion artifacts in pulse wave measurement. Fukushima et al. [8] improved the accuracy of pulse rate estimation with an algorithm that uses three-axis acceleration data to eliminate motion artifacts in reflective photoplethysmographic (PPG) sensors. Kagawa et al. [9] developed an array-type sensor that increases the level of pulse wave detection, detects noise based on the average value of the pulse wave spectrum amplitude, and excludes data at that time. Maeda et al. [10, 11] studied the optimal mounting pressure for reflective PPG sensors to reduce body motion artifacts. They showed that, among the upper arm, the forearm, and the wrist, measurement at the upper arm, which has the smallest acceleration due to arm swing, is least affected by body motion artifacts. In these studies, the optimal pulse wave measurement site was examined from the viewpoint of removing motion artifacts, but the effect of the measurement site on PRV was not examined. In this study, the pulse wave was measured at rest and no movement of the PPG sensors was detected by the built-in acceleration sensors. Thus, the effect of body motion artifacts was considered to be small. Nevertheless, we observed that the PRV obtained from the pulse wave still showed variation between measurement sites.
From the considerations above, the cause of the inter-site difference in PRV observed in the present study is unlikely to be due only to the technical problems of pulse interval measurement, but is more likely to be due to local physiological properties. It is not, however, simply explained by the distance from the heart to the measurement site, because the inter-site difference in CD was not explained by the laterality (left or right) or position (forearm or wrist) of measurement. The possible mechanisms may include anatomic differences in local vascular architecture and functional variations caused by local autonomous vasomotion [12].
In this study, the number of subjects and the distribution of age and gender were limited, and the measurement sites of the pulse wave were limited to the wrist and forearm. Therefore, the deviation of PRV from HRV and the variation between measurement sites observed in this study do not provide estimates of population values. To estimate the quantitative differences between PRV and HRV, and the proportion of contribution of systemic and local factors, we need to include pulse wave measurements at the fingers, earlobes, and face. Additionally, although we recorded pulse wave signals at four different sites and ECG simultaneously, the exact timing of the signals could not be aligned at the millisecond level because each signal was recorded on a separate device. Determining the exact mechanism that causes inter-site differences in PRV requires accurate spatiotemporal analysis of pulse wave signals with perfectly synchronized recordings at different sites.
We studied PRV obtained from pulse wave signals measured at four different sites and compared them with HRV obtained from ECG. We observed that PRV differed from HRV and also differed between sites. Our findings suggest that PRV is not only affected by systemic factors that cause differences from HRV but also influenced by local factors that cause site-to-site differences. These observations also suggest the importance of research on PRV-specific biological information not available from HRV, rather than the substitutability of PRV as a surrogate for HRV.
We studied five healthy subjects (age, 30 ± 7 years, two females), who gave their written informed consent to participate in this study. The protocol of this study was approved by the Research Ethics Committee of Nagoya City University Graduate School of Medical Sciences and Nagoya City University Hospital (No. 60-18-0093).
Pulse waves were recorded with a wearable bracelet-type reflective PPG sensor (APM02, Suzuken, Nagoya, Japan). The pulse wave was measured from the reflection intensity of green light directed onto the skin. In the sensor module, the pulse wave was digitized at 200 Hz and bandpass filtered from 0.6 to 4.0 Hz. The gains of the initial- and post-stage amplifiers were 1000× and 2000×, respectively. The signal-to-noise ratio was 1000. The output frequency of the final pulse wave signal was 32 Hz. The PPG sensor also incorporated a 3-axis accelerometer.
ECG was recorded with a bioelectric amplifier (Biotop Mini, East Medic Co., Ltd., Kanazawa, Japan) at 500 Hz with a bandpass filter between 10 and 1000 Hz.
Experiments were performed between 11:00 and 13:00 in a quiet room air-conditioned at 24 ± 3 °C. The pulse wave sensors were attached on the dorsal side of the wrist and forearm of both arms, and ECG electrodes with a CM5 lead were attached on the chest wall. Data were recorded in the supine and sitting positions for 10 min each. In the sitting position, subjects relaxed their arms in pronation and placed them on the desk. They were also instructed not to move during the measurement to avoid any movement of the body affecting the sensing of the pulse wave and causing noise in the signal.
Pulse interval was measured by the method of pulse frequency demodulation (PFDM) [3]. PFDM is a time series analysis method developed for assessing instantaneous pulse frequency continuously from pulse wave signal, which has no distinct fiducial point for measuring beat intervals. In this method, the pulse interval is not directly measured, but the pulse wave signal is regarded as a cosine function and its amplitude and instantaneous frequency (pulse rate) are determined by the complex demodulation method [13, 14]. The accuracy of instantaneous pulse frequency measurement by PFDM including the robustness to the fluctuations of baseline and pulse height has been reported elsewhere [3].
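As a rough illustration of the idea behind complex demodulation (a minimal sketch only, not the authors' exact PFDM implementation; the center frequency f0, filter order, and cutoff below are assumed values), the instantaneous pulse frequency can be estimated as follows:

import numpy as np
from scipy.signal import butter, filtfilt

def complex_demodulate(ppg, fs, f0=1.2, lp_cutoff=0.5):
    # ppg: band-pass filtered pulse wave signal; fs: sampling frequency (Hz)
    # f0: assumed carrier frequency near the mean pulse rate (Hz)
    t = np.arange(len(ppg)) / fs
    # Shift the pulse component of the spectrum down to around 0 Hz
    mixed = ppg * np.exp(-2j * np.pi * f0 * t)
    # Low-pass filter the real and imaginary parts to keep the slow complex envelope
    b, a = butter(4, lp_cutoff / (fs / 2), btype="low")
    envelope = filtfilt(b, a, mixed.real) + 1j * filtfilt(b, a, mixed.imag)
    # Instantaneous frequency = carrier + derivative of the unwrapped phase / (2*pi)
    phase = np.unwrap(np.angle(envelope))
    inst_freq = f0 + np.gradient(phase, 1.0 / fs) / (2.0 * np.pi)
    return inst_freq, 1.0 / inst_freq  # instantaneous pulse frequency (Hz) and interval (s)

The continuous pulse interval function returned here corresponds conceptually to the output that is converted and resampled in the next step.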
In the present study, the 10-min continuous pulse frequency data obtained from the four measurement sites in each body position were converted into continuous pulse interval data and resampled at equidistant time intervals so that 1024-point pulse interval time series data were obtained.
From the 10-min ECG data, beat-to-beat R-R interval data were obtained by a fast peak detection algorithm that determined the temporal positions of all QRS complexes [15]. The results of QRS complex detection were reviewed with the ECG waveforms displayed on a computer screen together with the markers of the detected R-wave positions. All errors of QRS detection were edited, and non-sinus beats (QRS complexes without a preceding normal P wave in the range of 0.12–0.20 s) were marked as such. R-R interval data were constructed using only intervals comprising consecutive sinus-rhythm beats, interpolated with a step function, and resampled so that 1024-point R-R interval time series data were obtained.
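A minimal sketch of this step-function interpolation and resampling step (assuming the R-wave times are already available in seconds; the record duration and grid length follow the description above):

import numpy as np

def resample_rr(r_peak_times, duration_s=600.0, n_points=1024):
    # r_peak_times: times (s) of detected sinus R waves within the 10-min record
    rr = np.diff(r_peak_times)            # beat-to-beat R-R intervals (s)
    rr_times = r_peak_times[1:]           # each interval assigned to its ending beat
    grid = np.linspace(0.0, duration_s, n_points, endpoint=False)
    # Step-function (previous-value) interpolation onto the equidistant grid
    idx = np.searchsorted(rr_times, grid, side="right") - 1
    idx = np.clip(idx, 0, len(rr) - 1)
    return rr[idx]                        # 1024-point R-R interval series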
After calculating the mean intervals for 10 min, a Hanning window was applied to the pulse and R-R interval data and fast Fourier transformation was performed. From the power spectrum of each data set, the powers of the low-frequency (LF, 0.04–0.15 Hz) and high-frequency (HF, 0.15–0.45 Hz) components and the LF-to-HF ratio (LF/HF) were computed.
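This windowed FFT step can be sketched as follows (an illustration only, assuming a 1024-point series resampled over the 10-min record; the window normalization shown is one common choice and not necessarily the authors'):

import numpy as np

def lf_hf_power(intervals, duration_s=600.0):
    # intervals: 1024-point equidistantly resampled interval series (s)
    x = np.asarray(intervals, dtype=float)
    n = len(x)
    fs = n / duration_s                          # effective resampling frequency (~1.7 Hz)
    w = np.hanning(n)
    xw = (x - x.mean()) * w                      # remove mean, apply Hanning window
    spec = np.fft.rfft(xw)
    freq = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = (np.abs(spec) ** 2) / (fs * np.sum(w ** 2))   # one-sided PSD, window-corrected
    psd[1:-1] *= 2.0
    df = freq[1] - freq[0]
    lf = psd[(freq >= 0.04) & (freq < 0.15)].sum() * df  # LF power
    hf = psd[(freq >= 0.15) & (freq < 0.45)].sum() * df  # HF power
    return lf, hf, lf / hf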
The differences in measures between pulse interval and R-R interval were evaluated by ANOVA with SAS Mixed procedure (SAS Institute, Cary, NC, USA) with source signal (pulse interval or R-R interval) and posture (supine or sitting) as fixed effects and subject as random effect.
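The mixed-model ANOVA described above was run with SAS PROC MIXED; a rough analogue in Python (a sketch only, with synthetic data and illustrative column names such as subject, source, posture, and ln_hf that are not the authors') could look like:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per subject x signal source x posture,
# with ln_hf = natural-log HF power (values below are random placeholders).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(5), 4),
    "source":  np.tile(["PRV", "HRV"], 10),
    "posture": np.tile(np.repeat(["supine", "sitting"], 2), 5),
    "ln_hf":   rng.normal(6.0, 0.5, 20),
})
# Source signal and posture as fixed effects, subject as random effect
# (an interaction term could be added with C(source) * C(posture)).
model = smf.mixedlm("ln_hf ~ C(source) + C(posture)", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())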
To evaluate the effects of laterality (left or right), position (forearm or wrist), and posture (supine or sitting) on the magnitude of deviation of pulse interval measures from the corresponding measures of R-R interval, we introduced the CD defined with the following equation:
$$ CD\left(\%\right)=100\times \frac{x-y}{y} $$
where x represents a measure of pulse intervals and y represents the corresponding measure of R-R intervals. To evaluate the effects of posture (supine or sitting) on the variations in the measures of pulse intervals among four sites, we introduced the CV defined with the following equation:
$$ CV\left(\%\right)=100\times \frac{SD}{E} $$
where E and SD represent the average and standard deviation of the measure across the four sites.
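In code form, these two indices amount to the following (a trivial sketch; whether the population or sample standard deviation was used is not stated above, so the population form here is an assumption):

import numpy as np

def cd_percent(prv_value, hrv_value):
    # Coefficient of deviation of a PRV measure (x) from the corresponding HRV measure (y)
    return 100.0 * (prv_value - hrv_value) / hrv_value

def cv_percent(site_values):
    # Coefficient of variation of a PRV measure across the four measurement sites
    x = np.asarray(site_values, dtype=float)
    return 100.0 * x.std(ddof=0) / x.mean()   # ddof=0: population SD (assumed)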
The effects of laterality, position, posture, and their interactions on CD and the effect of posture on CV were evaluated by ANOVA with SAS Mixed procedure. In these analyses, the power of frequency components was converted into natural logarithmic values. P < 0.05 was used as the criterion of statistical significance.
The datasets used for the current study are available from the corresponding author on reasonable request.
Lu S, Zhao H, Ju K, Shin K, Lee M, Shelley K, Chon KH. Can photoplethysmography variability serve as an alternative approach to obtain heart rate variability information? J Clin Monit Comput. 2008;22(1):23–9.
Gil E, Orini M, Bailon R, Vergara JM, Mainardi L, Laguna P. Photoplethysmography pulse rate variability as a surrogate measurement of heart rate variability during non-stationary conditions. Physiol Meas. 2010;31(9):1271–90.
Hayano J, Barros AK, Kamiya A, Ohte N, Yasuma F. Assessment of pulse rate variability by the method of pulse frequency demodulation. Biomed Eng Online. 2005;4:62.
Takada M, Ebara T, Sakai Y. The acceleration plethysmography system as a new physiological technology for evaluating autonomic modulations. Health Eval Promot. 2008;35(4):373–7.
Schafer A, Vagedes J. How accurate is pulse rate variability as an estimate of heart rate variability? A review on studies comparing photoplethysmographic technology with an electrocardiogram. Int J Cardiol. 2013;166(1):15–29.
Constant I, Laude D, Murat I, Elghozi JL. Pulse rate variability is not a surrogate for heart rate variability. Clin Sci (Lond). 1999;97(4):391–7.
Nilsson L, Goscinski T, Kalman S, Lindberg LG, Johansson A. Combined photoplethysmographic monitoring of respiration rate and pulse: a comparison between different measurement sites in spontaneously breathing subjects. Acta Anaesthesiol Scand. 2007;51(9):1250–7.
Fukushima H, Kawanaka H, Oguri K. Estimating heart rate using wrist-type photoplethysmography sensor while running. IEICE Tech Rep. 2012;MBE2012-10:49–54.
Kagawa T, Nakajima N. Reliable wristwatch-type pulse sensor which excludes noise caused by the movement of human body. IEICE Transac Inform Syst (Jap Ed). 2013;J96-D-3:743–52.
Maeda Y, Sekine M, Tamura T, Suzuki T, Kameyama K. Relationship between motion artifacts and contact pressure to photoplethysmography. Seitai Ikogaku. 2012;50(1):78–83.
Maeda Y, Sekine M, Tamura T, Suzuki T, Kameyama K. Comparison of measurement sites and light sources in photoplethysmography during walking. Seitai Ikogaku. 2011;49(1):132–8.
Nava E, Llorens S. The local regulation of vascular function: from an inside-outside to an outside-inside model. Front Physiol. 2019;10:729.
Hayano J, Taylor JA, Yamada A, Mukai S, Hori R, Asakawa T, Yokoyama K, Watanabe Y, Takata K, Fujinami T. Continuous assessment of hemodynamic control by complex demodulation of cardiovascular variability. Am J Physiol. 1993;264:H1229–H38.
Hayano J, Taylor JA, Mukai S, Okada A, Watanabe Y, Takata K, Fujinami T. Assessment of frequency shifts in R-R interval variability and respiration with complex demodulation. J Appl Physiol. 1994;77:2879–88.
Engelse WAH, Zeelenberg C. A single scan algorithm for QRS detection and feature extraction. Proc IEEE Comp Cardiol. 1979;6:37–42.
Emi Yuda and Kento Yamamoto contributed equally to this work.
Tohoku University Graduate School of Engineering, Aoba 6-6-05 Aramaki Aoba-ku, Sendai, 980-8759, Japan
Emi Yuda
University of Tsukuba Graduate School of Comprehensive Human Sciences, 1-1-1 Tennodai, Tsukuba, Ibaraki, 305-8577, Japan
Kento Yamamoto
Nagoya City University Graduate School of Design and Architecture, Kita Chikusa 2-1-10 Chikusa-ku, Nagoya, 464-0083, Japan
Yutaka Yoshida
Department of Medical Education, Nagoya City University Graduate School of Medical Sciences, 1 Kawasumi Mizuho-cho Mizuho-ku, Nagoya, 467-8601, Japan
Junichiro Hayano
EY participated in the concept/design, data interpretation, critical revision of the article, and approval of the article. KY took part in the concept/design, data collection, data analysis, drafting of the article, and approval of the article. YY developed the software and contributed in the data collection, data analysis, and approval of the article. JH did the concept/design, data interpretation, statistics, drafting of the article, and approval of the article.
Correspondence to Junichiro Hayano.
All subjects in both studies gave their written informed consent to participate in this study. The study was performed according to the protocol approved by the Ethics Review Committee of Nagoya City University Graduate School of Medical Sciences and Nagoya City University Hospital (approved No. 60-18-0093).
Yuda, E., Yamamoto, K., Yoshida, Y. et al. Differences in pulse rate variability with measurement site. J Physiol Anthropol 39, 4 (2020). https://doi.org/10.1186/s40101-020-0214-1
Pulse rate variability
Pulse wave
Wearable sensor
Journal of Shipping and Trade
Assessment of port efficiency within Latin America
Kahuina Miller ORCID: orcid.org/0000-0002-0623-230X1 &
Tetsuro Hyodo1
Journal of Shipping and Trade volume 7, Article number: 4 (2022) Cite this article
The Panama Canal expansion (PCE) has influenced the development of ports within the Latin America and the Caribbean (LAC) region, intending to capitalise on economic opportunities through seaborne trade. Examining port performance is essential to ascertain the PCE impact on port efficiency within the LAC region. Stochastic frontier analysis (SFA) was used to determine the technical efficiency of the 19 major ports within the LAC from 2010 to 2018. The results indicate that, among the four (4) port performance indicators (berth length, port area, the number of cranes (ship-to-shore (STS) gantry and mobile), and the number of berths), the number of STS gantry cranes and berth length had the largest and most significant impact. Some ports with high technical efficiency experienced TEU losses despite port infrastructural development and privatization. The findings also revealed that the increased competition among regional and US East and Gulf Coast ports has negatively impacted some LAC ports' TEU volumes due to port proximity. The dynamism of the maritime sector, especially containerization, requires ports to implement value-added services and logistics centers in tandem with port performance indicators to remain sustainable and competitive in the maritime industry.
The evolution in supply chain and logistics models has caused container terminals to rethink their logistics processes. The concept of ports and their functions has evolved throughout the decades. In the nineteenth and twentieth centuries, ports tended to be instruments of the state, and port access was deemed to control markets. As a result, there was minimal competition between ports, and port-related costs were insignificant compared to ocean and inland transport costs, resulting in a lack of initiative to improve port efficiency (PE). Currently, ports are competing globally and reaping tremendous gains from ocean transportation and improvements in logistics. This drive has made the port sector focus on improving PE, lowering cargo throughput handling costs, and providing value-added services catering to other components of the global distribution network (Talley 2017; Notteboom et al. 2021). Port activity and seaborne trade are often associated with positive socio-economic effects, such as GDP and employment growth (Nogué-Algueró 2019; Notteboom et al. 2021; Munim and Schramm 2018; Rodrigue 2020; Talley 2006, 2017). In addition, ports are drivers of urban and regional economic growth, which is a function of port productivity (Lonza and Marolda 2016; Munim and Schramm 2018; Talley 2017; Shetty and Dwarakish 2018).
Port performance indicators (PPIs) are simply defined as measured aspects of a port's operation used to maximize profitability and meet economic objectives (UNCTAD 2016). Hence, a cost-effective port must achieve optimum and technically efficient (TE) throughput to meet its goals (Shetty and Dwarakish 2018; Talley 2006). A port performance measurement depends on several PPIs that affect regional competitiveness and optimum throughput. These factors may vary depending on the port location and region; however, the essential PPIs are berthing capacity, storage capacity, loading/unloading equipment, floor size, and the number of gate lanes (Melalla et al. 2016). Nevertheless, the standard measurement of port performance is related to several factors such as vessel dwell time (DT), loading/unloading of cargo, quality of storage, and inland transport (Shetty and Dwarakish 2018). Traditionally, port performance was assessed by actual throughput and optimum service levels, where the optimum throughput is the maximum (TE) throughput that the port can handle under certain conditions (Talley 2006).
Several authors agree that PPIs are necessary for rational decisions and precise performance measurement. These PPIs reflect port activities that determine overall port performance (Talley 2006; Shetty and Dwarakish 2018; UNCTAD 2016; Munim and Schramm 2018). Port activity can be evaluated using container traffic, voyage productivity, container dwell time, berth area, wharf entrances, departure gates, and port channel (depth of channel) (Talley 2017; Suárez-Alemán et al. 2016; Figueiredo de Oliveira and Cariou 2015). On the other hand, port performance can be affected by both endogenous and exogenous factors. Endogenous factors involve port affairs originating from the public and private sectors, such as administration and management inefficiencies. Exogenous factors refer to the shipping and logistics industries and trade economies directly impacting port activities (CEPAL 2019).
The geographical location of ports can also influence port performance. The changing geography of seaports is impacted by technical constraints such as the port users, intermodal connectivity, and maritime shipping networks (Notteboom et al. 2021).
Asian ports (Singapore, Tianjin, Yokohama, Busan, and Nhava Sheva) have the highest global port performance and rankings, while African ports (Lagos, Durban, and TangerMed) have displayed mixed trends (UNCTAD 2020; World Bank 2019); most developing countries have shown significant advances in port performance and TE (Sarriera et al. 2015; UNCTAD 2020). Container throughput at Latin America and the Caribbean (LAC) regional ports increased from 17 million TEUs in 2000 to 53.4 million TEUs in 2018, representing 6.6% of global throughput (UNCTAD 2019). The top fifteen ports of LAC, shown in Table 1, have also demonstrated sustained positive container throughput growth (UNCTAD 2020; World Bank 2020).
Table 1 Number of terminals by location and specialization.
The Panama Canal (PC) has played a vital role in LAC's port infrastructural developments and transport logistics improvements. The Panama Canal expansion (PCE) has further improved PE among regional ports since the advent of Neo-Panamax ships (Sarriera et al. 2015; Suárez-Alemán et al. 2016; Figueiredo de Oliveira and Cariou 2015). Port infrastructural developments involve deepening the water channel, acquiring neo-Panamax ship-to-shore (STS) cranes and post-Panamax cranes, and port expansion to construct berths and terminals. These developments have fuelled competition within the region, where US Gulf and East Coast ports compete for container traffic. This increasing competition means that LAC ports will have to display improving levels of TE to be competitive while maintaining optimum service to satisfy economic objectives and maximize profits (Sarriera et al. 2015; Rodrigue and Notteboom 2021; Talley 2006, 2017). However, it is also essential to determine which PPIs are most significant to port productivity within the LAC region and to ascertain whether regional ports have experienced improvements in port performance and TE during the post-PCE era. A port's productivity depends on the type of PPIs that need to be measured. The individual performance of each port is vividly measured by the output increase in container throughput (Pallis and Rodrique 2021).
This research seeks to investigate the effect of the Panama Canal expansion (PCE) on technical efficiency (TE) for LAC ports during the pre- and post-PCE era among 19 regional ports that account for over 85% of container throughput (TEUs), using stochastic frontier analysis (SFA). Our objectives focus on determining the port performance indicators (PPIs) necessary to improve LAC regional ports' productivity and efficiency. This study aims to contribute to the body of academic research regarding the Panama Canal expansion (PCE) impact on regional port efficiency (PE) by analyzing technical efficiency (TE) during the pre- and post-PCE era. The rest of the paper is organized as follows: the introduction in the first section, the "Literature review" section comprises the literature review, the "Methodology" section outlines the methodology for the empirical analysis employed, the "Results" section presents the results, the "Discussion" section discusses the findings from the results and the limitations, and the "Conclusion" section concludes with the key takeaways from our analysis and results.
In this section, we summarize the existing studies in three research areas: (i) the impact of port development on economic growth, (ii) the relationship between dwell time (DT) and port productivity, and (iii) port performance indicator and port efficiency.
Port development and economic growth
Ports are harbor areas where marine terminal facilities transfer cargo and passengers between ships and land transportation (Rodrigue and Notteboom 2021). Talley (2017) referred to ports as the engine for economic development. Thus, port development is a key driver of economic growth in a rapidly changing competitive market. Munim and Schramm (2018) studied the impact of port infrastructure and logistics performance on economic growth. A structural equation model (SEM) provided empirical evidence for this relationship among 91 countries from 2010 to 2014. The findings revealed that it is of utmost importance for developing countries to continuously improve port infrastructure and logistics to achieve higher yields in economic growth. Mudronja et al. (2020) analyzed the effects of seaports on regional growth. Endogenous growth theory based on research and development (R&D) was used for a sample of 107 ports within the European Union (EU) from 2005 to 2015. The findings revealed that seaports significantly contributed to economic growth within the EU region. The results also showed a close relationship between investment in transportation infrastructure and economic growth. On the other hand, not all port development and productivity contribute to local economic growth. Jung (2011) studied the economic contribution of ports to the local economies in Korea. Content analysis was conducted on port cities, and input–output linkages of ports were investigated. Empirical data on port throughput and economic indicators were used to find the relationship between ports and the financial performance of major cities in Korea. The results revealed that readily available port services do not guarantee economic success for port cities; therefore, local economies were not benefiting from nearby ports. Consequently, infrastructural expansion does not contribute to productivity in all cases. For example, Herrera and Pang (2008) studied the efficiency of the infrastructure of container ports. They used non-parametric methods to estimate efficiency frontiers for 86 ports globally. The results revealed that most ports in developing countries could reduce inefficiency by increasing the scale of operation. However, 33% of these ports can also reduce inefficiency by contracting the scale of operation.
Dwell time and port productivity
Port dwell time (DT) is the amount of time a cargo or ship spends within a port (Rodrigue and Notteboom 2021). It is also an indication of the efficiency level of a seaport (Notteboom et al. 2021). DT impacts port productivity and efficiency; thus, reducing DT will improve port productivity. Port productivity is frequently used to measure and compare a firm's performance as a ratio of output over input, while PE analyses the ability of a port to obtain the maximum result under a given amount of input (Suarez-Aleman et al. 2016; Talley 2017). Several authors have studied the relationship between DT and port productivity. Shetty and Dwarakish (2018) reviewed the relationship between performance parameters and a port's productivity. PPI data were retrieved from the New Mangalore port from 1990 to 2015. Results revealed a strong negative correlation of the idle time at berth and the turnaround time of a vessel with the port's productivity. Aminatou et al. (2018) studied the impact of long cargo DT on port performance. A shipment-level analysis was conducted using original and extensive data on container imports in the Port of Douala, Cameroon. They investigated why containers stay at berth for an average exceeding two weeks. Their findings revealed that internal factors, such as the logistics performance of consignees, port operations, and the efficiency of customs clearance operations, and external factors, such as customs procedures, shippers, and shipping lines, were the main contributors to long DT. Hassan et al. (2017) analyzed the DT of containers at container terminals in Indonesia. A Root Cause Analysis and Problem Tree framework was used to analyze operational data and interviews. The results from the simulation revealed that container handling equipment had a significant impact on DT. Findings also revealed that most of the DT was attributable to the prolonged stay of containers at the terminal yard.
Understanding and resolving the root cause of long DT at port terminals are essential for improving port productivity and efficiency. Furthermore, predicting the container dwell time is vital for enhancing port operations. According to PortStrategy (2020), the German container terminals will predict DT by implementing a new terminal operation system (TOS) based on machine-learning technology. This system will improve container stacking and optimize pick-up handling.
Figure 1 shows the median time spent in port for container ships per LAC country. This DT indicates the overall port productivity of the country. For example, Panama and Colombia have the shortest median times despite handling high vessel traffic, at 0.66 days for 3883 vessels and 0.6 days for 3689 vessels, respectively. On the other hand, Argentina has the highest median time in port, at 1.46 days for 1104 vessels. DT can also indicate the efficiency of a port's processes and infrastructure (Shetty and Dwarakish 2018).
Source: World Bank (2020)
Median time (days) and number of vessels (No.) for leading container ports (country) within LAC.
Port performance indicators (PPIs) and port efficiency (PE)
Port performance indicators (PPIs)
PPIs are used to measure various aspects of a port's operation. The weight of these indicators may vary based on location, throughput volumes, nature of cargoes, port infrastructure, equipment, and facilities (Melalla et al. 2016; Talley 2017). These indicators measure a port's performance by monitoring activities, checking their efficiency, and comparing the present with past performance (Shetty and Dwarakish 2018; Notteboom et al. 2021). Port performances require a set of measures related to vessel dwell time, cargo throughput volumes, berth area, harbor depth, quality storage, and inland transport (Shetty and Dwarakish 2018). However, not all measurements are related to a port's physical infrastructure.
Langen et al. (2007) studied the feasibility of applying performance indicators from the airport and business industries to the port sector. New indicators such as service variability and average time to deliver cargo could potentially measure port performance. Furthermore, they analyzed performance indicators in other economic and spatial entities such as airports, regional economies, and business parks. The results revealed that these new PPIs would be useful for the port industry.
Port efficiency (PE)
PE analyses the ability of a port to obtain the maximum output under a given amount of inputs. Therefore, gains in efficiency represent an improvement in performance closer to the optimum (Suarez-Aleman et al. 2016). PE is a key component of port performance (Notteboom et al. 2021). Several authors have studied the effects of PE on transportation cost, trade, port competition, and socio-economic issues.
Serebrisky et al. (2016) explored the drivers of PE in LAC. A stochastic frontier model was used to evaluate the TE of container ports within LAC, using data from 1999 to 2009 for 63 ports covering container throughput, port terminal area, berth length, and the number of available cranes. The findings revealed an overall improvement in the average TE of ports within the region, from 52 to 64%. Furthermore, the results showed a positive and strong correlation between TE and private port operation.
Pérez et al. (2016) analyzed the development of major container terminals within LAC. The paper's main objective was to investigate factors that influenced container port inefficiency under inter-port and intra-port competition. A stochastic production frontier was used for this analysis for all LAC ports from 2000 to 2010. The results revealed that PE within the LAC has positively evolved despite the economic crisis, whereby container terminals located in Mercosur countries with three or four container terminals were more efficient than transshipment ports within the region. Interestingly, transshipment ports were less efficient than other types of ports.
Merk and Dang (2012) studied the global PE for container and bulk cargo. Data envelopment analysis (DEA) methodology was used to find the overall efficiency score of 63 of the largest international container ports. The findings revealed that ports with noticeable increases in TE showed significant improvement in PE. Also, promoting port policies to raise throughput levels was essential for improving production scale inefficiencies. However, they also found production scale inefficiency increases whenever a port throughput level is below or above optimum operating terminal capacity. This inefficiency was predominantly found for ports that handled crude oil and iron ore, suggesting that efficiency was affected by exogenous factors relating to traffic flow.
Blonigen and Wilson (2007) studied PE and trade flows. The gravity trade model was used to analyze US imports and associated import costs, yielding estimates across ports, products, and time. The results revealed that PE significantly increases trade volumes.
Clark et al. (2004) examined shipping costs to the United States using 300,000 observations per year on the shipments of products accumulated for various global ports. They found that PE was an essential element of shipping costs. Enhancing PE from the 25th to the 75th percentile reduced shipping costs by 12%. Overall, their research revealed that a port's inefficiency increased handling and shipping costs.
Figueiredo de Oliveira and Cariou (2015) studied the effect of competition on container port (in)efficiencies. Using a truncated regression with a bootstrapping model for 200 container ports for the period 2007 to 2010, they investigated the impact of competition on container PE scores at the regional, local, and global levels. The results revealed that the effect of competition intensity on PE varies with the distance range considered: local (less than 300 km), regional (400 to 800 km), and global (more than 800 km). Estimates also show a tendency for ports that invested from 2007 to 2010 to experience a general decrease in efficiency scores, which could be explained by the time lag between investment and its effect.
Tongzon and Heng (2005) examined port privatization, efficiency, and competitiveness. They also investigated the determinants of port competitiveness using principal component analysis and a linear regression model among international container terminals. The results of the study revealed that private sector participation in the port industry could improve port efficiency, thereby increasing port competitiveness.
The efficiency of ports can be affected by endogenous and exogenous factors. Several authors have extensively studied the link between PE and corruption and socio-economic issues. Suarez-Aleman et al. (2016) examined the drivers of productivity and efficiency changes for ports among developing regions, using data from the period 2000 to 2010. The results revealed that PE for developing regions improved, increasing from 51 to 61% in 2010. The analysis indicated that PE in developing countries could be improved if public sector corruption were reduced, liner shipping connectivity improved, and multimodal connectivity among ports increased.
Several authors' studies revealed a positive link between port productivity and economic growth (Mudronja et al. 2020; Munim and Schramm 2018; Talley 2006). Furthermore, most research revealed that PE positively impacts trade volumes, freight transport, shipping cost, and DT (Shetty and Dwarakish 2018; Aminatou et al. 2018; Hassan et al. 2017; PortStrategy 2020). The authors also connect exogenous and endogenous factors, such as corruption and socio-economic issues, negatively to PE (Serebrisky et al. 2016; Pérez et al. 2016; Merk and Dang 2012). However, little research analyses the PCE's influence on PE in the LAC region. The SFA model will address this research gap by determining the most significant PPIs for PE and regional competitiveness.
LAC profile
The LAC region is a diverse economy that is mainly export-driven. This region comprises thirty-three (33) countries that are divided into three (3) sub-regions: South America, Central America, and the Caribbean. Figure 2 shows the main sub-regions of Central America, the Caribbean, the east coast of South America (ECSA), Mexico (both coasts), the north coast of South America (NCSA), and the west coast of South America (WCSA).
Source: Wilmsmeier and Monios (2016)
Map shows Latin America's and the Caribbean port system (TEU).
LAC port system
The rapid increase in global container trade in the past two decades has significantly influenced the LAC region's port geography (Wilmsmeier and Monios 2016). The LAC port system can be classified by territory and coastline into Central America (split by East and West coast), South America (split by East, West, and North coast), and the Caribbean. As shown in Table 1 and Fig. 2, the LAC region has 575 terminals on the eastern coast of South America, representing 38% of the regional total, while 390 terminals are located on the western coast of South America, representing 25.7% of the regional total (Wilmsmeier and Monios 2016). In addition, the Caribbean has 345 terminals, representing 22.8% of the regional total, and Central America has 205 terminals, representing 13.8% (CEPAL 2020). Table 1 shows the number of terminals in the region: South America (East coast and West coast) has 49 container terminals, the Caribbean has 27 container terminal ports, and Central America has 13 container terminals.
The quality of port infrastructure (QPI), shown in Table 2, measures business executives' perception of their country's port facilities on the WEF scale (1 = extremely underdeveloped to 7 = well developed and efficient by international standards). Panama tops the region's ranking at 5.7, consistent with the overall port performance indicated by the SFA results. Brazil had the lowest overall rank at 3.2; however, Brazil comprises 175 ports; therefore, only top-performing ports were considered, because each port will have a different TE performance and QPI.
Table 2 Quality of Port Infrastructure among the 19 top Port countries in Latin America and the Caribbean.
Table 3 shows the port infrastructure and the average annual throughput of each port. The data period spans the years from 2010 to 2018. The table displays key port infrastructural indicators such as berth length, port area, number of mobile and quay cranes, ship-to-shore (STS) gantry cranes, number of berths, draft, transshipment, and the annual container throughput in TEUs.
Table 3 Key port infrastructural indicators for the 19 LAC ports.
Table 4 shows that all ports recorded significant growth for the 2010 to 2018 period, except for regional ports in Central America and the Caribbean: Balboa, Panama (− 9%); Kingston, Jamaica (− 3%); San Juan, Puerto Rico (− 8%); and Freeport, Bahamas (− 7%).
Table 4 Latin America and the Caribbean top 19 ports (TEU).
Influential factors that affect port efficiency (PE) within the LAC
The maritime industry is dynamic and responsive to global economic changes; therefore, several factors have influenced port development and efficiency during the pre and post PCE era. Factors include trade policy, port liner shipping connectivity, and the world seaborne trade growth.
Trade policy influence on trade
Trade plays an integral role in ending global poverty because it has a positive and statistically significant impact on economic growth (World Bank 2020). Open trade and investment with the rest of the world are essential to sustainable economic growth, which is mainly determined by the type of trade policies in place (IMF 2001). Trade policy allows bilateral trade among countries to improve exports and imports. Several studies support that port efficiency (PE) positively impacts trade (Tongzon 1995; Shetty and Dwarakish 2018). Moreover, a port's ability to handle export and import volumes efficiently indicates a level of port performance. Naanwaab and Diarrassouba (2013) studied the influence of economic freedom on intra-African bilateral trade. Their findings revealed that trade agreements (trade policy) positively impact bilateral trade among African countries. Further results indicated that port inefficiencies in Africa had hindered trade growth. On the contrary, not all trade policies are beneficial. According to Tran (2019), trade freedom (TRFR) inhibits trade and economic development among some ASEAN countries (Fig. 3).
Container throughput for 2018 for the nineteen (19) LAC ports.
The Trade Freedom (TRFR) index for Latin America and the Caribbean (LAC), as shown in Fig. 4, declined from 74.8 in 2007 to 74.6 in 2014, then rebounded to 74.7 in 2018. Overall, this shows improvements in Trade Freedom (TRFR) within the region.
LAC region Trade Freedom (TRFR) from 2007 to 2018.
Port liner shipping connectivity index (PLSCI) in LAC
The port liner shipping connectivity index (PLSCI) assesses how well a country links to the global shipping networks (UNCTAD 2021a, b). The LSCI is measured by five (5) components of the maritime transport sector: number of ships, container-carrying capacity, maximum vessel size, number of services, and companies that deploy container ships in a country's ports (World Economic Forum 2018). Port infrastructure and the PLSCI strongly affect freight rates in the LAC region (Wilmsmeier and Monios 2016). Port liner connectivity is an essential factor influencing trade activity in the maritime industry for regional ports within LAC and the US East and Gulf coasts. Therefore, the PLSCI also indicates the level of efficiency of a port.
In recent times within the LAC region, the global recession has driven significant consolidation of shipping lines, whereby shipping lines were forced to reduce costs and optimize ship deployment and services to their customers. Overall, this has led to a higher concentration of container handling among regional ports (Caribbean Development Bank (CDB) 2017). For example, the G6 Alliance was established during that period, consisting of Hapag-Lloyd, NYK Line, OOCL, Hyundai Merchant Marine, APL, and Mitsui O.S.K. Lines, and Maersk and MSC formed the 2M alliance. Several other international and regional alliances not listed here have also influenced port efficiency (PE) within the region (Rodrigue and Notteboom 2021; CDB 2017; UNCTAD 2021a, b).
The average Port Liner Shipping Connectivity Index (PLSCI) for the three (3) sub-regions showed consistent growth in South America, Central America, and the Caribbean. As shown in Fig. 5, the PLSCI score for South America (SA) increased from 8.50 (2010) to 12.40 (2019), the Central America (CA) score increased from 8.63 (2010) to 13.82 (2019), and the Caribbean score increased from 8.63 (2010) to 12.41 (2019). Also, for transshipment ports, the PLSCI is significantly higher than that of the overall regional ports; the PLSCI for transshipment ports increased from 20.6 (2010) to 30.1 (2019).
Source: UNCTAD (2020)
Port Liner Shipping Connectivity Index (PLSCI). Index (maximum Q1 2006 = 100).
World seaborne trade influence on economic growth and port development
Ports are the gateway of trade; therefore, as trade increases, so will economic growth. According to UNCTAD (2021a, b), around 80% of trade in goods by volume is carried by sea, and the percentage is higher for developing countries. Several authors link port development, trade, and maritime transport to economic growth (Munim and Schramm 2018; Talley 2017; Gani 2017). Therefore, investments in port development, in tandem with logistics infrastructure, will improve PE and positively influence economic growth (Munim and Schramm 2018). Poor port and logistics infrastructure among developing countries increases the costs and time required for trade (Töngür et al. 2020; Gani 2017). For example, small Caribbean states face high transportation costs because inadequate port infrastructure has hindered the efficient movement of goods (Munim and Schramm 2018). Trade has a direct impact on GDP growth; therefore, as PE improves through infrastructural development, trade volume will be enhanced. Figure 6 shows that LAC's GDP growth and global seaborne trade (% tonnage) are highly correlated.
LAC's GDP growth (%) and global seaborne trade (%).
Different approaches to technical efficiency frontiers
Data envelopment analysis (DEA) and stochastic frontier analysis (SFA)
The assessment of multiport performance for TE was conducted using frontier models. TE is usually calculated using two approaches, DEA and SFA, both of which rely on the estimation of an efficiency frontier. The frontier is most frequently used to determine the best performance within a data sample (Serebrisky et al. 2016), while DEA is commonly used for multiport TE assessment. According to Talley (2017), DEA is a mathematical programming technique used to derive and estimate TE ratings for a group of ports relative to each other. However, the main drawback of this approach is that it does not account for sample measurement errors and random variation (Serebrisky et al. 2016).
SFA refers to a body of statistical techniques used to evaluate a port's inefficiency by estimating performance and productivity (Encyclopedia 2021; Aigner et al. 1977). SFA relies on the parametric estimation of the production function with a stochastic component (Kuosmanen and Kortelainen 2010). The error term of SFA comprises two random components that capture statistical noise and technical inefficiency. Table 5 shows the main characteristics of DEA and SFA.
Table 5 Characteristics of DEA and SFA.
Cullinane et al. (2006) used both DEA and SFA to analyze the performance of the world's largest container ports and compared the findings. The results revealed a higher level of TE for private-sector-owned and transshipment ports than for gateway ports. Similarly, Notteboom et al. (2000) presented an approach for assessing container terminal efficiency using Bayesian stochastic frontier modeling. The model was tested on a sample of thirty-six (36) European container terminals and four (4) Asian container ports; the results revealed that feeder ports were less efficient than terminals located in hub ports. Finally, Yang et al. (2011) used SFA together with other techniques, such as the Delphi method, to evaluate the efficiency of seaport operations. The study highlights areas of seaport operations that need to be resolved and shows which characteristics need rectification.
The SFA model has been applied and supported by several authors to calculate the TE of port performance, and both DEA and SFA have been used in various articles to analyze TE. The two approaches have different strengths and weaknesses: DEA is sensitive to measurement errors or noise in the data because of its deterministic approach, whereas SFA accounts for stochastic noise in the data and allows statistical testing of hypotheses concerning production structure and the degree of inefficiency, as shown in Table 5. The SFA model is used here to assess the port performance of the nineteen (19) top-performing ports within the LAC region. The results are needed to determine the impact of the PCE on top regional ports and their TE since the expansion.
The characteristics of ports within the LAC region vary in infrastructure and value-added services. The accommodation of neo-Panamax vessels is the main agenda for port development, through the acquisition of neo-Panamax-compatible equipment such as cranes, hinterland expansion, and the deepening of water channels. The PCE has fueled port projects among the region's major ports that seek to capitalize on container throughput and value-added services, logistics hubs, and ship repairs. Table 4 shows the characteristics and profiles of the 19 top regional ports, which account for 85% of regional container throughput.
SFA is a method used to calculate a port or firm's TE. It is based on a composed-error model for the production function \({y}_{i} = g\left({x}_{i} ,\beta \right)+ {\varepsilon }_{i}\), \(i = 1, 2, 3, \dots ,N\), where \({y}_{i}\) is the output for observation i, \({x}_{i}\) is a vector of inputs for observation i, β is the vector of parameters, and \({\varepsilon }_{i}\) is the error term for observation i. The model postulates that the error term \({\varepsilon }_{i}\) is made up of two independent components, \({\varepsilon }_{i} = {v}_{i} - {u}_{i}\), where \({v}_{i}\) is a two-sided error term representing statistical noise and \({u}_{i} \ge 0\) is a one-sided error term representing technical inefficiency. The exponential form of the proposed model gives the production function in Eq. (1) as,
$${y}_{it} =\mathrm{exp}\left({x}_{it} \beta + {v}_{it} - {u}_{it} \right)$$
where \(y_{it}\) is the production at the tth observation (t = 1, 2, …, T) for the ith firm (i = 1, 2, …, N); \(x_{it}\) is the logarithm of the input variables; and \(v_{it}\) is a random error assumed to be distributed \(N(0, \sigma_v^2)\) and independent of the non-negative random variable \(u_{it}\). A truncated normal distribution, justified using a Wald or generalized likelihood-ratio test, is specified in this research for the technical inefficiency effects.
Regression of effects of inefficiency on the variables that explain inefficiency is given by Eq. (2) as,
$${u}_{it} = {z}_{it} \delta + {W}_{it}$$
\(Z_{it}\) is a vector of explanatory variables; δ is a vector of unknown scalar parameters; \(W_{it}\) is the truncation of the normal distribution \(N\left(0,{\sigma }_{u}^{2}\right)\), such that the point of truncation is \({-z}_{it} \delta\). The likelihood function is expressed in terms of the variance parameters \({\sigma }_{s}^{2}= {\sigma }_{v}^{2}+{\sigma }_{u}^{2}\) and \(\gamma = {\sigma }_{u}^{2}/{\sigma }_{s}^{2}\). Technical efficiency can therefore be defined in terms of the ratio between observed output and potential output, given input \(x_{it}\), as
$$TE_{it} = y_{it}/\exp\left(x_{it}\beta + v_{it}\right) = \exp\left(-z_{it}\delta - w_{it}\right)$$
Stochastic frontier analysis (SFA) for LAC
To assess the PE of the 19 LAC ports, the SFA methodology uses the production function specification (Cobb–Douglas form) shown in Eq. (4) below, with time-invariant TE specified as follows:
$$ln ({Y}_{it }) = {\beta }_{0} +{ \beta }_{1} ln({A}_{it}) +{\beta }_{2} ln({B}_{it}) +{\beta }_{3} ln({C}_{it}) + {\beta }_{4} ln ({Q}_{it}) + {V}_{it} - {u}_{it}$$
These variables are defined as follows:
$$\forall \; i = 1,\dots,N \;\; \text{and} \;\; t = 1,\dots,T$$
where Yit is the container throughput in TEUs handled by port i in period t; Ait is the total area (in square meters) of the container terminals in port i in period t; Bit is the total length (in meters) of the berths used for container handling in port i in period t; Cit is the number of container cranes (mobile cranes + STS gantry cranes) in port i in period t; and Qit is the number of berths in port i in period t.
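For readers interested in the mechanics, the following is a minimal sketch of how a Cobb–Douglas frontier of this form can be fitted and how per-port TE scores can be recovered. It assumes a simple pooled half-normal specification rather than the truncated-normal panel model estimated in this study, and the file name and column names (teu, area_m2, berth_length_m, cranes, berths, port) are hypothetical placeholders.

```python
import numpy as np
import pandas as pd
from scipy import stats
from scipy.optimize import minimize

def neg_loglik(theta, y, X):
    """Negative log-likelihood of a half-normal stochastic production
    frontier (Aigner, Lovell and Schmidt 1977)."""
    k = X.shape[1]
    beta = theta[:k]
    sigma_v, sigma_u = np.exp(theta[k]), np.exp(theta[k + 1])  # keep > 0
    sigma = np.sqrt(sigma_v**2 + sigma_u**2)
    lam = sigma_u / sigma_v
    eps = y - X @ beta                        # composed error: v - u
    ll = (np.log(2.0) - np.log(sigma)
          + stats.norm.logpdf(eps / sigma)
          + stats.norm.logcdf(-lam * eps / sigma))
    return -ll.sum()

def technical_efficiency(theta, y, X):
    """JLMS point estimate TE_i = exp(-E[u_i | eps_i])."""
    k = X.shape[1]
    beta = theta[:k]
    sigma_v, sigma_u = np.exp(theta[k]), np.exp(theta[k + 1])
    sigma2 = sigma_v**2 + sigma_u**2
    eps = y - X @ beta
    mu_star = -eps * sigma_u**2 / sigma2
    sigma_star = np.sqrt(sigma_u**2 * sigma_v**2 / sigma2)
    u_hat = mu_star + sigma_star * (stats.norm.pdf(mu_star / sigma_star)
                                    / stats.norm.cdf(mu_star / sigma_star))
    return np.exp(-u_hat)

# Hypothetical panel: one row per port-year with throughput and the 4 inputs.
df = pd.read_csv("lac_ports_2010_2018.csv")           # assumed file layout
y = np.log(df["teu"]).to_numpy()
X = np.column_stack([np.ones(len(df)),
                     np.log(df["area_m2"]), np.log(df["berth_length_m"]),
                     np.log(df["cranes"]), np.log(df["berths"])])
theta0 = np.concatenate([np.linalg.lstsq(X, y, rcond=None)[0], [0.0, 0.0]])
res = minimize(neg_loglik, theta0, args=(y, X), method="BFGS")
df["TE"] = technical_efficiency(res.x, y, X)
print(df.groupby("port")["TE"].mean().sort_values(ascending=False))
```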
The data were gathered from the nineteen (19) top container ports in the LAC region: nine (9) ports in South America, six (6) in Central America, and four (4) in the Caribbean, as shown in Table 3. The database was primarily populated with information published by CEPAL (2019), World Port Source (2021), and the World Bank (2020). The Economic Commission for Latin America and the Caribbean (ECLAC) database gives the port activity report of container throughput for 31 countries and 118 ports and port zones. The World Port Source (WPS) gives the profile of each regional port, and the World Bank provides data on container throughput and port infrastructure rankings, as shown in Fig. 3 and Table 2.
The TE results were derived from the SFA model for the period 2010 to 2018; as shown in Table 6, the TE of ports in LAC ranged from 43.3 to 100%. The Ports of Colon (Panama), Balboa (Panama), El Callo (Peru), Guayaquil (Ecuador) and San Juan (Puerto Rico) were at 100%. South American ports comprised Santos (72%), Cartagena (87.5%), El Callo (100%), Guayaquil (100%), Buenos Aires (54.5%), San Antonio (43.3%), Itajai (84.1%) and Valparaiso (58.5%). For Central America, the TE results were Port of Colon (100%), Balboa (100%), Manzanillo (85.4%), Limon Moin (74.2%), and Altamira (55.1%). For Caribbean ports, the TE results were Kingston (60%), San Juan (100%), and Caucedo (66.7%).
Table 6 Technical efficiency results for the 19 LAC ports. Period 2010–2018.
Table 7 shows the stochastic frontier analysis results for the 19 LAC ports. The input variables berth length (Bit), port area (Ait), cranes (Cit), and number of berths (Qit) were all statistically significant at the 1% level, with coefficients of −0.0622, 0.0621, 0.2719, and 0.0148, respectively; the log-likelihood was 3.8095.
Table 7 Estimation of stochastic production frontier.
The SFA model also revealed that the most technically efficient container terminals are Colon, Balboa, El Callo, Guayaquil, and San Juan; these ports have a TE of 100%. The Port of San Antonio recorded the lowest TE at 43.3%. The TE results for transshipment ports within the region were Colon (100%), Santos (72%), Balboa (100%), Cartagena (87.5%), Freeport (98.5%), Caucedo (66.6%) and Kingston (60%).
Pre and post PCE era
Figure 7 compares the pre- and post-PCE eras, 2014 to 2016 (before) and 2017 to 2018 (after). The results show that El Callo, Guayaquil, and San Juan maintained 100% TE. Ports with improved TE percentages were Manzanillo (95 to 100), San Antonio (46 to 48), Buenos Aires (42 to 47), Buenaventura (39 to 60), Caucedo (66 to 71) and Freeport (74 to 75). Ports with declining TE were Colon (98 to 97), Santos (91 to 41), Balboa (100 to 78), Kingston (64 to 53), Itajai (100 to 65), Valparaiso (62 to 51) and Altamira (87 to 77).
Source: Own elaboration
Technical efficiency (period 2014–2016, 2017–2018).
The regional assessment shown in Table 8 reveals that the average TE for South American ports increased from 72 to 75% between the pre- and post-PCE eras. Conversely, Central American and Caribbean ports experienced a reduction in TE: Central America dropped from 92 to 85%, while Caribbean ports experienced a 1% reduction in TE.
Table 8 Mean technical efficiency per region, pre and post expansion era.
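As an illustration of how averages of this kind can be reproduced, the short sketch below aggregates yearly TE scores into pre- and post-PCE regional means; the input file and column names ("port", "region", "year", "TE") are assumed placeholders, not the study's actual data layout.

```python
import pandas as pd

te = pd.read_csv("lac_te_by_port_year.csv")             # assumed layout
te["era"] = pd.cut(te["year"], bins=[2013, 2016, 2018],
                   labels=["pre-PCE (2014-2016)", "post-PCE (2017-2018)"])
table8 = (te.dropna(subset=["era"])                      # keep 2014-2018 only
            .groupby(["region", "era"], observed=True)["TE"]
            .mean()
            .unstack("era")
            .round(2))
print(table8)    # mean TE per region, before vs. after the expansion
```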
Figure 8 shows the median time spent in port and the number of vessel arrivals. Colombia, Panama, and the Dominican Republic (DR) recorded the lowest times for ships at port: 0.6, 0.66, and 0.67, respectively. Conversely, Argentina, Ecuador, and Costa Rica recorded the longest times, at 1.46, 1.14, and 1.06, respectively. Among transshipment-port countries, Colombia's median time was the lowest at 0.6, followed by Panama at 0.66.
Median time at port and vessel arrivals for 13 LAC countries.
The SFA model results showed that the four (4) input variables used in the model, berth length (Bit), port area (Ait), cranes (Cit), and number of berths (Qit), were all statistically significant, as shown in Table 7. All four (4) variables saw significant increases from 2000 to 2010, as shown in Table 11. However, Table 9 shows that from 2010 to 2016 only the crane variable (Cit) registered considerable increases among the 19 major regional ports. During the period 2016 to 2018, as shown in Table 10, none of the variables changed except for the deepening of the harbor for the Port of Kingston; this variable was excluded from the model.
The key factors in improving port productivity and efficiency are improvements in port and logistics infrastructure, reductions in shipping and handling costs, and lower DT, which will eventually improve a port's trade volumes and competitiveness (Clark et al. 2004; Töngür et al. 2020; Gani 2017; Merk and Dang 2012; Blonigen and Wilson 2007; Figueiredo de Oliveira and Cariou 2015). Interestingly, the SFA results showed that the coefficient for cranes was 0.2719, the largest impact among the variables (Hassan et al. 2017; Suárez-Alemán et al. 2016; Talley 2017; Serebrisky et al. 2016). The PCE spurred the LAC region towards port investment (Suárez-Alemán et al. 2016; Notteboom et al. 2021). These investments were largely focused on the acquisition of STS gantry and neo-Panamax cranes, port hinterland expansion, and, for some ports, the deepening of waterways or harbors to accommodate ships with a draft of 15 m or more (Mudronja et al. 2020; Munim and Schramm 2018; Rodrigue and Notteboom 2021; Serebrisky et al. 2016). Therefore, the four (4) input variables (Bit, Ait, Cit, and Qit) were important components for improving PE and regional competitiveness (Suárez-Alemán et al. 2016; Talley 2017; Rodrigue and Notteboom 2021; Töngür et al. 2020; Gani 2017; Serebrisky et al. 2016).
Table 6 shows that PE results for ports within LAC vary depending on two factors: (1) a port's ability to handle larger container vessels, and (2) the surge in container throughput (TEUs) due to increases in transshipment activities (Talley 2017; Mudronja et al. 2020; Suárez-Alemán et al. 2016; Clark et al. 2004). On the other hand, some ports, especially traditional transshipment ports such as Kingston (Jamaica) and Freeport (the Bahamas), encountered a decline in TEUs due to port proximity and inefficiencies (Figueiredo de Oliveira and Cariou 2015; Pérez et al. 2016; Clark et al. 2004).
The PCE may play a major role in maritime activities among regional ports; however, it is not the only factor influencing PE improvement (Merk and Dang 2012; Suárez-Alemán et al. 2016; Talley 2017; World Bank 2020; UNCTAD 2021a, b; CDB 2017). Several other factors, such as port privatization, trade policy, global economic growth, port liner connectivity, infrastructure, and the culture of corruption, can affect port productivity and efficiency (Tongzon and Heng 2005; Tongzon 1995; Serebrisky et al. 2016; Shetty and Dwarakish 2018; Park et al. 2020; Çelebi 2017; World Bank 2020; UNCTAD 2021a, b). Moreover, port efficiency (PE) in relation to throughput also depends on port location, frequency of ship calls, port charges, economic activity, and terminal efficiency (Tongzon 1995; Talley 2017; Figueiredo de Oliveira and Cariou 2015; Suárez-Alemán et al. 2016; Jung 2011). For example, the efficiency of a container port depends on crane efficiency, economies of scale (vessel size and cargo exchange), work practices, and container mix (Tongzon 1995; Shetty and Dwarakish 2018). These PPIs are frequently used to determine PE. Nevertheless, exogenous factors such as governmental trade policies, liner connectivity, economic growth, trade, intermodal connectivity, and logistics infrastructure have impacted regional port performance (Merk and Dang 2012; Serebrisky et al. 2016; Shetty and Dwarakish 2018; Park et al. 2020).
The TE results shown in Table 6 reveal that ports within the region experienced different levels of growth in TEUs depending on their scale of operation (Hassan et al. 2017; Herrera and Pang 2008). PPI improvements enhance the quality of service to port users, reduce technical and cost inefficiencies, and increase the port's competitiveness (Talley 2017; Melalla et al. 2016; Shetty and Dwarakish 2018; Hassan et al. 2017). As shown in Tables 2 and 3, Panama has the highest port infrastructure index and container throughput within the region, and the SFA results revealed that both the Ports of Colon and Balboa had TE values of 100% from 2010 to 2018. Argentina's quality of port infrastructure (QPI) rank was 3.7, and the TE of the Port of Buenos Aires was 54%.
A shorter time at port is a positive indicator of a port's efficiency and trade competitiveness (UNCTAD 2019; Aminatou et al. 2018; Hassan et al. 2017; PortStrategy 2020). Therefore, reducing vessel time at port will accommodate more vessel calls (Tongzon and Heng 2005; Talley 2017; Notteboom et al. 2021). Figure 8 shows the median time vessels spent at regional ports. Panamanian and Colombian ports displayed the shortest times, at 0.67 and 0.6, respectively. These values correlate with their high TE levels of 100 and 87.5%, respectively. The results clearly reveal that ports with the shortest median (dwell) times generally display larger TE values.
Ports are the gateway to trade and economic growth (Talley 2017; Notteboom et al. 2021). Therefore, improving PE is a necessary component for enhancing a port's productivity and competitiveness in developing countries (Tongzon and Heng 2005; Talley 2017; Serebrisky et al. 2016; Shetty and Dwarakish 2018; Park et al. 2020). The results revealed that, for the LAC region, four (4) PPIs were significant for PE during the pre- and post-PCE eras. However, factors such as trade and port policy, liner shipping connectivity, and the utilization of technological innovation can be essential tools to alleviate port congestion and improve dwell time (PortStrategy 2020).
The sample for this research was taken from ECLAC (CEPAL), World Port Source, and port websites. The sample comprises the 19 top regional ports, including transshipment hubs, which account for over 80% of container throughput. The limited sample size results from the exclusion of smaller ports that provided limited data on berth length, port area, number of cranes, and number of berths. In addition, most small ports cannot accommodate neo-Panamax and post-Panamax vessels; therefore, their throughput volumes are lower than those of large ports. Thus, generalization of the findings is constrained to major ports.
In order to assess port efficiency (PE) in Latin America and the Caribbean (LAC), Stochastic Frontier Analysis (SFA) was used to determine the technical efficiency (TE) for 19 ports from 2010 to 2018. Container throughput (TEUs) was used as the output variable, whereas berth length (Bit), port area (Ait), cranes (Cit), and number of berths (Qit) were input variables.
The estimation from the SFA indicates that productivity from cranes, ship-to-shore (STS) gantries, and berth length had the largest impact and is positively significant. Findings also revealed that LAC countries with low QPI rankings displayed low TE. The pre- and post-PCE comparison highlighted that 'timely' investment in port development and infrastructural improvements increases productivity and efficiency, which is partly influenced by the privatization of ports (Tongzon and Heng 2005; Nogué-Algueró 2019; Notteboom et al. 2021; Munim and Schramm 2018; Rodrigue 2020; Talley 2006; Talley 2017; Serebrisky et al. 2016). For instance, Caucedo (Dominican Republic) and Buenaventura (Colombia) are good examples of these findings; they showed significant improvements in TE because of the regional port administrations' initiatives towards the improvement and development of ports before the completion of the PCE. Furthermore, most of the top and emerging regional ports executed a long-term strategy of improving port competitiveness through port privatization and policies (Rodrigue and Notteboom 2021; Tongzon and Heng 2005; Merk and Dang 2012; Serebrisky et al. 2016). For instance, CMA-CGM signed a $509 million, 30-year concession with the Port Authority of Jamaica in 2015; likewise, APM Terminals signed a $992 million, 33-year concession with the government of Costa Rica in 2011.
Improvements in PE for the LAC region were not solely influenced by the PCE. Other factors such as trade agreements among Latin American countries were implemented during the pre and post PCE era, for example, the Central America Free Trade Agreement (CAFTA) and Free Trade Area of the Americas (FTAA) (CEPAL 2019). Liner consolidation, port privatization, and the growth of the seaborne trade have certainly had a positive impact on regional port development (Nogué-Algueró 2019; Notteboom et al. 2021; Munim and Schramm 2018; Rodrigue 2020; Talley 2006; Talley 2017; Shetty and Dwarakish 2018).
The results, as shown in Table 8, also revealed that South America's TE has improved since the PCE, while Central America and the Caribbean have experienced a reduction in TE influenced by regional port competition (Bhadury 2016; Park et al. 2020). This reduction could be the result of both port inefficiency and proximity. Take, for instance, Freeport (Bahamas): one of the region's significant transshipment hubs, it has experienced TEU losses due to its proximity to US East Coast ports such as Miami, Everglades, and Charleston (Notteboom et al. 2000; Merk and Dang 2012; Bhadury 2016; Park et al. 2020).
Assessing TE using PPIs can guide ports seeking to improve productivity, reduce costs, and enhance competitiveness (Blonigen and Wilson 2007; Serebrisky et al. 2016). PPIs such as berth length (Bit), terminal area (Ait), (STS gantry and mobile) cranes (Cit), and the number of berths (Qit) are crucial areas on which investors should focus to improve productivity. However, other variables such as corruption, type of ownership, value-added services, port proximity, and income classification could further validate the TE results; these variables were not considered within this research and may be examined in future work. Overall, the SFA model can be an effective tool for assessing port productivity within the LAC region.
CEPAL (2019). Port Activity report of Latin America and Caribbean. https://www.cepal.org/en/notes/port-activity-report-latin-america-and-caribbean-2018; United Nations Conference on Trade and Development. (UNCTAD). https://unctad.org/en/Pages/statistics.aspx; Clarkson Research data 2020. https://www.clarksons.net/portal; Container Port traffic (TEU: 20 Foot equivalent units). https://data.worldbank.org/indicator/IS.SHP.GOOD.TU; Latin America and Caribbean Ports. http://perfil.cepal.org/l/en/portmovements_classic.html.
Aigner D, Lovell C, Schmidt P (1977) Formulation and estimation of stochastic frontier production function models. J Econom 6(1):21–37. https://doi.org/10.1016/0304-4076(77)90052-5
Aminatou M, Jiaqi Y, Okyere S (2018) Evaluating the impact of long cargo dwell time on port performance: an evaluation model of Douala International Terminal in Cameroon. Arch Transp 46(2):7–20. https://doi.org/10.5604/01.3001.0012.2098
Bhadury J (2016) Panama Canal expansion and its impact on East and Gulf Coast ports of USA. Marit Policy Manag 43(8):928–944. https://doi.org/10.1080/03088839.2016.1213439
Blonigen BA, Wilson WW (2007) Port efficiency and trade flows*. Rev Int Econ 16(1):21–36. https://doi.org/10.1111/j.1467-9396.2007.00723.x
Caribbean Development Bank (CDB) (2017) Transforming the caribbean port services industry: towards the efficiency frontier [E-book]. Caribbean Development Bank, Bridgetown
Çelebi D (2017) The role of logistics performance in promoting trade. Marit Econ Logist 21(3):307–323. https://doi.org/10.1057/s41278-017-0094-4
CEPAL (2019) Port activity report of Latin America and the Caribbean 2018 | Briefing note | Economic Commission for Latin America and the Caribbean. Economic Commission for Latin America and the Caribbean. https://www.cepal.org/en/notes/port-activity-report-latin-america-and-caribbean-2018
CEPAL (2020) Economic Commission for Latin America and the Caribbean. Ports. http://perfil.cepal.org/l/en/portmovements_classic.html
Çetin SB, Balcı G, Esmer S (2017) Effects of prolonged port privatization process: a case study of port of İZMİR alsancak. Dokuz Eylül Üniversitesi Denizcilik Fakültesi Dergisi 1309–4246:114–134. https://doi.org/10.18613/deudfd.351630
Clark X, Dollar D, Micco A (2004) Port efficiency, maritime transport costs, and bilateral trade. NBER Working Papers, 417–450. https://doi.org/10.3386/w10353
Cullinane K, Wang TF, Song DW, Ji P (2006) The technical efficiency of container ports: comparing data envelopment analysis and stochastic frontier analysis. Transp Res Part A Policy Pract 40(4):354–374. https://doi.org/10.1016/j.tra.2005.07.003
Da Silva F, Rocha C (2012) A demand impact study of southern and southeastern ports in Brazil: an indication of port competition. Marit Econ Logist 14:204–219. https://doi.org/10.1057/mel.2012.4
De Langen PW, Sharypova K (2013) Intermodal connectivity as a port performance indicator. Res Transp Bus Manag 8:97–102. https://doi.org/10.1016/j.rtbm.2013.06.003
Encyclopedia (2021) Stochastic frontier analysis | Encyclopedia.com. Encyclopedia.Com. https://www.encyclopedia.com/social-sciences/applied-and-social-sciences-magazines/stochastic-frontier-analysis
Figueiredo De Oliveira G, Cariou P (2015) The impact of competition on container port (in)efficiency. Transp Res Part A Policy Pract 78:124–133. https://doi.org/10.1016/j.tra.2015.04.034
Gani A (2017) The logistics performance effect in international trade. Asian J Shipp Logist 33(4):279–288. https://doi.org/10.1016/j.ajsl.2017.12.012
Hassan R, Gurning R, Handani D (2017) Analysis of the container dwell time at container terminal by using simulation modelling. Int J Mar Eng Innov Res 1(3):2320. https://doi.org/10.12962/j25481479.v1i3.2320
Herrera S, Pang G (2008) Efficiency of infrastructure: the case of container ports. Economia 9:165–194. https://core.ac.uk/download/pdf/6535717.pdf
IMF (2001) Global trade liberalization and the developing countries—an IMF issues brief. International Monetary Fund. https://www.imf.org/external/np/exr/ib/2001/110801.htm
Jung BM (2011) Economic contribution of ports to the local economies in Korea. Asian J Shipp Logist 27(1):1–30. https://doi.org/10.1016/s2092-5212(11)80001-5
Kuosmanen T, Kortelainen M (2010) Stochastic non-smooth envelopment of data: semi-parametric frontier estimation subject to shape constraints. J Prod Anal 38(1):11–28. https://doi.org/10.1007/s11123-010-0201-3
Langen P, Nijdam M, Van der Horst M (2007) New indicators to measure port performance. J Marit Res JMR 4(1):23–36. 4. ISSN 1697-4840. https://www.researchgate.net/publication/28199982_New_indicators_to_measure_port_performance
Lonza L, Marolda MC (2016) Ports as drivers of urban and regional growth. Trans Res Proced 14:2507–2516. https://doi.org/10.1016/j.trpro.2016.05.327
Melalla O, Vyshka E, Lumi D (2016) Defining the most important port performance indicators: a case of albanian ports. Int J Econ , Commer Manag U K 3(10):808–819. http://ijecm.co.uk/wp-content/uploads/2015/10/31049.pdf
Merk O, Dang T (2012) The efficiency of World Ports in container and Bulk Cargo (oil, coal, ores and grain). OECD Regional Development Working Papers, 9–22. https://doi.org/10.1787/5k92vgw39zs2-en
Mudronja G, Jugović A, Škalamera-Alilović D (2020) Seaports and economic growth: panel data analysis of EU Port Regions. J Mar Sci Eng 8(12):1017. https://doi.org/10.3390/jmse8121017
Munim ZH, Schramm HJ (2018) The impacts of port infrastructure and logistics performance on economic growth: the mediating role of seaborne trade. J Ship Trade 3(1). https://doi.org/10.1186/s41072-018-0027-0
Naanwaab C, Diarrassouba M (2013) The impact of economic freedom on bilateral trade: a cross-country analysis. Int J Business Manag Econ Res 4(1):668–672
Nogué-Algueró B (2019) Growth in the docks: ports, metabolic flows and socio-environmental impacts. Sustain Sci 15(1):11–30. https://doi.org/10.1007/s11625-019-00764-y
Notteboom T, Coeck C, Van Den Broeck J (2000) Measuring and explaining the relative efficiency of container terminals by means of Bayesian stochastic frontier models. Int J Marit Econ 2(2):83–106. https://doi.org/10.1057/ijme.2000.9
Notteboom T, Pallis A, Rodrigue JP (2021, March 28) Port economics, management, and policy. Port economics, management, and policy | A comprehensive analysis of the port industry. https://porteconomicsmanagement.org
Pallis A, Rodrigue JP (2021) Chapter 6.2—Port efficiency | Port economics, management and policy. Port economics, management and policy | A comprehensive analysis of the port industry. https://porteconomicsmanagement.org/pemp/contents/part6/port-efficiency/
Park C, Richardson HW, Park J (2020) Widening the Panama Canal and US ports: historical and economic impact analyses. Marit Policy Manag 47(3):419–433. https://doi.org/10.1080/03088839.2020.1721583
Pérez I, Trujillo L, González MM (2016) Efficiency determinants of container terminals in Latin American and the Caribbean. Util Policy 41:1–14. https://doi.org/10.1016/j.jup.2015.12.001
PortStrategy (2020) Container dwell time prediction solution. https://www.portstrategy.com/news101/port-operations/cargo-handling/1252746.article
Rodrigue J (2020) Geography of transport systems, 5th edn. The Geography of Transport Systems. https://transportgeography.org/geography-of-transport-systems-5th-edition/
Rodrigue JP, Notteboom T (2021, January 7) Chapter 7.2—Ports and economic development | Port economics, management and policy. Port economics, management and policy | A comprehensive analysis of the port industry. https://porteconomicsmanagement.org/pemp/contents/part7/port-and-economic-development/#:%7E:text=Ports%20are%20catalysts%20for%20economic,direct%2C%20indirect%2C%20and%20induced
Sarriera J, Suárez-Alemán A, Serebrisky T, Trujillo L (2015) When it comes to container port efficiency, are all developing regions equal? (IDB Working Paper Series no. IDB-WP-568). https://publications.iadb.org/publications/english/document/When-It-Comes-to-Container-Port-Efficiency-Are-All-Developing-Regions-Equal.pdf
Serebrisky T, Sarriera JM, Suárez-Alemán A, Araya G, Briceño-Garmendía C, Schwartz J (2016) Exploring the drivers of port efficiency in Latin America and the Caribbean. Trans Policy 45:31–45. https://doi.org/10.1016/j.tranpol.2015.09.004
Shetty DK, Dwarakish G (2018) Measuring port performance and productivity. ISH J Hydraul Eng 26(2):221–227. https://doi.org/10.1080/09715010.2018.1473812
Suárez-Alemán A, Morales Sarriera J, Serebrisky T, Trujillo L (2016) When it comes to container port efficiency, are all developing regions equal? Transp Res Part A Policy Pract 86:56–77. https://doi.org/10.1016/j.tra.2016.01.018
Suárez-Alemán A, Serebrisky T, Ponce de León O (2017) Port reforms in Latin America and the Caribbean: where we stand, how we got here, and what is left. Marit Econ Logist 20(4):495–513. https://doi.org/10.1057/s41278-017-0086-4
Talley WK (2006) Chapter 22—Port performance: an economics perspective. Res Transp Econ 17:499–516. https://doi.org/10.1016/S0739-8859(06)17022-5
Talley WK (2017) Port economics (Routledge maritime masters), 2nd edn. [E-book]. Routledge. https://www.routledge.com/Port-Economics/Talley/p/book/9781138952195
Tongzon JL (1995) Determinants of port performance and efficiency. Transp Res Part A Policy Pract 29(3):245–252. https://doi.org/10.1016/0965-8564(94)00032-6
Tongzon J, Heng W (2005) Port privatization, efficiency, and competitiveness: some empirical evidence from container ports (terminals). Transp Res Part A Policy Pract 39(5):405–424. https://doi.org/10.1016/j.tra.2005.02.001
Töngür N, Türkcan K, Ekmen-Özçelik S (2020) Logistics performance and export variety: evidence from Turkey. Central Bank Rev 20(3):143–154. https://doi.org/10.1016/j.cbrev.2020.04.002
Tran DV (2019) A study on the impact of economic freedom on economic growth in ASEAN countries. Business Econ Horizons (BEH) 15(3):423–449. https://doi.org/10.22004/ag.econ.301155
UNCTAD (2016) UNCTAD port management series—volume 4. www.unctad.org/Trainfortrade. https://unctad.org/system/files/official-document/dtlkdb2016d1_en.pdf
UNCTAD (2019) Review of maritime transport 2019, Chapter 3. UNCTAD 2019. https://unctad.org/system/files/official-document/rmt2019ch3_en.pdf
UNCTAD (2020) Maritime transport. https://unctadstat.unctad.org/wds/ReportFolders/reportFolders.aspx
UNCTAD (2021a) Port liner shipping connectivity index, quarterly. UNCTAD STAT. https://unctadstat.unctad.org/wds/?aspxerrorpath=/wds/TableViewer/tableView.aspx
UNCTAD (2021b) Review of maritime transport 2021. https://unctad.org/webflyer/review-maritime-transport-2021
Wilmsmeier G, Monios J (2016) Container ports in Latin America: Challenges in a changing global economy. Dynam Ship Port Develop Glob Econ 11–52. https://doi.org/10.1057/9781137514233_2
World Bank (2019) Container port traffic (TEU: 20 ft equivalent units) | data. World Bank. https://data.worldbank.org/indicator/IS.SHP.GOOD.TU
World Bank (2021) Liner shipping connectivity index (maximum value in 2004 = 100)—Latin America & Caribbean | Data. Liner Shipping Connectivity Index. https://data.worldbank.org/indicator/IS.SHP.GCNW.XQ?locations=ZJ
World Economic Forum (2018) The global competitiveness index 4.0 methodology and technical notes. The Global Competitiveness Report. http://www3.weforum.org/docs/GCR2018/04Backmatter/3.%20Appendix%20C.pdf
World Port Source (2021) WPS—Home Page. http://www.worldportsource.com/
Yang H, Lin K, Kennedy OR, Ruth B (2011) Seaport operational efficiency: an evaluation of five Asian ports using stochastic frontier production function model. J Serv Sci Manag 04(03):391–399. https://doi.org/10.4236/jssm.2011.43045
The article processing charge of this work is supported by China Merchants Energy Shipping. Special thanks to the Tokyo University of Marine Science and Technology (TUMSAT) and the Japan International Cooperation Agency (JICA) for their invaluable support.
All funding for this research is sponsored by the Japan International Cooperation Agency (JICA). JICA provides an annual academic budget for research, which is managed by Professor Tetsuro Hyodo of the Department of Logistics and Information Engineering, Tokyo University of Marine Science and Technology (TUMSAT).
Department of Logistics and Information Engineering, Tokyo University of Marine Science and Technology, 2-1-6, Etchujima Koto-ku, Tokyo, 135-8533, Japan
Kahuina Miller & Tetsuro Hyodo
Kahuina Miller
Tetsuro Hyodo
Professor Tetsuro Hyodo is the advisor for this research and was instrumental in recommending the appropriate methodology for this article. Both authors read and approved the final manuscript.
Professor Tetsuro Hyodo is Head of the Department of Logistics and Information Engineering. He graduated in Civil Engineering from the Tokyo Institute of Technology in 1984, completed the master's course in Civil Engineering at the Graduate School in 1986, and completed the doctoral course (Doctor of Engineering) in 1989. In 1998 he was a Visiting Researcher at the Institute of Transportation Research, University of California, Berkeley. He is the author and co-author of several research journal articles; please see https://tumsatdb.kaiyodai.ac.jp/html/100000623_ronbn_1_en.html.
Kahuina Hassan Miller is a second-year doctoral student at the Tokyo University of Marine Science and Technology (TUMSAT), in the Course of Applied Marine Environmental Studies with a specialization in logistics and information engineering. He is a graduate of the World Maritime University (2014), where he obtained an MSc in Maritime Affairs with a specialization in Ship Management and Logistics.
Correspondence to Kahuina Miller.
The authors declare no competing interests.
Appendix A: LAC port characteristics
See Tables 9, 10 and 11.
Table 9 Port characteristics of LAC average between 2014 and 2016 (Pre-PCE-Era).
Table 10 Port characteristics. Average between 2017 and 2018 (Post-PCE-Era).
Table 11 Port characteristics. Average between 2000 and 2016.
Miller, K., Hyodo, T. Assessment of port efficiency within Latin America. J. shipp. trd. 7, 4 (2022). https://doi.org/10.1186/s41072-021-00102-5
Technical efficiency
Port performance indicator
Environmental and Resource Economics
Does Absolution Promote Sin? A Conservationist's Dilemma
Matthew Harding
David Rapson
This paper shows that households signing up for a green program exhibit an intriguing behavioral rebound effect: a promise to fully offset customers' carbon emissions resulting from electricity usage increases their energy use post-adoption by 1–3%. The response is robust across empirical specifications, and is consistent with an economic model of rational energy consumption. Our results provide a cautionary tale for designing green product strategies in which the adoption of a product may lead to unexpected consequences.
Carbon offsets · Behavioral rebound · Green marketing · Energy consumption
We are grateful to PG&E for sharing the data and for assisting with the data preparation. We thank Marcel Priebsch for excellent research assistance. We are grateful to Hunt Allcott, Antonio Bento, Wesley Hartmann, Grant Jacobsen, Matthew Kotchen, Prasad Nair, Ted O'Donoghue, Olivier Rubel, Rob Stavins and seminar participants at Cornell, Dartmouth, Harvard, Stanford, UC Berkeley and UC Santa Barbara for excellent comments. The Precourt Energy Efficiency Center at Stanford University generously funded this project.
Funding was provided by Stanford Precourt Institute for Energy (US).
A Appendix
A.1 Background on the Voluntary Carbon Offset Market
By definition one carbon offset corresponds to the removal or neutralization of one metric ton of \(\text {CO}_2\) or an equivalent amount of other gases such as methane, nitrous oxide, hydrofluorocarbons, perfluorocarbons or sulphur hexafluoride, all of which contribute to the greenhouse effect.
The basic intuition behind this market is that since carbon emissions contribute to a global stock, abatement in one part of the world is equivalent to abatement elsewhere. If marginal abatement costs differ across regions, it should be possible to offset one's emissions in an indirect, cost-effective way. Each carbon offset is generated as a result of a specific environmental project, most of which can be located at a considerable distance from the buyer of the carbon offset. The range of environmental projects which can offset carbon emissions is vast, ranging from clean energy generation such as wind power to forest conservation and livestock waste management. Companies engaging in these environmental projects can claim to have produced offsets as long as the carbon removed is in excess of what would have occurred in the absence of the offset; for example, one cannot label a project as generating carbon offsets if it would have happened without the funding generated by carbon offsets. The most popular types of projects involve either agricultural land use or forestry, and many are located in developing countries.
In order to guarantee the validity of the carbon offsetting claims, common practice in the industry is to gain third-party certification for projects. In some cases buyers transact directly with the offset generator, while in other cases sophisticated markets have developed which allow carbon offsets to be traded. At present, numerous companies are involved in marketing and trading carbon offsets, and numerous concerns about permit legitimacy underscore the importance of the verification process. It is often very difficult to accurately verify an alleged certification, since offsets are often purchased for future projects and the baseline level of emissions is often debatable. To address these concerns, PG&E engaged a reputable third-party certification organization, the Climate Action Reserve, to verify the emissions reductions associated with its offsets. PG&E circulates a broad solicitation to purchase offsets, and each ton of greenhouse gas abated must then be certified by the Climate Action Reserve. Nonetheless, the implications of the findings in this study should be considered in the context of the broader market.
While a majority of the carbon market consists of regulated markets such as the EU Emission Trading Scheme, which covers the emissions of several thousand energy-intensive European companies, a relatively small fraction of the market consists of voluntary carbon offsets. In 2011 the voluntary carbon market had a total value of $576 million, down from a peak of $728 million in 2008 (Bloomberg 2012). In spite of the decline in this market resulting from the recent financial crisis, the voluntary offset market is particularly popular in North America, where offsets are purchased through numerous over-the-counter contracts. Since American buyers appear to prefer more local projects, the majority of carbon offsetting projects serving the voluntary market are now to be found within the US.
Individual consumers are currently offered a number of ways in which they can offset their carbon footprint. Airline passengers are routinely asked if they wish to offset their carbon footprint resulting from air travel (sometimes at considerable cost). Voluntary carbon offsetting programs have also recently been offered to residential consumers in order to offset the carbon footprint resulting from everyday energy use at home.
A.2 Seasonality Corrections
A.2.1 Interactive Effects
One concern is that the seasonal controls (aggregate month-by-year effects) may be insufficiently capturing important sources of heterogeneity in time-varying unobservables. For example, if young households are more vulnerable to economic shocks or adverse weather conditions, electricity usage patterns within this cohort may exhibit higher volatility which is not fully captured by the mean time effects we control for. This would represent an unobserved source of heterogeneity which may potentially bias our estimation. At the core of this problem is the extent to which the assumption on the error term in Eq. 5 is correct. We can think of the total error term as given by:
$$\begin{aligned} u_{it}=\alpha _t+\gamma _i+\epsilon _{it}, \end{aligned}$$
where \(\epsilon _{it}\) is iid across households and time.
We are concerned that the true model may in fact have interactive effects of the form:
$$\begin{aligned} u_{it}=\gamma _i+\omega _iF_t+\epsilon _{it}. \end{aligned}$$
where \(F_t\) is an aggregate effect that is scaled by some demographic variable, \(\omega _i\). Such a model is not directly estimable, but the presence of demographic variables offers a way to test the validity of this concern and, at least partially, to eliminate it. We can consider various proxies for \(\omega _i\) without necessarily requiring the unobserved trends \(F_t\) to vary over each individual. We can divide the sample into different groups (e.g. by age quartiles) and estimate a model under the following assumption:
$$\begin{aligned} u_{it}=\gamma _i+\omega (1(i \in Group ~ g))F_t+\epsilon _{it}, \end{aligned}$$
where \(\omega \) is now observable for each i and \(F_t\) is approximated by year-month specific indicator variables.
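A rough sketch of this group-specific time-effects regression is given below. It is only illustrative, with hypothetical file and variable names for the billing panel, and for a panel of this size one would in practice absorb the household fixed effects with a dedicated panel estimator rather than explicit dummies.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("billing_panel.csv")                       # assumed layout
df["log_kwh"] = np.log(df["kwh"])
df["age_group"] = pd.qcut(df["head_age"], 4, labels=False)  # proxy for omega_i
formula = ("log_kwh ~ climatesmart"            # post-adoption indicator
           " + C(hh_id)"                       # household fixed effects gamma_i
           " + C(month_year):C(age_group)")    # group-specific time effects F_t
fit = smf.ols(formula, data=df).fit(cov_type="cluster",
                                    cov_kwds={"groups": df["hh_id"]})
print(fit.params["climatesmart"])              # estimate of beta
```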
In Table 6, we present the coefficients from Eqs. 3 and 4, but with month and years controls interacted with an array of demographic characteristics, and each cell again represents the estimate of \(\beta \) from a separate regression. The results are consistent with the baseline difference-in-differences (zero effect) and first differences estimates (1.5–2.5% rebound).
Table 6 Robustness of model specifications to interactive trends. Columns correspond to different time-control interactions (average usage); each cell is the ClimateSmart coefficient from a separate regression with HH FEs and month-by-year FEs. Difference-in-differences specifications yield estimates near zero (e.g. −0.00864), while first-difference specifications yield 0.01528* and 0.01992**. *Significant at the 0.10 level, **Significant at the 0.05 level, ***Significant at the 0.01 level. Standard errors clustered at the HH level.
It is also possible that the true model has more than one interactive effect and may be multidimensional, with an error term that has an unknown factor structure. As an additional robustness check we have implemented the control variable approach described in Pesaran (2006) and Harding and Lamarche (2011). The result was also very similar to the baseline specification, indicating that our estimation is robust to a variety of interactive effects specifications.
A.2.2 Time Series Filtering
Finally, it is possible that the interactive effects specifications discussed above may not capture all the relevant heterogeneity. One such case is when the error term can be decomposed into two components: \(\epsilon_{it}\), which is iid across observations, and another component \(v_{it}\), which is non-stationary and exhibits seasonality and trending behavior. This case requires us to filter the time series of electricity consumption for each household. We use the Hodrick–Prescott (H–P) filter commonly used in macroeconomics. Consider some arbitrary household. To remove the trend from log electricity consumption for that household, \(ln(k_t)\), we decompose \(ln(k_t) = \tau _t + c_t\), where \(\tau _t\) is the trend and \(c_t\) is the cyclical component. \(\tau _t\) is estimated by solving
$$\begin{aligned} \min _{\{\tau _t\}} \, \sum _{t=1}^T (x_t -\tau _t)^2 + \mu \sum _{t=2}^{T-1} [(\tau _{t+1}-\tau _t)-(\tau _t-\tau _{t-1})]^2 \end{aligned}$$
The parameter \(\mu \) penalizes variation in the first difference of the trend and is set to 6.25, 1600, and 129,600 (Hodrick and Prescott 1997; Ravn and Uhlig 2002). We then use the filtered residual \(c_t = ln(k_t) - \tau _t\) as the new dependent variable in Eq. 5. Figure 5 shows the intuition behind the H–P filter for the time series of consumption for one arbitrary household; notice the effect of the choice of the smoothing parameter \(\mu \), which suggests that a small value of \(\mu \) is required in order to remove the cyclical component. It is important to note an important limitation of this approach: since the method is applied at the individual level, it will remove to a large degree any pre- and post-adoption trends in behavior (such as pre-adoption conservation and the disappearance of the "rebound effect" post-adoption). The individual filtering approach, however, allows us to focus on the immediate post-adoption period and remains reliable in detecting an immediate jump in consumption after the household enrolls in the CS program.
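A minimal illustration of this per-household filtering step, assuming one household's monthly consumption series in a hypothetical file, is:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

log_kwh = np.log(pd.read_csv("hh_monthly_kwh.csv")["kwh"])   # one household
for mu in (6.25, 1600, 129_600):
    cycle, trend = hpfilter(log_kwh, lamb=mu)    # ln(k_t) = tau_t + c_t
    print(mu, round(cycle.std(), 4))             # c_t is the new regressand
```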
Example of different smoothing parameters in the HP filter
Estimation of the dynamic effect of adoption in event time after filtering individual observations
In Fig. 6 we present the results for the event study conducted on residuals from different applications of the H–P filter with various degrees of smoothing \(\mu \). We see that even though the H–P filter removes some of the pre- and post-adoption trends, it continues to clearly identify a discontinuity in usage at the time of adoption. The effect is slightly diminished, but we still observe a 2% increase in consumption post-adoption.
A.3 Profile of Adopters
Given the surprising and robust behavioral response resulting from adoption, it is important to understand if customers may differ along predictable demographic dimensions. This has substantial managerial implications for program design and customer targeting. In order to investigate how selection into adoption is driven by observable characteristics of the households, we construct a household specific variable \(T_{i}\), which equals one if household i signed up for the CS program, and zero otherwise. We use households that never sign up (\(T_{i}=0\)) as our reference group and we formally identify the model by normalizing the corresponding coefficients. We wish to model the probability of adopting the CS program by household i conditional on observed covariates \(x_i\), \(Pr(T_{i}=1|x_i)\). We encounter one important technical limitation however. By construction, our sample is non-random. In fact our very data request from PG&E was not formulated as a random sample of all PG&E customers. Given the low number of CS program adopters relative to PG&E's large residential customer base this would have been impractical as adoption would have been a very rare event in a random sample. As such we chose to sample conditional on adoption status. The final sample contains a sizable proportion of CS program adopters and a very small but representative sample of the population of non-adopters subject to the restrictions on residence imposed earlier to achieve balance.
This sampling framework is usually referred to as "choice based sampling" or "retrospective sampling" since it uses the ex-post outcomes as part of the sampling frame. It is well known in this setting that estimation by maximum likelihood leads to inconsistent parameter estimates (Manski and Lerman 1977). While several approaches are available to address this issue, consistent estimates are typically obtained by pseudo-maximum likelihood where observations are weighted by a factor \(\mu _j=n_j/(NPr(T_j))\), where \(n_j\) corresponds to the observed sample in group j. N and \(Pr(T_j)\) however are population parameters denoting the total population of possible adopters and \(Pr(T_j)\) the unconditional probability of adoption in period j. These quantities are not observed in the sample (we cannot simply assume that the ratio of adopters to non-adopters from a short-run program corresponds to the respective population adoption ratios).
In order to avoid controversies over population priors we rely on a stronger functional form assumption and assume that \(Pr(T_{i}=1|x_i)\) can be written in a multiplicative intercept form (Hsieh et al. 1985). The logit model is a particular example of the multiplicative intercept form and thus we assume that:
$$\begin{aligned} Pr(T_{i}=1|x_i)=\frac{exp(c_1+x'_i\beta _1)}{1+exp(c_1+x'_i\beta _1)}, \end{aligned}$$
where \(\beta _1\) is a parameter vector which measures the extent to which the observed covariates explain adoption. Note that the coefficient \(\beta _0\) is normalized to 0. Thus, the results can be easily interpreted. For each covariate of interest a positive coefficient in \(\beta _1\) tells us that a particular regressor makes it more likely that a household with that characteristic will adopt. The estimated intercept coefficient, \(c_1\), is however inconsistently estimated and is a function of the unknown parameter, \(\mu \). This imposes some restrictions as it prevents us from computing marginal effects without imposing out-of-sample priors on this unobserved parameter.
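The sketch below illustrates this estimation strategy: a standard logit fitted to the retrospective sample, reading off only the slope coefficients (the intercept absorbs the unknown sampling factor and is not interpreted). The file and variable names are hypothetical stand-ins for the household covariates used in Table 7.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

hh = pd.read_csv("household_covariates.csv")          # one row per household
hh["log_avg_kwh"] = np.log(hh["avg_kwh"])
formula = ("adopted ~ log_avg_kwh + hh_income_80k_plus + hh_size"
           " + env_interest + green_living + community_involvement")
fit = smf.logit(formula, data=hh).fit(disp=False)
print(fit.params.drop("Intercept"))    # slopes interpretable; intercept is not
```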
With these econometric subtleties in mind, let us now turn our attention to Table 7. This table presents estimates of the slope coefficients of the adoption equation for different specifications with an increasing number of explanatory demographics. The results are very similar across different specifications. All else equal, adopters in our sample are younger, wealthier, and live in homes with a smaller number of inhabitants. The first two attributes act as one might expect. Since the adopters enrolled through the PG&E website, this shows that younger households are more likely to invest the effort in searching and finding environmental programs. Similarly, wealthier households appear more willing to incur the costs of search and the (albeit small) increase in electricity prices from enrollment. The mechanism for the importance of household size is open to interpretation. Perhaps there are costs to coordinating with many inhabitants. Autonomy may also play a role; one may view it less advantageous to enroll in an energy-related program when the energy use decisions are made by many different people.
Table 7 Logit model of ClimateSmart adoption. Slope estimates across specifications with an increasing set of demographics; recoverable entries include log(kWh) (−0.17302***, −0.17130*, −0.19701** across columns) and HH Income $80k+ (0.26492***), with additional controls for working woman, HH size, home age, Sqft 2500+, and home value $500k+ (other surviving entries: −0.20900 and −1.80536***), and a pseudo R-squared reported per column. Robust standard errors in parentheses. *Significant at the 0.10 level, **Significant at the 0.05 level, ***Significant at the 0.01 level.
Softer household characteristics also appear to be important, consistent with the findings of Costa and Kahn (2013). Adoption is strongly predicted by the Environmental Interest variable. This is consistent with the stylized model in Sect. 2, as this variable proxies for \(\delta \), the extent to which a household is aware of the social cost of carbon emissions. It is interesting to note, however, that Green Living is not a significant predictor of adoption and in fact has a negative sign. This may indicate complementarities across domains, whereby a household already involved to a substantial degree in other environmental activities is less likely to adopt a new one. An alternative interpretation is that these households prefer to conserve as part of their lifestyle, rather than purchasing conservation in the form of offsets from an external source. As expected, lifestyle variables related to a perceived interest in the outdoors or activities related to wildlife or camping are also important predictors of adoption. Their presence is likely to be correlated with the degree to which a household perceives carbon emissions and global warming to be a potential utility cost.
The extent to which a household is involved in the community and local charities also appears to be a strong predictor of adoption. This is also consistent with our model as it reflects overall awareness and concern for the local community. By contrast the propensity to contribute substantial amounts to charitable causes is a negative predictor for adoption. This indicates that adoption into the program is not seen as a charitable contribution or expression of altruism. This is not indicative of a contradiction since it is common to think of contributing to the community financially as being very different than contributing time or effort. The propensity for adoption is substantially higher among high income households. It is interesting to note that adoption does not appear to be driven by the level of education. The presence of children is insignificant as a driver of adoption. While the relatively weak impact of children on adoption may also be explained by the higher age in our sample as a consequence of the time series balance requirement, most of these households will have adult children. Perhaps it shows that they discount the welfare of future generations to a substantial degree, although there are a variety of possible explanations. We find no statistically significant differences between renters and owners. This may also be a consequence of our balance requirement since renters are more likely to be excluded from our sample due to their transitory dwelling patterns.
Ayal S, Gino F (2011) Honest rationales for dishonest behavior. In: Exploring the causes of good and evil. APA
Baker JS, McCarl BA, Murray BC, Rose SK, Alig RJ, Adams D, Latta G, Beach R, Daigneault A (2010) Net farm income and land use under a US greenhouse gas cap and trade. In: Agricultural and Applied Economics Association. Policy Issues 7: April 2010. p 5
Bloomberg (2012) State of the voluntary carbon market 2012. New Energy Finance, Ecosystem Marketplace
Bolderdijk JW, Steg L, Geller ES, Lehman P, Postmes T (2013) Comparing the effectiveness of monetary versus moral motives in environmental campaigning. Nat Clim Chang 3(4):413
Borenstein S (2015) A microeconomic framework for evaluating energy efficiency rebound and some implications. Energy J 36:1–21
Brown TR, Elobeid AE, Dumortier JR, Hayes DJ (2010) Market impact of domestic offset programs
Clayton S, Devine-Wright P, Stern PC, Whitmarsh L, Carrico A, Steg L, Swim J, Bonnes M (2015) Psychological research and global climate change. Nat Clim Chang 5(7):640
Costa D, Kahn M (2013) Energy conservation "nudges" and environmentalist ideology: evidence from a randomized residential electricity field experiment. J Eur Econ Assoc 11(3):680–702
DellaVigna S, List J, Malmendier U (2012) Testing for altruism and social pressure in charitable giving. Q J Econ 127(1):1–56
Ebeling F, Lotz S (2015) Domestic uptake of green energy promoted by opt-out tariffs. Nat Clim Chang 5(9):868–871
Effron D, Monin B (2010) Letting people off the hook: when do good deeds excuse transgressions? Pers Soc Psychol Bull 36:1618–1634
Gillingham K, Rapson D, Wagner G (2016) The rebound effect and energy efficiency policy. Rev Environ Econ Policy 10(1):68–88
Gneezy U, Rustichini A (2000) A fine is a price. J Legal Stud 29:1–18
González-Ramírez J, Kling CL, Valcu A (2012) An overview of carbon offsets from agriculture. Annu Rev Resour Econ 4(1):145–160
Graff-Zivin J, Lipper L (2008) Poverty, risk, and the supply of soil carbon sequestration. Environ Dev Econ 13(3):353–373
Harding M, Lamarche C (2011) Least squares estimation of a panel data model with multifactor error structure and endogenous covariates. Econ Lett 111(3):192–199
Herberich D, List J, Price M (2011) How many economists does it take to change a light bulb? A natural field experiment on technology adoption. Working Paper
Hodrick R, Prescott E (1997) Post war business cycles: an empirical investigation. J Money Credit Bank 29:1–16
Hsieh D, Manski C, McFadden D (1985) Estimation of response probabilities from augmented retrospective observations. J Am Stat Assoc 80:651–662
Jacobsen G (2010) Do environmental offsets increase demand for dirty goods? Evidence from residential electricity demand
Jacobsen G (2011) The Al Gore effect: an inconvenient truth and voluntary carbon offsets. J Environ Econ Manage 61:67–78
Jacobsen GD, Kotchen MJ, Vandenbergh MP (2012) The behavioral response to voluntary provision of an environmental public good: evidence from residential electricity demand. Eur Econ Rev 56(5):946–960
Jacobson L, Lalonde R, Sullivan D (1992) Earnings losses of displaced workers. Am Econ Rev 83:685–709
Kotchen M (2006) Green markets and private provision of public goods. J Polit Econ 114:816–834
Kotchen M (2009) Voluntary provision of public goods for bads: a theory of environmental offsets. Econ J 119:883–899
Kotchen M, Moore M (2007) Private provision of environmental public goods: household participation in green-electricity programs. J Environ Econ Manage 53:1–16
Kouchaki M (2011) Vicarious moral licensing: the influence of others' past moral actions on moral behavior. J Person Soc Psychol 101:702–715
Litvine D, Wüstenhagen R (2011) Helping "light green" consumers walk the talk: results of a behavioural intervention survey in the Swiss electricity market. Ecol Econ 70(3):462–474
Manski C, Lerman S (1977) The estimation of choice probabilities from choice based samples. Econometrica 45:1977–1988
Merritt A, Effron D, Monin B (2010) Moral self-licensing: when being good frees us to be bad. Soc Personal Psychol Compass 4(5):344–357
Monin B, Miller D (2001) Moral credentials and the expression of prejudice. J Personal Soc Psychol 81:33–43
Pesaran H (2006) Estimation and inference in large heterogeneous panels with a multifactor error structure. Econometrica 74(4):967–1012
Pichert D, Katsikopoulos KV (2008) Green defaults: information presentation and pro-environmental behaviour. J Environ Psychol 28(1):63–73
Ravn M, Uhlig H (2002) On adjusting the Hodrick–Prescott filter for the frequency of observations. Rev Econ Stat 84(2):371–376
© Springer Nature B.V. 2019
1. Department of Economics and Department of Statistics, University of California - Irvine, Irvine, USA
2. Department of Economics, University of California - Davis, Davis, USA
Harding, M. & Rapson, D. Environ Resource Econ (2019) 73: 923. https://doi.org/10.1007/s10640-018-0301-5
A new deep sparse autoencoder for community detection in complex networks
Rong Fei1,
Jingyuan Sha1,
Qingzheng Xu2,
Bo Hu3,
Kan Wang1 &
Shasha Li1
Feature dimension reduction in community detection is an important research topic in complex networks and has attracted many research efforts in recent years. However, most existing algorithms developed for this purpose rely on classical mechanisms, which may require lengthy experimentation, be time-consuming, and prove ineffective for complex networks. To this end, a novel deep sparse autoencoder for community detection, named DSACD, is proposed in this paper. In DSACD, a similarity matrix is constructed to reveal the indirect connections between nodes, and a deep sparse autoencoder based on unsupervised learning is designed to reduce the dimension and extract the feature structure of complex networks. During back propagation, L-BFGS avoids the calculation of the Hessian matrix, which increases the calculation speed. The performance of DSACD is validated on synthetic and real-world networks. Experimental results demonstrate the effectiveness of DSACD, and systematic comparisons with four algorithms confirm a significant improvement in terms of three indices: Fsame, NMI, and modularity Q. Finally, the collected received signal strength indication (RSSI) data set can be aggregated into 64 correct communities, which further confirms its usability in indoor location systems.
Community detection has great significance for the study of complicated systems and our daily life; meanwhile, it is also one of the important methods for understanding many network structures in the real world. In a network, community structure implies that some nodes are closely connected with each other but sparsely connected with other nodes. Community detection divides nodes into modules in the graph, so that the number of edges inside a module is larger than the number of edges between modules, with the topological structure of the graph as the source of information [1, 2].
At present, a variety of community detection algorithms have been proposed to explore the community structure of complex networks.
The community mining method LPA [3], based on label propagation, was proposed in 2007. It counts the labels of the adjacent nodes of each node, and the most frequent label is used as the new label of the node. The LPA is still a classic algorithm because it can process large networks, as its running time is linear in the network size, and the propagation of labels also avoids predefining the number of communities. In addition, the LPA allows a vertex to carry multiple labels [4]. The LPA is particularly suitable for large social networks with complex and overlapping communities [5]. Many improved LPAs have appeared, such as a new LPA parallelization scheme from a different perspective [5] and the local affinity propagation (LAP) algorithm with near-linear time and space complexities [6].
Classical clustering algorithms such as K-means are basic methods for solving the community discovery problem [7]. The speed of classical algorithms is fast enough, but their accuracy and stability need to be improved. When a similarity matrix is generated, only neighbor nodes are taken into account, and more distantly related nodes are not included in the scope of consideration. The large scale of social networks poses challenges from the viewpoint of clustering methods, and the high-dimensional similarity matrix of the network aggravates the decline in accuracy.
Deep learning is about learning multiple levels of representation and abstraction that help to make sense of data such as images, sound, and text. Deep learning has shown how hierarchies of features can be learned in an unsupervised manner directly from data. The idea is to learn the representation of data at different levels or aspects through a computational model with multi-layer networks [8–10].
Applications of deep learning algorithms have developed rapidly, in areas such as driverless vehicles, bioinformatics, and community detection, displaying excellent adaptability and practicality.
Chi-Hua Chen [11] proposed a deep learning model with good generalization performance to obtain a probability density function from real data based on the cumulative distribution function, which can be used to improve further analyses with game theory or queueing theory. He [12] also proposed a cell probe-based method to analyze cellular network signals and trained regression models for vehicle speed estimation, which is effective for cellular floating vehicle data.
In 2017, Shang et al. proposed the community detection algorithm based on a deep sparse autoencoder (CoDDA) [13], which reduces the dimension of the network similarity matrix by establishing a deep sparse autoencoder. A lower-dimensional matrix with a more obvious community structure is obtained. Finally, the K-means algorithm is used for clustering, and results with higher accuracy are obtained.
A recursive form is common in the back-propagation (BP) algorithm. The essence of the BP process is to minimize the reconstruction error, which can be cast as an optimization problem [14–17]. The problem is generally solved using nonlinear optimization methods, which include the gradient descent method [18], the nonlinear conjugate gradient method [19], and the quasi-Newton method. Each step of the quasi-Newton method involves only the calculation of the function value and the function gradient; in this way, the Hessian matrix calculation is effectively avoided [20].
Location-based services (LBS) have moved the wireless content industry closer to mass-market applications. WiFi is a common positioning signal in location-based services (LBS) [21, 22], and the signal intensity or spatiotemporal attributes are combined with positioning algorithms to form the positioning technology [23]. Recently, Wi-Fi fingerprinting positioning methods have often been combined with deep learning to improve on the accuracy of the classical KNN for indoor positioning [24].
Based on the deep sparse autoencoder and the quasi-Newton method, we construct a community detection architecture, which reduces the information loss of high-dimensional reduction. The proposed architecture, DSACD, first reduces the dimension of the high-dimensional matrix and then realizes community discovery through a deep sparse autoencoder. DSACD also improves the accuracy of the K-means algorithm. In the application, real community data sets for LBS are trained by the deep sparse autoencoder and optimized by the quasi-Newton method. The time spent in the process of community discovery is reduced to improve the efficiency of the algorithm while the accuracy is guaranteed.
The rest of this paper is organized as follows: Section 2 explains the relevant terms and algorithms. In Section 3, the experimental process is introduced. DSACD performs confirmatory experiments on multiple experimental sets, including several parameter experiments to optimize the algorithm, and the results are evaluated using several evaluation criteria. Section 4 gives a discussion of the results and an application. In Section 5, we consider ideas for future development based on these results.
We construct a deep sparse autoencoder with the L-BFGS method for community detection to accelerate the process of clustering community structures in large data sets.
In DSACD, a similarity matrix is constructed to show the indirect connections between nodes, and a deep sparse autoencoder is built with the L-BFGS algorithm based on unsupervised learning to reduce the dimension and extract the feature structure of the network. Comparison experiments with four algorithms show a significant improvement of DSACD in terms of three indices: Fsame, NMI, and modularity Q.
We design community detection experiments on a location-based service network. The results show that DSACD can divide it into 64 communities, which demonstrates its practicability.
The definitions of DSACD studied in this paper are given as follows.
Matrix preprocessing
Let G=(V,E) be the graph, where V=v1,v2,...,vn represents the set of nodes (vertices) and E represents the set of edges. Let N(u) be the set of neighbor nodes of node u. Let the matrix A=[aij]n×n be the adjacency matrix of graph G, whose elements indicate whether there is an edge between two nodes in G. For example, aij equal to 1 indicates that the edge eij exists, while aij equal to 0 indicates that there is no edge eij.
For a small graph, a clustering algorithm such as K-means can directly compute the community relationship from the adjacency matrix A, and the result is fairly accurate (see Section 3). However, the adjacency matrix records only the relationship between adjacent nodes and does not express the relationship between a node and its neighbors' neighbors or even more distant nodes. Any two nodes in a community, even if they are not connected to each other, may still belong to the same community. Therefore, if the adjacency matrix is directly used as the similarity matrix for community partitioning, the complete community relationship cannot be reflected, and if the adjacency matrix is directly clustered, information will be lost.
In this paper, a similarity matrix that can express the non-adjacent relationships is calculated by transforming the adjacency matrix. Based on this, the definitions are given as follows.
Definition 1
Let a network graph be G=(V,E) with ∀v∈V. If the length of the shortest path from node vi to another node vj is s, then node vi can reach node vj through s hops. That is, the hop count is the number of least traversed edges from node vi to vj.
As shown in the network of Fig. 1, node v1 reaches v2, v3, or v6 after one hop and arrives at v4 or v5 after two hops. For instance, from v1→v2, the number of least traversed edges is 1, so the hop count is 1; from v1→v5, the number of least traversed edges is 2, and the hop count is 2.
Small network schematic
In G=(V,E), the similarity between two points vi and vj is defined as formula 1:
$$ Sim(i,j)=e^{\tau (1-s)},s\geq 1,\tau \in (0,1) $$
where s is the hop count from vi to vj and τ is the attenuation factor. The node similarity decreases as the hop count s increases, τ controls the attenuation rate of the similarity, and the node similarity decays faster as τ increases.
In G=(V,E), the similarity matrix S=[sij]n×n is calculated from the node similarity between every two points in G, where sij=Sim(vi,vj), vi,vj∈V.
The similarity matrix obtained by processing the adjacency matrix with the hop count and the attenuation factor better reflects the relationship between distant nodes in the high-dimensional matrix, and the results of community discovery are also improved. Obviously, the selection of the hop count threshold and the attenuation factor has an important impact on the similarity matrix. The selection of the hop count is obtained from the parameter learning process, which is explained in Section 3.6. Section 3 of this paper sets up experiments on these two parameters to explore their impact on the results.
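As a concrete illustration, the following sketch (our own, not the authors' implementation; NetworkX and NumPy are assumed to be available, and the parameter names are illustrative) builds the hop-based similarity matrix of formula 1, assigning similarity 0 to node pairs farther apart than the hop threshold.

```python
import networkx as nx
import numpy as np

def similarity_matrix(G, hop_threshold=3, tau=0.5):
    # Illustrative sketch of formula 1: Sim(i, j) = exp(tau * (1 - s)),
    # where s is the shortest-path hop count (truncated beyond the threshold).
    nodes = list(G.nodes())
    index = {v: i for i, v in enumerate(nodes)}
    S = np.zeros((len(nodes), len(nodes)))
    for v, dists in nx.all_pairs_shortest_path_length(G, cutoff=hop_threshold):
        for u, s in dists.items():
            if s >= 1:                    # formula 1 is defined for s >= 1
                S[index[v], index[u]] = np.exp(tau * (1 - s))
    np.fill_diagonal(S, 1.0)              # a node is taken as fully similar to itself
    return S
```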
Deep sparse autoencoder
Based on a sparse autoencoder, the structure of deep sparse autoencoder is shown in Fig. 2.
Deep sparse autoencoder structure
The output of the previous layer, that is, the code h after dimension reduction, serves as the input of the next layer, as shown in Fig. 2. The dimensions are thus reduced layer by layer.
An autoencoder [25] is an unsupervised artificial neural network that learns an efficient encoding of data to express its features. The typical usage of the AE is to reduce dimensionality.
As shown in Figs. 3 and 4, given an unlabeled data set \(\{x^{(i)}\}_{i=1}^{m}\), the autoencoder learns a nonlinear code through a two-layer neural network (the input layer is not counted) to express the original data. The training process uses the back-propagation algorithm [25], and training ends when the difference between the data reconstructed from the learned nonlinear code and the original data is minimized.
Structure of the automatic encoder
Process of the automatic encoder
The autoencoder is composed of two parts: the encoder and the decoder. The encoding process goes from the input layer to the hidden layer. The input data are subjected to dimensionality reduction to form a code, which is the output of the encoder; the code is then used as the input of the decoder for decoding, and the decoded result, which has the same dimension as the input data, is the output of the decoder. After the output is obtained, it is compared with the input, the reconstruction error is calculated, and the back-propagation algorithm is used to adjust the weight matrices of the autoencoder. The reconstruction error is calculated again, and the process iterates until the number of iterations is reached or the reconstruction error falls within the specified range, at which point the output is equal or close to the input. The process of training a neural network with a back-propagation algorithm is also referred to as minimization of the reconstruction error. Finally, the output of the encoder, i.e., the encoding, is taken as the output of the autoencoder.
Specific steps are as follows:
Let X be the similarity matrix of the network graph G with dimension n, used as the input matrix, where xi∈(0,1) and X∈R(n×n). xi∈R(n×1) represents the ith column vector of X, W1∈R(d×n) is the weight matrix of the input layer [26], and W2∈R(n×d) is the weight matrix of the hidden layer [27].
b∈R(d×1) is the offset column vector of the hidden layer [27].
c∈R(n×1) is the offset column vector of the input layer [27].
The output h of the coding layer is obtained by formula 2:
$$ h_{i}=\tau (W_{1}x_{i}+b) $$
where hi∈R(d×1) is the encoded ith column vector. τ is the activation function, and the sigmoid function [28] is chosen as the activation function τ, which is shown by formula 3.
$$ f(z)=\frac{1}{1+e^{-z}} $$
In formula 3, z=WTX.
The matrix h obtained at this time is a matrix after dimensionality reduction.
The output z of the decoding layer is obtained by formula 4:
$$ z_{i}=\tau (W_{2}h_{i}+c) $$
where zi∈R(n×1) is the decoded ith column vector and τ is the activation function. The resulting matrix z has the same dimension as the input matrix X.
Combining the formula 2 with the formula 4, the reconstruction error is obtained:
$$ \text{error}=\sum_{i=1}^{n}\left \| \tau(W_{2}\tau(W_{1}x_{i}+b)+c)-x_{i} \right \|_{2}^{2} $$
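The encode/decode pass of formulas 2 to 5 can be written compactly as follows (a minimal sketch of our own; the weight matrices and bias vectors are assumed to be already initialized, and the columns of X are the samples xi).

```python
import numpy as np

def sigmoid(z):
    # activation function of formula 3
    return 1.0 / (1.0 + np.exp(-z))

def reconstruction_error(X, W1, W2, b, c):
    # X: n x n input (columns are samples); W1: d x n; W2: n x d; b: d x 1; c: n x 1
    H = sigmoid(W1 @ X + b)       # encoder output h_i, formula 2
    Z = sigmoid(W2 @ H + c)       # decoder output z_i, formula 4
    return np.sum((Z - X) ** 2)   # squared reconstruction error, formula 5
```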
When the activation function is sigmoid, the mapping range of the neurons is (0,1). When the output is close to 1, it is called active, and when the output is close to 0, it is called inactive [29]. In a sparse autoencoder, sparseness restrictions are added to the hidden layer. A sparse restriction means that neurons are suppressed most of the time, that is, the output is close to 0. A sparse expression has been successfully applied to many applications, such as target recognition [30,31], speech recognition [32], and behavior recognition [33]. The sparsity calculation method is as follows:
First, the average value \(\widehat {\rho }_{j}\) of the output of the coding layer is calculated, where hj(x) denotes the output value of the jth neuron (hj) of the hidden layer when the input is x [34]. The average value of the neuron output in the hidden layer is:
$$ \widehat{\rho }_{j}=\frac{1}{m}\sum_{i=1}^{m}[h_{j}(x_{i})] $$
To achieve sparsity, it is necessary to add a sparsity limit, which is achieved by:
$$ \widehat{\rho }_{j}=\rho $$
where ρ is the sparsity parameter, generally, ρ≪1, such as 0.05. When formula 7 is satisfied, the activation value of the hidden layer neurons is mostly close to 0.
A sparsity limit is added to the reconstruction error in the form of a penalty term that punishes \(\widehat {\rho }_{j}\) for deviating from ρ. The penalty function is as follows:
$$ \sum_{j=1}^{d} \rho \log \frac{\rho}{\widehat{\rho}_{j}} + (1-\rho) \log \frac{1-\rho}{1-\widehat{\rho}_{j}} $$
where d represents the number of hidden layer neurons. This formula is based on Kullback-Leibler divergence (KL [35]), so it can also be written as formula 9:
$$ \sum_{j=1}^{d} KL(\rho \parallel \widehat{\rho}_{j}) $$
In summary, formula 8 and formula 9 are combined to obtain the following:
$$ KL(\rho \parallel \widehat{\rho}_{j})= \rho \log \frac{\rho}{\widehat{\rho}_{j}} + (1-\rho)\log\frac{1-\rho}{1-\widehat{\rho}_{j}} $$
When \(\widehat {\rho }_{j}=\rho \), the penalty function \(KL(\rho \parallel \widehat {\rho }_{j})\) is 0. When \(\widehat {\rho }_{j}\) moves away from ρ, the function monotonically increases and tends to infinity, as shown in Fig. 5:
KL divergence function
Therefore, by minimizing the sparse penalty term, that is, formula 10, \(\widehat {\rho }_{j}\) is kept close to ρ. At this point, the reconstruction error is updated to formula 11:
$$ \text{error}=\sum_{i=1}^{n}\left \| \tau(W_{2}\tau(W_{1}x_{i}+b)+c)-x_{i} \right \|_{2}^{2} + \beta \sum_{j=1}^{d} KL(\rho \parallel \widehat{\rho}_{j}) $$
where β is the weight of the sparse penalty factor.
Training the sparse autoencoder minimizes the reconstruction error of formula 11 through the back-propagation algorithm.
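The sketch below puts formulas 2 to 11 together and minimizes the penalized reconstruction error with SciPy's L-BFGS-B routine; this is an assumption on our part (the paper uses its own L-BFGS implementation), and for brevity the sketch relies on numerical gradients, so it is only practical for small matrices.

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_sparse_autoencoder(X, d, rho=0.05, beta=3.0, max_iter=200, seed=0):
    # Illustrative sketch: X is an m x n matrix whose n columns are the samples
    # (m = n for the first layer); d is the hidden-layer size. Returns the n x d codes.
    m, n = X.shape
    shapes = [(d, m), (m, d), (d, 1), (m, 1)]              # W1, W2, b, c
    sizes = [int(np.prod(s)) for s in shapes]
    splits = np.cumsum(sizes)[:-1]

    def unpack(theta):
        return [p.reshape(s) for p, s in zip(np.split(theta, splits), shapes)]

    def objective(theta):
        W1, W2, b, c = unpack(theta)
        H = sigmoid(W1 @ X + b)                            # formula 2
        Z = sigmoid(W2 @ H + c)                            # formula 4
        rho_hat = H.mean(axis=1, keepdims=True)            # formula 6
        kl = np.sum(rho * np.log(rho / rho_hat)
                    + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
        return np.sum((Z - X) ** 2) + beta * kl            # formula 11

    theta0 = np.random.default_rng(seed).normal(scale=0.01, size=sum(sizes))
    result = minimize(objective, theta0, method="L-BFGS-B",
                      options={"maxiter": max_iter})
    W1, _, b, _ = unpack(result.x)
    return sigmoid(W1 @ X + b).T                           # n x d codes, one row per sample
```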
The deep sparse autoencoder for community detection
Based on the deep sparse autoencoder shown in Fig. 2, the data are preprocessed first, and the similarity matrix S0∈R(n×n) is obtained by formula 1. The similarity matrix is used as the input of the deep sparse autoencoder; then, the number of layers T of the deep sparse autoencoder is set along with the number of nodes per layer {d0,d1,d2,⋯,dT∣d0=n,d0>d1>d2>⋯>dT}. The similarity matrix S0∈R(n×n) is fed into the sparse autoencoder whose hidden layer has d1 nodes as the input data of the first layer. After the first layer of training, the dimension-reduced matrix \(\phantom {\dot {i}\!}S_{1} \in R^{(n \times d_{1})}\) is obtained; then, S1 is input into the second layer of the deep sparse autoencoder and reduced again to obtain \(\phantom {\dot {i}\!}S_{2} \in R^{(n \times d_{2})}\), and so on until the last layer. The low-dimensional feature matrix \(\phantom {\dot {i}\!}S_{T} \in R^{(n \times d_{T})}\) is obtained, and finally, the communities are obtained by K-means clustering. See Algorithms 1 and 2 for the detailed process.
In Algorithm 1, the hop count threshold S, the attenuation factor σ, and formula 1 are used to compute the similarity matrix of A∈R(n×n). Algorithm 1 obtains the similarity matrix by computing the similarity of each node x with the other nodes in V.
Algorithm 2 uses the deep sparse autoencoder with L-BFGS, in which the number of layers is T, to reduce the dimension of the similarity matrix; the features are then extracted, and the low-dimensional characteristic matrix \(\phantom {\dot {i}\!}S_{T} \in R^{(n \times d_{T})}\) is obtained. Algorithm 2 is thus used to reduce the similarity matrix and obtain the features.
The K-means algorithm is then applied to ST to obtain the clustering result Coms={C1,C2,⋯,Ck}, which is returned.
In the proposed algorithm, the inputs include the adjacency matrix A∈R(n×n) of G=(V,E), the number of communities k, the hop count threshold S, the attenuation factor σ, the number of layers T of the deep sparse autoencoder, and the number of nodes in every layer dt={t=0,1,2,⋯,T∣d0>d1>⋯>dT}.
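A high-level sketch of the overall pipeline (greedy layer-wise reduction followed by K-means clustering) could look as follows; `train_layer` stands for a per-layer trainer such as the `train_sparse_autoencoder` sketch above, and scikit-learn is assumed to be available for K-means.

```python
from sklearn.cluster import KMeans

def dsacd(S0, layer_dims, k, train_layer):
    # Illustrative sketch: S0 is the n x n similarity matrix, layer_dims is
    # [d1, ..., dT] with decreasing sizes, k is the number of communities, and
    # train_layer(X, d) trains one sparse autoencoder (columns of X are samples)
    # and returns the n x d codes.
    codes = S0                               # row i describes node i
    for d in layer_dims:
        codes = train_layer(codes.T, d)      # feed columns-as-samples, get n x d codes back
    return KMeans(n_clusters=k, n_init=10).fit_predict(codes)
```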
Experiments and analyses
Since this experiment is a test of the community detection algorithm, the ground-truth communities are selected for verification, so that the accuracy of the algorithm can be analyzed and verified accurately.
The hardware environment of our experiment is as follows: processor Intel Xeon 2.10 GHz E5-2683 v4, memory 64GB 2400 MHz, operating system Windows Server 2012 R2 standard, IDE: MATLAB 2015 (R2015b).
This experiment used four real data sets: Strike [36], Football [37], LiveJournal [38], and Orkut [39]. Among them, Strike is a relationship network of 24 strikers at a wood processing plant. Edges are defined by the frequency of discussion of strike topics between two people: if the frequency is high (there are specific criteria for evaluation during the investigation, not detailed here), then a connection is established. Football is the timetable of the American Football Cup (FBS) held by the American College Sports Association (NCAA) in 2006. In the NCAA relationship network, if two teams played a game, a connection is established.
LiveJournal is a free online blogging community where users can add friends. LiveJournal can create groups. When collecting community information, the software classifies it according to cultural background, entertainment preferences, sports, games, lifestyle, technology, etc.
Orkut is a social service network launched by Google. Friend relationship and group of friends can be constructed.
On the social network, nearly 4 million points and 30 million edges were extracted, and 8 communities with the largest number of nodes were selected to conduct experiments as data sets. Detailed information on each experimental set is shown in Tables 1 and 2.
Table 1 Data set information
Table 2 Number of layers of deep sparse autoencoder in different data sets
Evaluation index
To determine whether the clustering result is accurate, it is necessary to evaluate the clustering results Coms={C1,C2,⋯,Ck}. The evaluation methods selected are Fsame [3] and NMI. Both are evaluated against the real communities GroundTruth={C1′,C2′,⋯,Cl′}, where l is the true number of communities. Moreover, Q [39,40] is used to evaluate the quality of the community structure.
Evaluation standard Fsame:
The community evaluation standard Fsame is obtained by matching each real community with the detected community it overlaps most (and vice versa) and averaging these overlaps. The formula is as follows:
$$ F_{\text{same}}=\frac{1}{2n}\left(\sum_{i=1}^{k} \max_{j}\left | C_{i} \cap C_{j}' \right | + \sum_{j=1}^{t} \max_{i} \left | C_{i} \cap C_{j}' \right |\right) $$
where in the graph G, the number of nodes is n.
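A direct reading of formula 12 in code is shown below (our own helper, not from the paper; communities are represented as Python sets of node identifiers).

```python
def f_same(found, truth, n):
    # F_same of formula 12; found and truth are lists of node sets, n is the node count.
    a = sum(max(len(c & t) for t in truth) for c in found)
    b = sum(max(len(c & t) for c in found) for t in truth)
    return (a + b) / (2.0 * n)
```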
Evaluation standard NMI:
The NMI is the normalized mutual information. The formula is as follows:
$$ NMI(C,C')=\frac{-2 \sum_{i=1}^{C} \sum_{j=1}^{C'} N_{ij} \log\left(\frac{N_{ij} N}{N_{i \cdot} N_{\cdot j}}\right)}{\sum_{i=1}^{C}N_{i \cdot} \log\left(\frac{N_{i \cdot}}{N}\right) + \sum_{j=1}^{C'}N_{\cdot j} \log\left(\frac{N_{\cdot j}}{N}\right)} $$
where N is the confusion matrix, whose rows represent the real communities and whose columns represent the communities found. Nij represents the number of nodes shared by the real community Ci and the discovered community Cj′. N·j represents the sum of all the elements in column j, and Ni· represents the sum of all the elements in row i. If the discovered communities are in full agreement with the real communities [41], the NMI value is 1. If the discovered communities are completely different from the real communities, the NMI value is 0.
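Formula 13 can be evaluated directly from the confusion matrix, for example as in the following sketch (our own helper; nonzero row and column sums are assumed).

```python
import numpy as np

def nmi(N):
    # NMI of formula 13; rows of N are real communities, columns are detected ones.
    N = np.asarray(N, dtype=float)
    total = N.sum()
    row, col = N.sum(axis=1), N.sum(axis=0)
    num = sum(N[i, j] * np.log(N[i, j] * total / (row[i] * col[j]))
              for i in range(N.shape[0]) for j in range(N.shape[1]) if N[i, j] > 0)
    den = (row * np.log(row / total)).sum() + (col * np.log(col / total)).sum()
    return -2.0 * num / den
```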
Evaluation standard Q:
Modularity Q is a measure of how well a community is found. The formula is as follows:
$$ Q= \sum_{i=1}^{k} \left(\frac{E_{i}^{in}}{m} - \left(\frac{2E_{i}^{in} + E_{i}^{out}}{2m}\right)^{2}\right) $$
where \(E_{i}^{in}\) is the number of edges inside community Ci, \(E_{i}^{out}\) is the number of edges connecting Ci to the rest of the graph, and m is the total number of edges in graph G.
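Formula 14 can be computed community by community from the edge counts, for instance as in this sketch (our own helper; NetworkX is assumed, and communities are node sets).

```python
import networkx as nx

def modularity_q(G, communities):
    # Modularity Q of formula 14; communities is a list of node sets.
    m = G.number_of_edges()
    q = 0.0
    for c in communities:
        e_in = G.subgraph(c).number_of_edges()                         # E_i^in
        e_out = sum(1 for u, v in G.edges(c) if (u in c) != (v in c))  # E_i^out
        q += e_in / m - ((2 * e_in + e_out) / (2 * m)) ** 2
    return q
```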
Analysis experiments
The experiment consists of four parts: the volatility exploration experiment based on the DSACD, the comparison experiment with other algorithms, the parameter experiment, and the visualization experiment. The volatility exploration experiment shows the fluctuation of our algorithm on different data sets, which explains the stability of our algorithm on large data sets. We compare DSACD with CoDDA and K-means based on three performance evaluation standards: Fsame, NMI, and modularity Q. The parameter experiment explains the result of the parameter selection. Finally, we use the visualization experiment to show the clustering results.
Volatility exploration analysis
According to Algorithm 1, community discovery was performed on the four data sets, and the results were evaluated using Fsame, NMI, and Q. However, since the selection of the initial centers of the K-means algorithm is random, and the weight matrices of the hidden layer and the output layer of the deep sparse autoencoder are also initialized with random numbers, the proposed algorithm is stochastic. To capture the smoothness of the results, this paper investigated the fluctuation of the data, and the results are displayed in Fig. 6.
Clustering results for the Strike dataset (left) and the Football dataset (right)
The variance of each data set is shown in Table 3.
Table 3 Fluctuation variance of the proposed algorithm
By performing 100 experiments on the Strike data set and the Football data set, Fig. 6 and Table 3 show that the clustering results have volatility. Taking the NMI value as an example, the small data set [36] has a variance of 26.46, and the larger data set [37] has a variance of 0.96, showing that multiple experiments are needed in small data sets to reduce the impact of fluctuations. In addition, the variance will decrease with the increase in the data set. This result also indicates that the algorithm has higher stability on the large data set, and the repetition number of the different experimental sets can be flexibly changed.
Table 4 describes the comparison of the parametric experimental cluster results. The table shows that the proposed deep sparse autoencoder for community detection can significantly improve the clustering results and quality.
Table 4 The comparison of parametric experimental cluster results
Algorithm comparison
In this experiment, the K-means algorithm, direct clustering of the similarity matrix, the CoDDA algorithm, and the DSACD were compared, and the NMI value was used for evaluation. The hop count threshold of the CoDDA algorithm, the attenuation factor, and the layer settings of the deep sparse autoencoder use the optimal values in Table 5. Table 6 shows the experimental results.
Table 5 Parameter comparison before and after in parameter experiments
Table 6 The analysis of community detection results
Note: the number of iterations of the Football and Strike datasets is 100, and the number of iterations of the LiveJournal dataset is 5.
As Table 6 shows, the DSACD has higher clustering accuracy and clustering quality; with this choice of back-propagation algorithm, both the CoDDA algorithm and the DSACD achieve high precision, which is consistent with the results of the paper [25]. To compare the differences, Table 7 lists several error values for the last iteration of the two back-propagation algorithms. The results show that the CoDDA algorithm reduced the error to 17 in the process of minimizing the reconstruction error, whereas the DSACD finally decreases it to 7.9. Both algorithms provide good performance.
Table 7 Reconstruction error table of CoDDA and DSACD
Finally, the DSACD is compared with the LPA [3] in Table 8, which shows that the DSACD is significantly better than the LPA.
Table 8 Comparison with other community discovery algorithm cluster results
However, due to the characteristics of deep learning, the method requires more time during training, as shown in Table 9. On large data sets, the computing time grows exponentially. The weakness of the CoDDA is also shown: its calculation time is much longer than that of the DSACD. The CoDDA requires up to a week or more to process a larger matrix [13]. In addition, although the calculation time of the DSACD is small, it requires a large amount of memory: at least 128 GB for a network of tens of thousands of nodes, whereas the CoDDA can be run on a computer with a normal configuration.
Table 9 Time comparison with other community detection algorithms
Parameter analysis
The deep sparse autoencoder for community detection (DSACD) contains three important parameters: the hop threshold (S) in the similarity matrix, the attenuation factor (σ), and the number of layers of the deep sparse autoencoder (T). These three parameters have a direct impact on the clustering results. This section sets up experiments to find the optimal parameters. The experimental procedure is as follows. First, a value is preselected for each parameter. Then, experiments are performed in the order of the hop count threshold, the attenuation factor, and the number of layers of the deep sparse autoencoder, and each experiment is repeated 5 or 100 times. The optimal results are then used as the parameters in the next experiments. After the first round of experiments, the three optimal values obtained are reused as the experimental input values to further adjust the parameters. After the end of the second round, the optimal parameters obtained are output.
Each parameter value is initialized as s=1, sigma=0.1, and T=1. The selection of each parameter is random during initialization, and the minimum value is selected as the starting parameter.
The first round of results from the above table is shown in Fig. 7.
The first-round experimental results of the parameter experiment
Figure 7 shows that the proposed deep sparse autoencoder for community detection has a better clustering effect than the K-means clustering algorithm, but the significance of deep learning on the Strike and Football datasets does not seem obvious. The clustering results of the proposed deep sparse autoencoder for community detection and the clustering results of the similarity matrix are similar, and the gap is not obvious in the parameter experiments on the attenuation factor or the number of layers. However, after a round of experiments on big data sets, the advantages are already evident.
The second-round results are shown in Fig. 8.
The second-round experimental results of the parameter experiment
After the second-round parameter experiment, we find that the similarity matrix clustering result of the Strike dataset is the best. The proposed deep sparse autoencoder for community detection does not improve the clustering result in the process of deep learning but instead decreases it, as shown in Fig. 8c. Therefore, on small datasets, the similarity matrix alone is used to process the adjacency matrix of the graph.
As shown by the Football dataset, the proposed DSACD slightly improves the clustering results in the process of deep learning, and the highest value is obtained by the proposed deep sparse autoencoder for community detection. The second round of results is significantly better in clustering quality than the first round.
Meanwhile, as shown by the LiveJournal dataset, the accuracy of the proposed DSACD is significantly improved on the big data set. After deep learning, the NMI value increases from 0.7111 to 0.8171, an improvement of approximately 13%. On the other hand, the top NMI value gradually increases during the course of the parameter experiment, which reflects the necessity and superiority of deep learning.
Table 5 lists the values of the parameters S, σ, and T before and after the parameter experiments on the four data sets. For Strike and Football, the average value of every parameter is determined over 100 repeated experiments, while for LiveJournal the average value of every parameter needs only 5 repetitions. The numbers of repetitions are chosen to allow comparison with the CoDDA under the same parameter standard.
Visualization results
This experiment visually compares the real dataset (Ground-Truth), the K-means algorithm, the hop-based clustering method (hop), CoDDA, and the DSACD. Intuitively, the cluster assignments of the different communities are observed and evaluated. Figures 9 and 10 show the results. The same color represents the same community, and different colors represent different communities.
Comparison of the actual results of the strike dataset and different cluster results
Comparison of the true results of the football dataset and different clustering results
As seen from Fig. 9, the cluster results of the K-means algorithm are not accurate: the green communities are basically clustered into the yellow communities, which obviously does not conform to the real situation. Both the hop-based algorithm and the deep sparse autoencoder-based algorithm (DSACD) produce accurate results, which are in good agreement with the real-world results [38], indicating the accuracy of the algorithm. Furthermore, because the results of the hop-based algorithm (hop) and the deep sparse autoencoder algorithm (DSACD) are consistent, deep learning adds little on small data sets; if the parameters are not properly selected, information may even be lost. On such data sets, attention should be focused on the hop count threshold and the attenuation factor to improve the clustering effect.
As Fig. 10 shows, the K-means algorithm cannot cluster the 180-node dataset accurately. Note that the K-means algorithm can be used on small graphs but fails on a slightly more complex network, because the adjacency matrix still carries much information that leads to unclear community boundaries. After adding the similarity matrix processing, nearby nodes are connected, so the clustering quality is significantly improved. For this dataset, because the community structure is very obvious, the similarity matrix can obtain good clustering results, and the community structure is initially recovered but is still confused for communities that are close to each other. After deep learning, clustering of the communities with an obvious community structure is almost entirely successful, and only a few nodes without a clear community structure still fail to cluster.
According to Fig. 11, there are 8 communities in the LiveJournal dataset, and a monster community appears in the K-means clustering. The hop-based processing [13] can successfully cluster nodes with an obvious community structure, but two communities with high similarity, that is, with numerous edges between them, cannot be handled correctly, and the connection between two closely related communities can easily be clustered into a third community. As shown in Fig. 12, the yellow community section should be green. In addition, for nodes with fewer neighbors, the green node group on the left side of the figure should be pink, but it is not clustered successfully. After learning the network features, the left green node group and the pink node group are successfully merged in the DSACD diagram. The clustering accuracy is further improved for the green node community, and finally, the communities with an obvious community structure are clustered.
The comparison of real results about LiveJournal datasets and different cluster results
Comparison of the CoDDA and the DSACD about the LiveJournal dataset
The experiments in Section 3.5 compared the time and results of DSACD and CoDDA. To visualize the results of the two algorithms, the LiveJournal dataset is extracted, and the results of the two algorithms are compared visually. For the deep sparse autoencoder for community detection, except for the four communities with a high coupling degree in the upper right and upper left, the other four communities are clustered accurately. The same situation appears in the CoDDA: four communities are obviously clustered successfully. At the same time, some mistakes emerge in the course of clustering, such as one community being split into two communities and two communities being merged into one. Compared comprehensively, the effects of the two algorithms are close.
The clustering method based on the K-means algorithm is random, especially on small datasets, because the boundary between two communities is not easy to judge. The clustering results change with the selection of the initial points; in particular, the selection of influential points directly affects the clustering results. For small data sets, the results need to be averaged over many runs, and the final results converge to a certain value. With the increase in dataset size, the size of community groups also expands, and the proportion of nodes in the border area relative to the entire graph decreases, so the clustering results tend to be stable.
In the community detection algorithm based on the deep sparse autoencoder and L-BFGS, parameters need to be selected. Among these parameters, the hop threshold of a small-scale network is smaller: because the size of the community is small, the influence of the relationship between nodes is smaller. An appropriate hop threshold can be taken from 2 to 3, and the calculation requires less time. For a large-scale network, the threshold of hops increases correspondingly. In this case, the community structure is obvious, the scale is large, and the relationship between nodes is complex, so the corresponding influence increases. The threshold of hops can be taken from 6 to 8. At the same time, large-scale datasets need to be reduced several times to improve the accuracy of the community feature extraction. However, no parameter should be too large or too small; otherwise, it will lead to data redundancy or missing information. This also demonstrates the necessity of conducting the parameter experiment.
When compared with other algorithms or with itself, the DSACD has higher accuracy. The similarity matrix plays the dominant role on small data sets. On large datasets, a further dimension reduction based on the deep sparse autoencoder is needed for feature extraction. In the process of training, either the CoDDA or the DSACD approach can be used for back-propagation. The CoDDA has the characteristics that it does not need to calculate the Hessian matrix and saves memory, but it takes a long time. The DSACD is characterized by more accurate calculation, but it requires more memory; on large data sets, the program may crash due to insufficient memory, so it is necessary to determine in advance whether the hardware configuration meets the requirements. The accuracy of the DSACD is slightly higher than the accuracy of the CoDDA algorithm.
Through the visualization software [42], it can be seen that the K-means clustering can hardly separate the communities, resulting in the emergence of a giant community, the monster community. After calculating the similarity matrix, the community structure appears. Finally, dimension reduction by the deep sparse autoencoder can separate similar communities more accurately and further improve the clustering accuracy.
Application on the indoor positioning system
In this section, an application test with the benchmarking data of our indoor positioning system is designed. Figure 13 depicts a public place in our library, in which the total area is 100 m2 and the measured area is 64 m2. Four APs are installed in the four corners, and 64 points are set in the place. There are 9379 RSSI records collected by a smartphone (MI Note 2). Our DSACD can be used to gather 64 communities, and then the distance between every point and each AP can be obtained by the log-distance path loss model [43]. Formula 15 is a logarithmic distance ranging model for indoor wireless signal transmission.
$$ RSSI(d)_{dB}=RSSI(d_{0})-10\beta\lg\left(\frac{d}{d_{0}}\right)+\epsilon $$
The test scenario in our library
In formula 15, RSSI(d0) represents the signal intensity at the reference distance d0 between the AP and the signal source, and ε is a random variable that obeys a normal distribution (ε∼N(0,σdB2)). β represents the path loss factor, which is usually set to 3 or 4 for indoor environments.
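Ignoring the noise term, formula 15 can be inverted to estimate the distance from a measured RSSI value, as in the sketch below (our own helper; the reference values rssi_d0, d0, and beta are assumptions chosen only for illustration).

```python
def estimate_distance(rssi, rssi_d0=-40.0, d0=1.0, beta=3.0):
    # Invert formula 15 (noise term dropped):
    # d = d0 * 10 ** ((RSSI(d0) - RSSI(d)) / (10 * beta)).
    return d0 * 10 ** ((rssi_d0 - rssi) / (10.0 * beta))
```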
Suppose the distances between the two APs and the signal source are d1 and d2, and the signal intensity difference between them is ΔdB.
$$ RSSI(d_{1})=RSSI(d_{0})-10\beta \lg\left(\frac{d_{1}}{d_{0}}\right)+\epsilon_{1} $$
$$ RSSI(d_{1})+\Delta dB=RSSI(d_{0})-10\beta\lg\left(\frac{d_{2}}{d_{0}}\right)+\epsilon_{2} $$
Since two APs are in the same localization area, let ε1=ε2. Formula 16 is subtracted from formula 17, and the result is as follows:
$$ \Delta dB=10\beta\lg\left(\frac{d_{2}}{d_{1}}\right) $$
Convert formula 18 to formula 19:
$$ d_{2}=10^{\frac{\Delta dB}{10\beta}}d_{1} $$
As shown in formula 19, ΔdB describes the distance relationship between the two APs. As shown in formula 20, FingerPrinti represents the signal intensities between the ith AP and the m signal sources. MinFingerPrinti represents the weakest signal intensity in FingerPrinti.
$$ FingerPrint_{i}=\left\{ RSSI_{0},RSSI_{1},\cdots,RSSI_{i},\cdots,RSSI_{m}\right\} $$
$$ MinFingerPrint_{i}=\min(FingerPrint_{i}) $$
The difference between FingerPrinti and MinFingerPrinti is:
$$ FingerPrint'_{i}=\left\{RSSI'_{0},RSSI'_{1},\cdots,0,\cdots,RSSI'_{m}\right\} $$
According to formulas 19 and 22, formula 23 is as follows:
$$ FingerPrint^{\prime\prime}_{i}=\left\{ 10^{\frac{RSSI'_{0}}{10\beta}}d_{1},10^{\frac{RSSI'_{1}}{10\beta}}d_{1},\cdots,d_{1},\cdots,10^{\frac{RSSI'_{m}}{10\beta}}d_{1} \right\} $$
The form of formula 23 after normalization is as follows:
$$ FingerPrint^{\prime\prime\prime}_{i}=\left\{ 10^{\frac{RSSI'_{0}}{10\beta}},10^{\frac{RSSI'_{1}}{10\beta}},\cdots,1,\cdots,10^{\frac{RSSI'_{m}}{10\beta}} \right\} $$
Formula 24 represents the distance fingerprint of ith AP, and these distance fingerprints constitute the "distance fingerprint map" of the location area.
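Following formulas 20 to 24 as written, a raw RSSI fingerprint can be turned into a normalized distance fingerprint as in this sketch (our own helper; the path loss factor beta is an assumed value).

```python
import numpy as np

def distance_fingerprint(rssi_vector, beta=3.0):
    # Normalized distance fingerprint of formula 24: the weakest AP is mapped to 1
    # and every other entry becomes 10 ** ((RSSI_j - min RSSI) / (10 * beta)).
    rssi = np.asarray(rssi_vector, dtype=float)
    delta = rssi - rssi.min()                 # formula 22
    return 10 ** (delta / (10.0 * beta))      # formula 24
```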
The geometric meaning of the fingerprint is the distance between the reference point and the AP. The fingerprint localization model is divided into two stages: offline and online. During the offline stage, the localization area is divided into different clusters by DSACD, using the technique proposed in this article, and a binary classification of APs by the K-means algorithm is applied in each subarea to select the available APs. During the online stage, the subarea where the target point lies is selected using the NN algorithm, and then the coordinates of the target point are calculated.
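For the online stage, the nearest-neighbour matching step can be sketched as follows (our own illustration; `centroids` is assumed to hold one representative fingerprint per subarea built during the offline stage).

```python
import numpy as np

def nearest_subarea(query_fp, centroids):
    # Return the index of the subarea whose centroid fingerprint is closest
    # to the query fingerprint (Euclidean distance).
    distances = np.linalg.norm(np.asarray(centroids) - np.asarray(query_fp), axis=1)
    return int(np.argmin(distances))
```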
The DSACD is used in the offline stage. The 64 test points are selected as shown in Fig. 13, and the fingerprint database is divided into 64 subareas with the 64 reference points as centroids. The operation process is as follows: the signals collected in the whole region are transformed into fingerprints, and then the fingerprints are used as inputs of DSACD to realize the regional classification. The effective fingerprint components are extracted from all the fingerprints in each subregion, the fingerprints are transformed into distance fingerprints according to the fingerprint transformation model, and finally, the fingerprint database of the subregion is formed.
Figure 14 indicates the average errors of the distance between every point and each AP. The average distance errors between the 64 points and the 4 APs follow a normal distribution, which accords with the laws of nature; meanwhile, a cycle occurs every eight points, which accords with the collection layout, since Fig. 13 shows that every column has 8 points. In addition, the 64 average errors show that nodes with a distance error less than 0.5 from AP1 account for 26.6% of the total number of nodes, nodes with a distance error between 1.5 and 2 from AP1 account for 15.63%, nodes with a distance error less than 0.5 from AP2 account for 21.88%, nodes with a distance error between 1.5 and 2 from AP2 account for 10.94%, nodes with a distance error less than 0.5 from AP3 account for 21.88%, nodes with a distance error between 1.5 and 2 from AP3 account for 15.63%, nodes with a distance error less than 0.5 from AP4 account for 10%, and nodes with a distance error between 1.5 and 2 from AP4 account for 21.88%. Among the 4 APs, the largest proportions of nodes with high distance accuracy (21.88%) come from AP2 and AP3, while the largest proportion of nodes with large distance error (21.88%) comes from AP4. During the measurement process, there was voltage signal interference near AP4, which is consistent with the calculated results.
The average errors of location test points
In the collection environment, the factors that strongly impact the measured data are the temperature, angle, humidity, and crowd density. Sixty-four communities are gathered by DSACD, and then the log-distance path loss model is used in every community to obtain the distance between every point and each AP. The achieved average errors can satisfy the requirements of localization, which has a certain reference significance for future research on real-time intelligent navigation and positioning.
This paper proposed a novel deep sparse autoencoder-based community detection method (DSACD) and compared it with the K-means, hop, CoDDA, and LPA algorithms. Experiments show that, for complex network graphs, dimensionality reduction by the similarity matrix and the deep sparse autoencoder can significantly improve the clustering results.
Several issues persist and require further research. The similarity matrix calculation grows with the matrix size, which leads to large memory consumption and high requirements for experimental equipment. Too many temporary variables in the back-propagation algorithm also consume memory. A decomposition strategy for large matrices in the similarity calculation is expected in future studies.
The experiments used four real data sets: Strike [36], Football [37], LiveJournal [38], and Orkut [39], as described above.
S. Fortunato, Community detection in graphs. Phys. Rep.486(3-5), 75–174 (2010).
M. Belkin, P. Niyogi, Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput.15(6), 1373–1396 (2003).
U. N. Raghavan, R. Albert, S. Kumara, Near linear time algorithm to detect community structures in large-scale networks. Phys. Rev. E. 76(3), 036106 (2007).
S. Gregory, Finding overlapping communities in networks by label propagation. New J. Phys.12(10), 103018 (2010).
K. Guo, W. Guo, Y. Chen, Q. Qiu, Q. Zhang, Community discovery by propagating local and global information based on the mapreduce model. Inf. Sci.323:, 73–93 (2015).
Q. Zhang, Q. Qiu, W. Guo, K. Guo, N. Xiong, A social community detection algorithm based on parallel grey label propagation. Comput. Netw.107:, 133–143 (2016).
S. E. Schaeffer, Graph clustering. Comput. Sci. Rev.1(1), 27–64 (2007).
Y. Bengio, A. Courville, P. Vincent, Representation learning: a review and new perspectives. IEEE Trans. Pattern. Anal. Mach. Intell.35(8), 1798–1828 (2013).
J. Schmidhuber, Deep learning in neural networks: an overview. Neural Netw.61:, 85–117 (2015).
Y. Lecun, Y. Bengio, G. Hinton, Deep learning. Nature. 521(7553), 436 (2015).
C. -H. Chen, F. Song, F. -J. Hwang, L. Wu, A probability density function generator based on neural networks. Phys. A Stat. Mech. Appl.541:, 123344 (2019).
C. -H. CHEN, IEICE Trans. Fundam. Electron. Commun. Comput. Sci.103(1), 265–267 (2020).
S. Jing-Wen, W. Chao-Kun, X. Xin, Y. Xiang, Community detection algorithm based on deep sparse autoencoder. J. Softw.28(3), 648–662 (2017).
Z. Wang, Q. Zhang, A. Zhou, M. Gong, L. Jiao, Adaptive replacement strategies for moea/d. IEEE Trans. Cybern.46(2), 474–486 (2015).
Z. Wang, Q. Zhang, H. Li, H. Ishibuchi, L. Jiao, On the use of two reference points in decomposition based multiobjective evolutionary algorithms. Swarm Evol. Comput.34:, 89–102 (2017).
Z. Wang, Y. -S. Ong, H. Ishibuchi, On scalable multiobjective test problems with hardly dominated boundaries. IEEE Trans. Evol. Comput.23(2), 217–231 (2018).
Z. Wang, Y. -S. Ong, J. Sun, A. Gupta, Q. Zhang, A generator for multiobjective test problems with difficult-to-approximate pareto front boundaries. IEEE Trans. Evol. Comput.23(4), 556–571 (2018).
A. Cauchy, Méthode générale pour la résolution des systemes d'équations simultanées. Comp. Rend. Sci. Paris. 25(1847), 536–538 (1847).
R. Fletcher, C. M. Reeves, Function minimization by conjugate gradients. Comput. J.7(2), 149–154 (1964).
J. Vlček, L. Lukšan, Generalizations of the limited-memory BFGS method based on the quasi-product form of update. J. Comput. Appl. Math.241:, 116–129 (2013).
B. Molina, E. Olivares, C. E. Palau, M. Esteve, A multimodal fingerprint-based indoor positioning system for airports. IEEE Access. 6:, 10092–10106 (2018).
The authors extend their kind gratitude to Fangying Song for the new ideas contributed during our discussions.
This research was partially supported by the National Key Research and Development Program of China (2018YFB1201500), the National Natural Science Foundation of China (under Grant no. 61773313), and the Technological Planning Project of Beilin District of Xi'an (GX1819).
Xi'an University of Technology, JinHua Road No.5, Xi'an, China
Rong Fei, Jingyuan Sha, Kan Wang & Shasha Li
College of Information and Communication, National University of Defense Technology, No. 8 East Zhangba Road, Xi'an, China
Qingzheng Xu
Beijing Huadian Youkong Technology Co., Ltd., TianXiu Road No.10, Beijing, China
Bo Hu
Rong Fei and Jingyuan Sha proposed the main idea about the deep sparse autoencoder on community detection, completed the simulation, and analyzed the results. Qingzheng Xu carried out the indoor location studies. Bo Hu contributed to the comparison of community detection algorithms. Kan Wang and Shasha Li helped to modify the manuscript. The authors read and approved the final manuscript.
RONG FEI is an Associate Professor at Xi'an University of Technology. Her main research interests are community detection, stochastic opposition algorithms, and location-based services.
Correspondence to Rong Fei.
The authors declare that there is no conflict of interest regarding the publication of this paper.
Fei, R., Sha, J., Xu, Q. et al. A new deep sparse autoencoder for community detection in complex networks. J Wireless Com Network 2020, 91 (2020). https://doi.org/10.1186/s13638-020-01706-4
Community detection
We're On The Other Side, But What's Next For Investors?
Seth Golden, May 9, 2020 December 19, 2020 , Daily Articles, 0
This past weekend was a non-Research Report weekend for Finom Group. Nonetheless we sent the following notes to our Premium, Contributor and Master Mind Options members so as to set the stage for the trading week ahead. We are now making these notes available to our Basic members and the general public. Upgrade your subscription today to receive our weekly Research Reports and State of the Market videos for just $5.99 monthly (Cancel anytime). Here is a link to help!
Last weekend found most publications offering random takes on the latest sales of airline stocks and the lack of dip buying from Warren Buffett. The market largely brushed off the fear-driven narratives in favor of recognizing that reopening the economy in phases held a greater calculation of trough economic conditions and the resumption of economic growth for the back half of the year. Make no mistake about it folks, this will prove an arduous time for the global economy, but there is likely nowhere to go but up from here. Having said that, we don't dismiss the potential pitfalls along the path to recovery, which will likely be hyperbolized in the daily media narratives to come.
Analysts are watching for any signs that the bottom of the economic downturn is near. Goldman Sachs Group Inc. analysts have been tracking varied measures such as gas demand, Starbucks mobile application downloads and traffic in restaurants as measured on the reservation website OpenTable for signs of a recovery. Gas demand, though it has fallen tremendously, started improving over the past week, while other data tracking flow through workplaces and transit showed small gains as some states have reopened.
"There are some small, early signs that life is resuming some form of normalcy," the analysts said in a research note Thursday. "We expect these small signs of recovery to continue as the country gradually reopens and consumers resume their daily activities."
That lead-in was from Saturday's edition of the business section within the Wall Street Journal. To build upon that, there was a good deal of bullish breadth in the market this past trading week, fitting with some of the signs of life in the economy. While we analyze the closing values, rate of change, and market internals, they are not able to predict the future for us. They can, however, inform us as to the probabilities going forward.
It's very clear to see, from the chart above, that the breadth of the market has improved from the previous week and from the lows in March. Based on the percentage of stocks trading above their 50-DMA, we are forced to recognize that there are more stocks participating in this relief rally. The relief rally began, in earnest, 6 weeks ago and has had small pullbacks along the way. Each pullback thus far has proven a buying opportunity. The support for the market has not been so much in the way of broad market buying activity, but a combination of the favorable equity risk premium, fiscal and monetary policy enactment, and outsized short positioning in a low liquidity regime.
Recall the following notes within our April 19th Research Report below:
"Moreover, while Strategas Research points out that 99% of stocks have moved above their 20-DMA, the percentage of stocks making new 20-day highs has not even reached 50 percent. Getting this above 55% would provide further evidence of broad-market healing that is necessary to support a sustained price rally, as evidenced in the chart/table below from Ned Davis Research."
Has this breadth strength been achieved? As we reiterate and re-emphasize, market breadth internals vary. Well, let's take a look while recognizing that Ned Davis Research said we need to be aiming for the percentage of stocks trading at 20-day highs equaling at least 55 percent. Why 55%? Because while market breadth has improved, breadth thrusts are still elusive!
That big spike last week took the percentage of S&P 500 stocks trading at 20-day highs all the way up to 54, but not 55. Nonetheless, we can see there was modest improvement in this breadth indicator this past trading week. Truth be told, one can look at most breadth indicators for the week that was; by and large the majority showed improvement and with a risk-on slant. What do we mean by that?
We've been wanting to see certain sectors of the market begin to show greater performance than in past months to give us further reason to believe that the relief rally was something more than just a significant snapback after the fastest bear market in history. For the last couple of weeks, the market has delivered that which we desired to see in not just improving breadth but also sector performance.
What we deem to be offensive positioning would be found in sectors like Consumer Discretionary, Energy and Technology. While we'd also like to see Financials perform better and they still give us reason to remain cautiously optimistic, eventually the Financials will have to add support for the market to sustain a bullish trend. But here's the other thing that may have gone under the radar during the whole relief rally to-date. The fact is that small caps have been the leader. The main reason this goes under the radar is due to the volatile percentage moves within the Russell 2000 (RUT) which can easily give up some 3-5% on a day where the S&P 500 is down maybe 1-2 percent. But here are the facts about what's been the leading market since the March 23rd low:
Since the March lows, RUT has rebounded the most among the top indexes.
1. IWM +37.61%
2. NDX +37.55%
3. SPY +33.67%
4. DIA +33.59%
If you've been following markets over the last several weeks, all you've likely heard is how the large mega-cap tech stocks have been keeping the market afloat. The five largest stocks in the U.S. (Microsoft, Apple, Amazon, Alphabet, and Facebook) have done extraordinarily well this year and since the March lows, but to say they've been doing all the lifting would be inaccurate. Take for example the fact that of the 2,052 U.S. stocks with market caps above $500 million, 101 (4.9%) have hit 52-week highs so far in May and none of them are named MAGA or Facebook stocks. The table below from Bespoke Investment Group lists the 19 U.S. stocks with market caps above $20 billion that have hit 52-week highs so far this month. Looking at the list, only one of the names listed (PayPal) has a market cap above $100 billion and just four others have market caps above $50 billion.
While mega-cap stocks may not be the ones hitting 52-week highs presently, one part of the argument behind which factors are driving the market that is partially accurate is that tech stocks are leading. Breaking out the table above by sectors, 8 of the 19 names listed are from the Technology sector. Behind Technology, the next most heavily represented sector is Health Care with 6 stocks in the table. However, when we expand the universe to all stocks hitting 52-week highs with market caps above $500 million, Technology doesn't even top the list. With 49 stocks from the Health Care sector hitting 52-week highs this month, it tops the list followed by Technology with 26. Behind these two, no other sector accounts for even 10 names.
In speaking of new 52-week highs, this particular breadth reading comes to mind. We also want to see the number of 52-week highs minus the number of 52-week lows improve. It's not the most consequential indicator, in fact it ranks toward the bottom of our list of breadth indicators at Finom Group, but for the sake of argument and building upon the aforementioned favorable breadth readings…
The NH-NL chart of the S&P 500 shows that this breadth indicator has done very little since early April. Nonetheless, on a week-over-week basis, it rose from -2.0 to 7 at the end of this past trading week. Improvement! And for every good chart or breadth reading, as we like to say, there is one that proves concerning or "not so good".
The Nasdaq A/D chart above suggests… well, it pretty much validates that the index is quite overbought. The argument that the overbought condition is relative to the recent oversold condition might prove valid, but given current levels, we would suggest treading cautiously with Nasdaq trading in the interim. While this chart does raise a yellow flag of sorts, the NYSE and Nasdaq TRIN readings recognize that there has been rather strong buying demand this past week.
Arms Index or TRIN = (advancing issues / declining issues) / (composite volume of advancing issues / composite volume of declining issues). Generally, a TRIN of less than 1.00 indicates buying demand; greater than 1.00 indicates selling pressure.
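For readers who want to reproduce the arithmetic, here is a minimal sketch of the TRIN calculation (illustrative only; the breadth and volume figures below are made up and are not actual exchange data):

```python
# Illustrative sketch of the Arms Index (TRIN) calculation.
# The breadth/volume inputs below are hypothetical, not actual NYSE data.

def trin(advancing_issues, declining_issues, advancing_volume, declining_volume):
    """TRIN = (advancers / decliners) / (advancing volume / declining volume)."""
    issue_ratio = advancing_issues / declining_issues
    volume_ratio = advancing_volume / declining_volume
    return issue_ratio / volume_ratio

# Hypothetical end-of-day figures for a single session
value = trin(advancing_issues=2100, declining_issues=850,
             advancing_volume=3.2e9, declining_volume=0.9e9)
print(f"TRIN = {value:.2f}")  # below 1.00 suggests buying demand; above 1.00, selling pressure
```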
So when we look at the closing value of TRIN from Friday, here is what we find: (buying demand)
Now, here are some relevant statistics to consider going forward. And when you see them, they might make the hair on the back of your neck stand on end at first. After the initial reaction and computing, however, we think the statistics will bring a certain degree of confidence on how to go about capital allocation when the market consolidates for price.
Only 5 times in the last 70 years have we seen the S&P 500's 7-week ROC produce the kinds of gains we've currently seen and each time the low in the stock market had already been achieved.
See below statistic:
The week that just ended was May 8th 2020, eleven years to the day of the last time the 7-week gain was 20% or greater!
The S&P 500 is up 27.11% the last 7 weeks.
That's the single best 7-week stretch the index has seen since 1950.
Ironically, the 7 weeks prior saw the index fall -28.54%, the 3rd worst 7-week decline since 1950.
Despite the aforementioned historical data, there still remains a good deal of consideration, concern and rhetoric amplifying the re-test thesis. This comes, especially, after Double Line Capital's Jeffrey Gundlach outlined his short on the S&P 500 at 2,862. Keep in mind that Gundlach is a bond fund manager who is typically, if not always, found without favor for equity markets.
"I'm certainly in the camp that we are not out of the woods. I think a retest of the low is very plausible," Gundlach said on CNBC's "Halftime Report." "I think we'd take out the low."
"Actually I did just put a short on the S&P at 2,863. At this level, I think the upside and downside is very poor. I don't think it could make it to 3,000, but it could. I think downside easily to the lows or beyond … I'm not nearly where I was in February when I was very, very short."
Gundlach likely cares very little for the historical data noted above and likely also deems the current situation unique in many aspects. We certainly can't deny the unique variables at play in the market and economy presently residing over the recession and bear market. Having said that, human behavior is pretty constant in the market, regardless of what is taking place in the economy. One could also make the argument that to the degree monetary and fiscal policy has been implemented, it lends that much more favor to the market over time. And beyond the rhetoric or even individual investor biases held close to the brokerage account positions at play, more technical analysis suggests a retest of the initial low is historically improbable.
Back on April 15th we offered the following in our article titled Are Bears Hibernating Ahead of Next Leg Lower or…?
Williams %R, or just %R, is a technical analysis oscillator showing the current closing price in relation to the high and low of the past N days (for a given N). Its purpose is to tell whether a stock or commodity market is trading near the high or the low, or somewhere in between, of its recent trading range. The formula is: %R = (highest high of the past N days − today's close) / (highest high of the past N days − lowest low of the past N days) × −100.
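Below is a minimal sketch of the %R calculation (illustrative only; the price series is made up and is not actual S&P 500 data):

```python
# Illustrative sketch of Williams %R over the past n sessions.
# The highs/lows/closes below are hypothetical prices, not real index data.

def williams_r(highs, lows, closes, n=14):
    """%R = (highest high - latest close) / (highest high - lowest low) * -100."""
    highest_high = max(highs[-n:])
    lowest_low = min(lows[-n:])
    return (highest_high - closes[-1]) / (highest_high - lowest_low) * -100

highs  = [2890, 2905, 2920, 2915, 2930, 2925, 2940, 2935, 2950, 2945, 2930, 2920, 2935, 2940]
lows   = [2840, 2855, 2870, 2865, 2880, 2875, 2890, 2885, 2900, 2895, 2880, 2870, 2885, 2890]
closes = [2860, 2880, 2900, 2890, 2910, 2900, 2920, 2910, 2930, 2920, 2900, 2890, 2910, 2929]

print(f"%R = {williams_r(highs, lows, closes):.1f}")  # near 0 = near recent highs; near -100 = near recent lows
```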
In Williams' most recent notes, the technician denotes that the major averages have exacted a 50% retracement from the March 23rd lows. As Williams pointed out, "The history of the last nine bear markets going back to 1972 shows that once we have had a 50% rally, back where we are now in the S&P 500, lows are not re-tested. And a rally less than 50% may give way to a re-test. Anything greater than 50%, we just don't retest."
Mad Money host Jim Cramer recognized Larry Williams' latest notes and historic data regarding 50% bear market retracements in his more recent episode.
Based on everything we know about past bear markets and have witnessed in the current bear market, the facts suggest that even as the macro picture remains uncertain, the technical market picture offers strong probabilities. The low may have proven to be the 2,191 level set in March, but a full retest doesn't seem to be in the cards based on the historic data. This is not to suggest that consolidation of the relief rally won't take place going through earnings season and an economic shutdown that is likely to last through all of April.
Here's what we also know regarding the Williams %R indicator for bear markets:
If the Williams %R makes a major low that coincides with the ATR (Average True Range) peaking and a 55% retracement of a bear market decline, the S&P 500 has never, ever fully re-tested the initial bear market low. As we can see from the chart of the two indicators above, that is exactly what happened in late March of 2020. What this outlines is not how much consolidation the market will perform, but what investors are best found doing when it does, BUY THE DIP!
Beyond the charts, breadth and massive short positioning that seems ever growing in the last month and with the S&P 500 largely trading sideways…
… there is the whole common sense and logic aspect of the economic shutdown caused by the pandemic. We see a good many analysts and market participants hyperbolizing over how the economy will reopen and if it will reopen this year. We also hear a lot of commentary questioning whether schools will reopen. Finom Group's chief market strategist Seth Golden offered his views on the subject recently. He bottom-lined it by saying there is no probability suggesting that the government would deprive children of an education for 2 consecutive school years. And he's right, that has never happened before as a civilized nation of states. Logically, we can put all the pieces of the argument together to rationalize that an economy can't get back to work with unsupervised kids at home while parents are at work. The two factors are connected without fail. Seth does concede that many school districts may choose to delay the start of the school year until after Labor Day, which would offer that much more time for drug treatments for COVID-19 to materialize, while herd immunity also advances daily. But that's really about the best concession Seth is willing to offer at this time. Society works from the ground floor up, and the ground floor is education!
Moreover, we think the market has already, also figured this out. It's too illogical to believe the government would deny education for two consecutive years. To the extent POTUS openly declares the harm to the economy by remaining shuttered, the same proclamation would likely ring true if school closures for a second year was proving an increasing probability.
When it is all said and done, tough decisions have to be made. The mortality rate is not high enough to keep the economy closed any more than it is high enough to withhold a child's education, which is understood as part of our individual civil liberties. And that is even before we think about "at risk groups".
In the latest narrative from Peter Tchir, he discusses the risk groups and the data concerning COVID-19. Peter is among the few to openly espouse his bullishness on the markets and belief that the reopening process will prove safer and stronger than most think.
Nursing Homes or Long Term Care Facilities
In Virginia 480 out of 827 deaths have occurred in LTC facilities. 58%.
In New Jersey 4,691 out of 8,952 deaths have occurred in LTC facilities. 52%.
In Massachusetts 2,837 out of 4,702 deaths have occurred in LTC facilities. 60%.
In MA, there were 1,297 deaths reported in the past week. A tragic loss of life, with 66% of those deaths occurring in nursing homes.
I could not find this data for every state, but across the nation, LTC facilities have borne the brunt of the virus. The loss of life is depressing.
I do not know what can be done to fix the tragic loss of life in these facilities, but there is clearly a problem that needs to be addressed. While I cannot help with fixing the issues that have caused this level of loss at these facilities, the fact that it is so concentrated, helps in terms of getting the reopening working.
Nursing homes and LTC facilities need to be addressed to save lives, but in terms of reopening, it is clear that they need to remain as distanced as possible until a solution to address the needs of these facilities is found.
I do not mean to sound callous, my heart breaks at this loss of life, but it is important that 50% of deaths in many states are occurring in relatively closed ecosystems that can be distanced during the reopening.
I don't even differentiate between young and young and healthy, partly because I cannot find good data at that level of granularity and partly because youth alone seems a powerful factor.
The States of Connecticut and MA have reported 0 deaths under the age of 20! The combined population of these two states is 10.5 million people and not a single loss of life to anyone under the age of 20.
Texas, the state with the second-largest economy last year and 29 million people, has had 2 confirmed deaths under the age of 20 (Texas only reports details after a final investigation, so that could be revised higher).
According to the CDC, the flu has caused 174 pediatric deaths this year. Again, according to the CDC 61,000 people died in America due to the flu in the 2017-2018 flu season. The age make-up, as tracked by this chart on the flu resembles the age breakout of COVID-19 that we are experiencing.
Under the age of 40, MA and CT have reported 19 and 23 deaths respectively. Not last week, but since the start of the virus. Texas, which only reports details after a full investigation, has 21 deaths under the age of 40 (out of 453 deaths that have been fully investigated).
This is great news! The youth of this country are getting COVID-19 but not dying at alarming rates. This group is not immune to the flu either. Also, we haven't even addressed comorbidity yet, which means these numbers, which seem very low to me, are overstating the risk.
In MA, where comorbidity is only reported on cases that have completed their investigation, the comorbidity rate is 98.3%!
New York State provides granular data on comorbidity. 89.5% of deaths involved comorbidity. Admittedly, hypertension is the largest factor and that is more common than I'd like, but the data is telling a consistent story.
We should all strive to be healthier. That is one takeaway, but this dual risk factor is also important as we think about risk groups and the reopening.
There is a clear progression in every state on deaths as you get older. Some of that may be linked to overall being less healthy. I'm not sure. But the state data is clear. The death rate even for the 40 to 50 group is generally low (42 in MA, 15 in CT, 20 in Texas). New York State is much worse at 748, but 92.3% of those deaths involved comorbidity (693 out of 748).
I for one am in this "everyone" else category. At the lower end of the age bracket, which is good, but need to work much harder on my cholesterol and cardiovascular health.
I will participate to some extent in the reopening, but will be cautious (or at least what I think is cautious). I carry a mask constantly and have 2 unopened ones, just in case. I don't think I've been within 6 feet of someone (unless separated by plexiglass) more than a few times in 2 months.
The Choice Should Be Yours
Everyone is going to have to decide how to participate in the re-opening, but I think the data supports the anecdotal evidence – that younger, healthy people are likely to try and return to normal as soon as possible. The rest of us will follow suit as we see fit.
Two weeks ago, I barely saw the data presented in this level of detail. It is still not presented this way in the mainstream media, but as people dig into the data, expect the economy to bounce back faster than most expect. It won't be a full recovery by any means, but for now, I expect the reopening to be more robust and safer than most people seem to expect.
Data Dependent
I will be closely watching for signs that I am wrong. One sign that I will not be watching is confirmed cases. Yes, confirmed cases are going to go higher as we reopen. Part of that will be attributable to the reopening and part will be that we are just ramping up testing in many parts of the country, so officially categorizing people who have been sick for some time as 'new' cases. I know the media will latch on to the confirmed cases, possibly to support the no reopen view, but hopefully, as the data demonstrates, the groups most likely to participate in the reopening can get the virus and not have significant risk.
Hospital bed usage will be a key metric to watch. Many of the state sites I visited have very useful information on the availability of medical care in their state. I, for one, was impressed at the level of organization many of the states have demonstrated, and that is only based on public information, and I assume they have more details behind the scenes.
Changes in mortality rates, information about antibodies, a vaccine, will all influence my outlook. I'm optimistic that more progress on the medical front will occur and that the data will continue to support a rational, voluntary approach to reopening.
The reopening is not without risk, but there seem to be reasonably safe approaches to reopening.
I did not argue the other side of the issue, that with well over 20,000,000 people unemployed, there are great risks from not reopening. I think there is evidence to support the reopening as it is, but if you factor in the potential problems of not getting the economy sorted, it seems even more worth the risk.
There is nothing to stop anyone from maintaining shelter in place, but don't be surprised if the reopening is more successful than many expect. Be Safe! There is nothing that says we can't be safe, respect our risk profile, and reopen.
I remain bullish on markets, in a large part, because I expect the reopening to be successful based on the data I have. The issues surrounding the virus and the reopening transcend the market, but if I'm right, America and the economy will win.
Investor Takeaways
For the week to come we expect some choppy price action, but we don't deny that breadth improvement suggests greater probabilities for upward price movement through the early part of the trading week. The S&P 500 is likely to achieve the 61.8% Fibonacci level at 2,934/5. (SPX chart below with Fibs)
A close above 2,934 sets the stage for the benchmark index to challenge the relief rally high of 2,959 from two weeks ago. Nothing is set in stone and headlines could derail breadth momentum, but until then, the bulls have the wind in their sails.
The VIX closed at its lowest level since late February this past week, declining to below 28 by the end of the trading week. Having said that, our infamous 10-day ROC has found levels typically associated with a near-term bounce in the VIX. As such, Finom Group recognizes the potential for the VIX to decouple from its typical inverse relationship with the S&P 500.
The bottom chart identifies the 10-day ROC whereby it doesn't like to stay below the -20 level for very long before popping back up above the 0-bar, even if ever so briefly. Keep in mind our verbiage outlining the potential for the VIX to rise absent the S&P 500 declining. The inverse relationship between the 2 indexes is only viable 85% of the time, not 100% of the time.
Analysts, strategists and economists will continue to focus on the relative strength of the relief rally and general valuation in an environment where earnings are forecast to decline between 20%-30% in 2020. The "disconnect" mantra will continue to litter the investing landscape. But again make no mistake about it folks, in real-time, the fast moving stock market and usually slower moving economy seem disconnected, but in reality they aren't. One leads, the other lags, which throws off real time perceptions and reinforces the usual underperformance from fund managers and strategists alike. Stay on plan, review your process daily, stick to what works, remain flexible and with a cautiously optimistic outlook that recognizes the unwavering American spirit of ingenuity.
Energy Conservation and Work
When only conservative forces are acting on a body, then its energy is conserved. Its energy is made of kinetic energy and potential energy, and the sum of these two types of energies is known as the mechanical energy of the body. Gravity, electricity, and magnetism are all examples of important forces that are also conservative. The conservation of energy is useful for understanding the motion of a particle under their influence.
But there are other forces, such as friction and drag, which are not conservative, as they cannot be written in terms of a potential energy. What do we do in those cases? Is energy still a useful quantity?
Over the last couple of centuries the following empirical observation has been made: when a body is slowed down by friction or drag (or other dissipative forces), heat is transferred to the stopping medium. If the body itself is made of many smaller parts then heat may also be transferred to itself in the process. If this heat quantity is included in the balance of energy, then the energy of the system as a whole, body + medium, is conserved.
Thus, conservation of energy is still a useful notion, but we must include heat in the balance of energy. Importantly, while the mechanical energy is associated with the body under consideration, heat is the energy quantity transferred to the medium the body is interacting with or to the body's own internal components.
But, whether the forces are conservative or dissipative, the kinetic energy of a body is something that we can measure directly. Conservation of energy tells us that the change in the kinetic energy is either associated with a change in potential energy or the transfer of heat, either one is important.
So, let us define a quantity which we call work that is the change in a body's kinetic energy,
$$W = \frac{1}{2} m v^2_f - \frac{1}{2} m v^2_i$$
Where \( m \) is the mass of the body, and \( v_i \) and \( v_f \) are the initial and final velocities, respectively. When the kinetic energy of a body increases, the work is positive and we say that work was done on the body. Alternatively, when the body's kinetic energy decreases, the work is negative and we say that work was done by the body.
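As a quick numerical illustration (the numbers here are arbitrary and not part of the original text), consider a 2 kg body that speeds up from 3 m/s to 5 m/s:

$$W = \frac{1}{2} (2\ \mathrm{kg}) (5\ \mathrm{m/s})^2 - \frac{1}{2} (2\ \mathrm{kg}) (3\ \mathrm{m/s})^2 = 25\ \mathrm{J} - 9\ \mathrm{J} = 16\ \mathrm{J}$$

The result is positive, so 16 J of work was done on the body.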
The notion of work can be directly related to the forces acting on a body. Work can also be defined as the time integral over the force acting on a body in the direction of the body's velocity,
$$W = \int^{t_f}_{t_i} \vec{F} \cdot \vec{v} dt$$
Where the integral is over the total force acting on the body at every point along its motion from the initial moment \( t_i \) to the final moment \( t_f \). The dot product, \( \vec{F} \cdot \vec{v} \) is defined as:
$$\vec{F} \cdot \vec{v} = |F||v| \cos \theta_{Fv}$$
where \( |F| \) and \( |v| \) are the magnitudes of the force and velocity, respectively, and \( \theta_{Fv} \) is the angle between these two vectors. We can see that these two definitions of work above are equivalent using Newton's second law:
$$W = \int^{t_f}_{t_i} \vec{F} \cdot \vec{v} \, dt = \int^{t_f}_{t_i} (m \vec{a}) \cdot \vec{v} \, dt = m \int^{t_f}_{t_i} \frac{d \vec{v}}{dt} \cdot \vec{v} \, dt$$
$$W = m \int^{t_f}_{t_i} \frac{1}{2} \frac{d}{dt} |\vec{v}|^2 \, dt$$

$$W = \frac{1}{2} m v^2_f - \frac{1}{2} m v^2_i$$

In the second line above, the chain rule was used, while in the third line, the fundamental theorem of calculus was used. This result is very important as it tells us how to relate the forces acting on a body to the change in its kinetic energy, or in other words, the work.
There is another way to understand work, which is closely related to the integral definition but makes no reference to the velocity of the body:
$$W = \int^{\vec{x}_f}_{\vec{x}_i} \vec{F} \cdot d \vec{x}$$
Where \( \vec{x}_i \) and \( \vec{x}_f \) are the initial and final positions of the body, respectively. You can prove that this is an equivalent definition to the previous definitions by using the relation \( \vec{v} = d \vec{x} / dt \), and the chain rule of calculus.
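To make the equivalence of these definitions concrete, here is a minimal numerical sketch (not part of the original text; the mass, force, and time interval are arbitrary choices). It integrates \( \vec{F} \cdot \vec{v} \) over time for one-dimensional motion under a constant force and compares the result with the change in kinetic energy:

```python
# Minimal sketch: check W = integral of F·v dt against W = change in kinetic energy
# for 1D motion under a constant force. All numbers are arbitrary illustrative choices.
import numpy as np

m = 2.0    # mass (kg)
F = 4.0    # constant force (N)
v_i = 3.0  # initial velocity (m/s)

t = np.linspace(0.0, 5.0, 100001)   # time grid from t_i = 0 to t_f = 5 s
v = v_i + (F / m) * t                # velocity under constant acceleration a = F/m

work_from_force = np.trapz(F * v, t)                      # W = integral of F·v dt
work_from_ke = 0.5 * m * v[-1]**2 - 0.5 * m * v_i**2      # W = (1/2) m v_f^2 - (1/2) m v_i^2

print(work_from_force, work_from_ke)  # both print 160.0 (to numerical precision)
```

Both expressions give 160 J here, as the work-energy relation requires.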
Monday–Friday, March 13–17, 2017; New Orleans, Louisiana
Session X22: Room Temperature Multiferroic BiFeO3
Sponsoring Units: DCMP GMAG
Chair: Randy Fishman, Oak Ridge National Laboratory
Room: New Orleans Theater A
X22.00001: Neutron Scattering Measurements on bulk BiFeO$_{3}$
Invited Speaker: Je-Geun Park
Of a long list of multiferroic materials, BiFeO$_{3}$ is arguably one of the most interesting as it displays rare room-temperature multiferroic behavior: T$_{N}$ = 650 K and T$_{C}$ = 1050 K. Hence BiFeO$_{3}$ has been extensively investigated for potential applications. It also has a very interesting incommensurate magnetic phase transition with an extremely long period of 650 Å. In this talk, I will present our latest results [1-4] mainly obtained from high-resolution neutron scattering experiments on this fascinating material. Using the vast amount of the data, I will sketch a coherent picture of the rare room-temperature multiferroic behavior and, most importantly, a full spin Hamiltonian of BiFeO$_{3}$. [1] Jaehong Jeong, et al., Phys. Rev. Lett. 108, 077202 (2012). [2] Sanghyun Lee et al., Phys. Rev. B Rapid Comm. 88, 060103R (2013). [3] Jaehong Jeong, et al., Phys. Rev. Lett. 113, 107202 (2014). [4] [Review] Je-Geun Park, et al., J. Phys.: Condens. Matter 26, 433202 (2014). [Preview Abstract]
X22.00002: Magnetic and electric control of multiferroic properties in monodomain crystals of BiFeO$_{3}$
Invited Speaker: Masashi Tokunaga
One of the important goals for multiferroics is to develop non-volatile magnetic memories that can be controlled by electric fields with low power consumption. Among the many multiferroic materials, BiFeO$_{3}$ has been the most extensively studied because of its substantial ferroelectric polarization and magnetic order up to above room temperature [1]. Recent high field experiments on monodomain crystals of BiFeO$_{3}$ revealed the existence of an additional electric polarization normal to the three-fold rotational axis [2]. This transverse component is coupled with the cycloidal magnetic domain and, hence, can be controlled by external magnetic fields. Application of electric fields normal to the trigonal axis modifies the volume fraction of these multiferroic domains, which involves a change in the resistance of the sample, namely a bipolar resistive memory effect [3]. In this talk, I will introduce the effects of magnetic and electric fields on the magnetoelectric and structural properties observed in monodomain crystals of BiFeO$_{3}$. REFERENCES: [1] G. Catalan and J. F. Scott, Adv. Mater. 21, 2463 (2009). [2] M. Tokunaga et al., Nature Commun. 6, 5878 (2015). [3] S. Kawachi et al., Appl. Phys. Lett. 108, 162903 (2016). [Preview Abstract]
X22.00003: Unidirectional THz radiation propagation in BiFeO3.
Invited Speaker: Toomas Room
The mutual coupling between magnetism and electricity present in many multiferroic materials permit the magnetic control of the electric polarization and the electric control of the magnetization. These static magnetoelectric (ME) effects are of enormous interest: The ability to write a magnetic state current-free by an electric voltage would provide a huge technological advantage. However, ME coupling changes the low energy electrodynamics of these materials in unprecedented way -- optical ME effects give rise to unidirectional light propagation as recently observed in low-temperature multiferroics. The transparent direction can be switched with dc magnetic or electric field, thus opening up new possibilities to manipulate the propagation of electromagnetic waves in multiferroic materials. We studied the unidirectional transmission of THz radiation in BiFeO3 crystals, the unique multiferroic compound offering a real potential for room temperature applications. The electrodynamics of BiFeO3 at 1THz and below is dominated by the spin wave modes of cycloidal spin order. We found that the optical magnetoelectric effect generated by spin waves in BiFeO3 is robust enough to cause considerable nonreciprocal directional dichroism in the GHz-THz range even at room temperature. The supporting theory attributes the observed unidirectional transmission to the spin-current-driven dynamic ME effect. Our work demonstrates that the nonreciprocal directional dichroism spectra of low energy excitations and their theoretical analysis provide microscopic model of ME couplings in multiferroic materials. Recent THz spectroscopy studies of multiferroic materials are an important step toward the realization of optical diodes, devices which transmit light in one but not in the opposite direction. [Preview Abstract]
X22.00004: Giant spin-induced polarization and optical-diode effect by electromagnons in BiFeO$_{\mathrm{3}}$
Invited Speaker: Jun Hee Lee
Type-I multiferroics, where spin and electric polarization order at distinct temperatures, were believed to have smaller couplings between them compared to type-II multiferroics such as TbMnO$_{3}$. However, we recently discovered unexpectedly huge couplings between spin and electric polarization in the representative type-I multiferroic BiFeO$_{3}$. This hidden coupling leads to record-high spin-induced ferroelectric polarizations (~3.0 µC/cm$^{2}$) [1], which are one or two orders of magnitude larger than in any other multiferroics. Also, the spin-polarization couplings in the dynamic region [2] generate strong electromagnons, resulting in sizable one-way optical transparency at the spin-wave excitations [3]. Overall, we show how our theoretical results revive studies in revealing hidden but huge spin-polarization couplings and their dynamic interactions with light in type-I multiferroics. [1] J. H. Lee and R. Fishman, Physical Review Letters 115, 207203 (2015). [2] J. H. Lee, I. Kezsmarki, and R. Fishman, New Journal of Physics 18, 043205 (2016). [3] R. Fishman, J. H. Lee et al., Physical Review B 92, 094422 (2015). *This work has been done in collaboration with R. Fishman (ORNL) and I. Kezsmarki (U of Budapest). [Preview Abstract]
X22.00005: Electric-field control of magnetism and magnons in the room temperature multiferroic BiFeO$_3$
Invited Speaker: Rogério de Sousa
The ability to control magnetism using electric fields is of great fundamental and practical interest. It may allow the development of ideal magnetic memories with electric write and magnetic read capabilities, as well as logic devices based on magnons that dissipate much less energy. The application of an external $E$ field to bulk magnetoelectric bismuth ferrite (BiFeO$_3$ or BFO) was shown to lead to a giant shift of magnon frequencies that is linear in $E$ and $10^{5}$ times larger than any other known $E$-field effect on magnon spectra [1]. I will present a theory of this effect based on the combination of multiferroicity with strong spin-orbit interaction, and show that it enables $E$-field control of BFO's magnetic state [2]. The application of moderate external $E$ and $B$ fields at appropriate orientations enables competing magnetoelectric interactions to interfere in such a way that the system transitions from a cycloid to a homogeneous state at much lower field values than if only one type of field was applied. These results clarify the conditions required to make BFO a useful material in device applications, and shed light on experiments where BFO is interfaced with other magnetic and ferroelectric materials. [1] P. Rovillain et al., Nat. Mater. 9, 975 (2010). [2] R. de Sousa, M. Allen, and M. Cazayous, Phys. Rev. Lett. 110, 267202 (2013). [Preview Abstract]
Vital Surveillances: Dietary Exposure to Fumonisins and Health Risk Assessment in the Sixth China Total Diet Study — China, 2015–2020
Shuo Zhang1;
Shuang Zhou1, , ;
Bing Lyu1;
Nannan Qiu1;
Jingguang Li1;
Yunfeng Zhao1;
Yongning Wu1
Introduction: Fumonisins are a group of widespread mycotoxins mainly found in staple foods. Their toxicological effects on humans pose a worldwide public health threat. During 2015–2020, the 6th China Total Diet Study (TDS) was conducted to study the dietary exposure to fumonisins in the Chinese adult population.
Methods: Fumonisins were analyzed by LC-MS/MS in 288 composite dietary samples collected from 24 provincial-level administrative divisions. After combining the national consumption data with the analytical results, estimated daily intakes (EDIs) were assessed and compared with health-based guidance values (HBGV).
Results: In the 6th China TDS, the highest fumonisin B (FBs) levels were found in staple foods/cereals among the 12 food categories. The EDI of FBs was 104.9 ng/kg of body weight (bw)/day at the upper bound, accounting for 5.25% of the provisional maximum tolerable daily intake set by the Joint Food and Agriculture Organization/World Health Organization Expert Committee on Food Additives. Among the 12 food categories, cereals and cereal products were the greatest contributor to FB exposure, at 95%.
Conclusion: Although the estimated exposure to FBs in the 6th China TDS was well below the HBGV for FBs in general, it was 2 times higher than the exposure in the 5th China TDS. Furthermore, the exposure to FB3 has increased remarkably and warrants further attention in China.
Funding: This work was supported by the National Key Research and Development Program of China (2017YFC1600304 and 2017YFC1600500), the National Natural Science Foundation of China (31801456 and 31871723), and the Chinese Academy of Medical Science Research Unit Program (No. 2019-12M-5-024).
NHC Key Laboratory of Food Safety Risk Assessment, Food Safety Research Unit (2019RU014) of Chinese Academy of Medical Science, China National Center for Food Safety Risk Assessment, Beijing, China
Corresponding author: Shuang Zhou, [email protected]
[1] International Agency for Research on Cancer (IARC). Agents classified by the IARC Monographs. 2002. https://monographs.iarc.who.int/list-of-classifications.
[2] Dall'Asta C, Mangia M, Berthiller F, Molinelli A, Sulyok M, Schuhmacher R, et al. Difficulties in fumonisin determination: the issue of hidden fumonisins. Anal Bioanal Chem 2009;395(5):1335−45. http://dx.doi.org/10.1007/s00216-009-2933-3.
[3] Riley RT, Torres O, Matute J, Gregory SG, Ashley-Koch AE, Showker JL, et al. Evidence for fumonisin inhibition of ceramide synthase in humans consuming maize-based foods and living in high exposure communities in Guatemala. Mol Nutr Food Res 2015;59(11):2209−24. http://dx.doi.org/10.1002/mnfr.201500499.
[4] Joint FAO/WHO Expert Committee on Food Additives. Evaluation of certain mycotoxins in food: fifty-sixth report of the Joint FAO/WHO Expert Committee on Food Additives. WHO Technical Report Series No 906, Geneva (Switzerland). 2002. https://apps.who.int/iris/bitstream/handle/10665/42448/WHO_TRS_906.pdf. [2021-5-19].
[5] European Commission. Commission regulation (EC) No 1881/2006 of 19 December 2006 setting maximum levels for certain contaminants in foodstuffs. Off J Eur Union 2006;L365:6−24.
[6] United States Food and Drug Administration. Guidance for industry: fumonisin levels in human foods and animal feeds. 2001. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/guidance-industry-fumonisin-levels-human-foods-and-animal-feeds. [2021-5-19].
[7] World Health Organization. Meeting report of the fifth international workshop on total diet studies. Seoul (Korea). 2015. https://apps.who.int/iris/bitstream/handle/10665/208752/20150514_KOR_eng.pdf. [2021-5-19].
[8] European Food Safety Authority, Food and Agriculture Organization of the United Nations (FAO). Towards a harmonised total diet study approach: a guidance document. EFSA J 2011;9(11):2450. http://dx.doi.org/10.2903/j.efsa.2011.2450.
[9] Carballo D, Tolosa J, Ferrer E, Berrada H. Dietary exposure assessment to mycotoxins through total diet studies. A review. Food Chem Toxicol 2019;128:8−20. http://dx.doi.org/10.1016/j.fct.2019.03.033.
[10] Wu YN, Li XW. The fourth China total diet study. Beijing: Chemical Industry Press. 2015.
[11] Wu YN, Zhao YF, Li JG. The fifth China total diet study. Beijing: Science Press. 2019.
[12] WHO. GEMS/Food-EURO. Second workshop on reliable evaluation of low-level contamination of food – Report of a workshop in the frame of GEMS/Food-EURO. Kulmbach (Germany): WHO, 1995.
[13] Codex Alimentarius Commission (CAC). General standard for contaminants and toxins in food and feed. CXS 193-1995. 2019. www.fao.org/fao-who-codexalimentarius/sh-proxy/en/?lnk=1&url=https%253A%252F%252Fworkspace.fao.org%252Fsites%252Fcodex%252FStandards%252FCXS%2B193-1995%252FCXS_193e.pdf. [2021-5-19].
[14] EFSA Panel on Contaminants in the Food Chain (CONTAM), Knutsen HK, Barregård L, Bignami M, Brüschweiler B, Ceccatelli S, et al. Appropriateness to set a group health-based guidance value for fumonisins and their modified forms. EFSA J 2018;16(2):e05172. http://dx.doi.org/10.2903/j.efsa.2018.5172.
[15] López P, de Rijk T, Sprong RC, Mengelers MJB, Castenmiller JJM, Alewijn M. A mycotoxin-dedicated total diet study in the Netherlands in 2013: Part II – occurrence. World Mycotoxin J 2016;9(1):89−108. http://dx.doi.org/10.3920/WMJ2015.1906.
FIGURE 1. Contamination levels of FBs (FB1+FB2+FB3) in participating provinces from the 6th China TDS. Abbreviations: FB=fumonisin B; TDS=total diet study; PLADs=provincial-level administrative divisions.
FIGURE 2. Regional distribution of EDI of FBs (FB1+FB2+FB3) in the 6th China TDS. The blue and red regions represent the participating PLADs. In the 24 participating provinces, their color intensity represents their levels of EDI respectively.
Abbreviations: EDI=estimated dietary intake; FB=fumonisin B; TDS=total diet study; PLADs=provincial-level administrative divisions.
TABLE 1. Contamination levels of fumonisins (µg/kg) and the positive rate of detection in the 6th China TDS, 2015−2020.
[The extracted table is incomplete: most food-category row labels were lost, so only fragments are recoverable. For each of the 12 food categories (24 samples each; 288 samples in total), the table reported the positive rate (%), mean, and median for FB1, FB2, FB3, and total FBs. The cereals block reads: positive rate 95.8% (FB1), 25.0% (FB2), 20.8% (FB3), 95.8% (FBs); mean 5.33, 0.58, 1.68, 7.59 µg/kg; median 2.57, 0.02, 0.02, 2.74 µg/kg. Other surviving FBs positive rates for unidentified categories were 58.3%, 62.5%, 25.0%, and 4.2%; for beverages and water, the median FBs level was 0.03 µg/kg.]
Note: for samples in which toxins were not detected, values were assumed to be half the LOD, and for samples in which toxin levels were below the LOQ, values were assumed to be half the LOQ. There are 24 samples for each food category.
Abbreviations: FB1=fumonisin B1; FB2=fumonisin B2; FB3=fumonisin B3; TDS=total diet study; LOD=limit of detection; LOQ=limit of quantification.
TABLE 2. Estimated dietary intake (ng/kg bw/day) of FBs in food categories with their percentage of PMTDI from the 6th TDS of the general Chinese population.
Food category EDI (ng/kg bw/day) Percentage of PMTDI
LB UB LB UB
Cereals 97.78 98.55 4.89 4.93
Legumes 0.85 0.90 0.04 0.04
Potatoes 0.96 1.01 0.05 0.05
Meats 1.61 1.67 0.08 0.08
Eggs 0.02 0.04 0.00 0.00
Aquatic Foods 0.59 0.61 0.03 0.03
Dairy Products 0.00 0.03 0.00 0.00
Vegetables 0.74 1.02 0.04 0.05
Fruits 0.00 0.04 0.00 0.00
Sugar 0.00 0.00 0.00 0.00
Beverages & water 0.04 0.82 0.00 0.04
Alcohol 0.19 0.20 0.01 0.01
Total 102.78 104.91 5.14 5.25
Note: Provisional Maximum Tolerable Daily Intake (PMTDI) set by JECFA is 2 μg/kg bw/day for FBs (FB1+FB2+FB3). % of PMTDI=EDI/PMTDI×100%.
Abbreviations: FB=fumonisin B; TDS=total diet study; EDI=estimated dietary intake; LB=lower bound; UB=upper bound.
Fumonisins are secondary metabolites of Fusarium and Aspergillus species, which commonly infect crops and can contaminate the whole food chain. Fumonisin B (FB) is a group of fumonisin analogues. FBs as a group are clearly the most relevant toxins among fumonisin analogues and include fumonisin B1 (FB1), fumonisin B2 (FB2), and fumonisin B3 (FB3). FB1 is the most abundant and potent of these. Classified as possibly carcinogenic to humans (Group 2B) (1), FB1 has been shown to cause a variety of diseases in animals, including hepatotoxic, nephrotoxic, hepatocarcinogenic, and cytotoxic effects in mammals (2), with high potential impact on human health (3). To protect human health from the risk of FBs, the Joint Food and Agriculture Organization (FAO)/World Health Organization (WHO) Expert Committee on Food Additives (JECFA) has set a provisional maximum tolerable daily intake (PMTDI) for the group of fumonisins (B1 and its analogues B2 and B3) at 2 µg/kg of body weight (bw)/day (4). Numerous countries have issued maximum levels for fumonisins in food and animal feed (5–6). In order to assess the risk of dietary FB intake in China, we applied a total diet study (TDS) approach. The TDS is an effective method that has been recommended by the WHO to estimate the dietary intakes of certain food chemicals (7). Unlike surveillance based on raw food commodities, TDS uses representative samples prepared as ready-to-eat dishes for the general population and combines them with consumption data to achieve a more accurate assessment (8). As a useful strategy, TDS has been conducted in several countries and regions for mycotoxin exposure assessment (9).
The China National Center for Food Safety Risk Assessment conducted the 6th China TDS in 2015−2020. This article aims to present the results on the exposure to fumonisins of the general Chinese population and to evaluate the risk with regard to international health-based guidance values.
The protocol of the 6th TDS followed a similar procedure to the previous 4th and 5th TDSs in China. Collection of consumption data and food sampling were described in previous work (10-11). In the 6th China TDS, the number of provincial-level administrative divisions (PLADs) was increased to 24 (Supplementary Table S1). Each PLAD comprised 3 or 6 survey sites according to population size (6 survey sites for PLADs with more than 50 million population, 3 survey sites for PLADs with less than 50 million population). Since approximately two-thirds of the Chinese population reside in rural areas, we randomly selected rural counties and urban cities at a ratio of 2∶1 for each PLAD.
The dietary survey adopted multiple survey methods. For the household survey, weighing plus a three-day accounting method was applied. For the individual survey, verbal interviews were conducted every 24 hours over 3 days. Samples of various food items were purchased at local markets, grocery stores, and local farms of each survey site. Thirteen dietary sample categories, namely cereals, legumes, potatoes, meats, eggs, aquatic products, dairy products, vegetables, fruits, sugars, beverages and water, alcohols, and condiments, were included in the TDS. For each PLAD, foods were cooked according to local customs, and condiments were added into the other 12 sample categories in the calculated amounts during the cooking procedure. Thus, in total, 288 dietary samples were prepared in the 6th China TDS.
FB1, FB2, and FB3 in food samples were analyzed via an isotope dilution UPLC-MS/MS method (11). Briefly, this analysis involved extraction of the food samples with an acetonitrile/water solvent mixture, followed by purification with a MultiSep 211 Fum solid phase extraction column. The chromatographic separation and mass spectrometry parameters are described in Supplementary Table S2. The method validation has also been described previously (11).
The exposure of the Chinese adult population was assessed by combining consumption data with the analytical results. When calculating the estimated daily intakes (EDI), results below the limit of detection (LOD) and/or limit of quantification (LOQ), the so-called left-censored data, were managed based on the Global Environment Monitoring System/Food Contamination Monitoring Assessment Programme (GEMS/Food) guidelines for low-level contamination of food (12). In this study, the proportion of left-censored data exceeded 60%. Thus, scenarios for the lower bound (LB) and upper bound (UB) were applied. To estimate the lowest possible EDI (LB scenario), a value of zero was assigned for results below the LOD, and the LOD was assigned for results below the LOQ. To estimate the highest possible EDI (UB scenario), the LOD was assigned for results below the LOD, and the LOQ was assigned for results below the LOQ.
The EDI (in ng/kg bw/day) of each fumonisin was calculated as follows:
$$ {\rm{EDI}} = \frac{{\mathop \sum \nolimits_{i = 1}^p {T_i} \times {F_i}}}{{bw}} $$
where $ {T}_{i} $ represents the concentration of each mycotoxin in a dietary sample from each food category i (i = 1,…, p) (ng/g), $ {F}_{i} $ is the consumption of each food category i in a day (g/d), and bw is the standard body weight of 63 kg. IBM SPSS Statistics (version 22.0, IBM Corp., New York, US) was used for data processing and analysis.
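As an illustrative sketch of how the EDI is computed under the LB and UB scenarios described above (this is not the study's actual code; the concentrations, consumption figures, LOD, and LOQ below are hypothetical):

```python
# Illustrative sketch: EDI (ng/kg bw/day) under the lower-bound (LB) and
# upper-bound (UB) substitution rules for left-censored results.
# All concentrations, consumption figures, LOD, and LOQ below are hypothetical.

BW = 63.0  # standard body weight (kg) used in the study

def substitute(result, lod, loq, scenario):
    """Replace '<LOD'/'<LOQ' flags: LB uses 0/LOD, UB uses LOD/LOQ."""
    if result == "<LOD":
        return 0.0 if scenario == "LB" else lod
    if result == "<LOQ":
        return lod if scenario == "LB" else loq
    return float(result)

def edi(samples, scenario):
    """samples: (concentration in ng/g or censoring flag, consumption in g/day, LOD, LOQ)."""
    return sum(substitute(c, lod, loq, scenario) * f for c, f, lod, loq in samples) / BW

samples = [(8.0, 715.0, 0.3, 1.0),     # e.g. a cereal composite, detected at 8.0 ng/g
           ("<LOQ", 345.0, 0.3, 1.0),  # e.g. a vegetable composite, below LOQ
           ("<LOD", 990.0, 0.3, 1.0)]  # e.g. beverages and water, below LOD

print(edi(samples, "LB"), edi(samples, "UB"))  # LB ≈ 92.4, UB ≈ 101.0 ng/kg bw/day
```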
Among the total of 288 dietary samples, the occurrence data for individual toxins and for total FBs (FB1+FB2+FB3) are shown in Table 1. The concentrations and distribution of FBs in the participating PLADs are demonstrated in Figure 1.
The frequency of detection for FBs was 32.6% (Table 1). Among 12 food categories, cereals had the highest incidence of 95.8%, with an average FBs level of 7.59 µg/kg. Shandong, Hebei, Sichuan, Jiangsu, and Shanxi had the highest level of FBs (Figure 1).
The average food consumption level was 2,439 g/day in the 6th TDS. Among the 12 food categories, beverages and water contributed most to the total consumption (40.7%), followed by cereals and vegetables making up 29.3% and 14.2%, respectively.
The EDI of each individual fumonisin was calculated according to the formula in the Methods section. The EDI of total fumonisins was the sum of the EDIs of FB1, FB2, and FB3 (FBs). In the 6th TDS, the average EDI of total FBs was 102.78–104.91 ng/kg bw/day (Table 2). Cereals were the predominant contributor, making up 97.8%–98.6% of the overall EDI. Shandong had the highest total EDI of FBs at 597.40–605.27 ng/kg bw/day (LB-UB) and the highest EDI of FB1 at 516.0–516.4 ng/kg bw/day (LB-UB) (Figure 2). However, the highest EDIs of FB2 and FB3 were in Hebei, at 87.18–87.62 ng/kg bw/day and 183.49–183.93 ng/kg bw/day (LB-UB), respectively.
Regarding the contamination levels of fumonisins, China has not yet set a maximum limit for fumonisins. The Codex Alimentarius Commission (CAC) has set maximum levels for FB1+FB2 in raw maize grain (4,000 µg/kg) and in maize flour and maize meal (2,000 µg/kg) (13). In our study, the average contamination level of fumonisins (FB1+FB2+FB3) in cereals was 7.59 µg/kg, much lower than CAC's regulation level. Even in Shandong Province, the highest aggregated FB level (41.56 µg/kg) was still much lower than CAC's limit. Relatively high contamination levels of FBs were found sporadically, such as in alcohol in Sichuan (16.55 µg/kg) and in meats in Shanxi (10.30 µg/kg). Among the three types of FBs, FB1 was the most frequently detected and the most abundant. FB2 and FB3 shared a similar incidence. It is commonly believed that FB3 often co-exists with FB1 and FB2 and that its concentration usually does not exceed that of FB1 and FB2, typically amounting to an additional 10%–15% of FB1 levels (14). Thus, FB3 was usually considered a mycotoxin of minor importance. However, in our study, the relative level of FB3 was 21%–25% in cereals, legumes, and potatoes, even higher than that of FB2. The relative amount of FB1 compared to FB2 and FB3 is related to climatic factors, such as water activity and temperature (14). Therefore, these results provide some information for food safety surveillance and for establishing China's maximum limits for fumonisins in the future. First, in addition to FB1 and FB2, FB3 should be included. Second, besides cereals and their products, potatoes and meats need to be considered as candidate food categories in the surveillance plan or within the scope of maximum limits.
The dietary exposure to FBs of 104.91 ng/kg bw/day (upper bound) accounted for about 5% of the PMTDI set by JECFA, indicating that the risk of dietary exposure to fumonisins in China was at a safe level. The highest EDI of FBs was found in Shandong Province, accounting for approximately 30% of the PMTDI. Together with Hebei and Jiangsu, the three provinces with the highest EDIs for FBs are located in the east of the North China Plain (Figure 2). Compared with the 5th China TDS (EDI, 50 ng/kg bw/day) (2), however, the exposure level has doubled, indicating a marked increase. Among the three types of FBs, FB1, FB2, and FB3 contributed 70.6%, 7.6%, and 21.8% of the overall dietary exposure, respectively. The contribution of FB3 to the EDI of total fumonisins also exceeded that of FB2 and should not be overlooked. This is the first TDS-based estimate of dietary exposure to FB3. FB1 and FB2 have been investigated in most TDSs, but FB3, considered the least important of the three fumonisins, was seldom included. The Netherlands TDS (15) included FB3, but the sensitivity of the method (LOD=3.3 µg/kg) was not sufficient to detect fumonisins in ready-to-eat dishes.
This study was subject to some limitations. As with other TDSs, uncertainties existed in the exposure assessment, arising from the analytical methods, the consumption statistics, and especially sample representativeness. Mycotoxin contamination occurs sporadically and can be affected by temperature, humidity, geographic location, and storage duration. For such a large-scale study, considerable uncertainty could be introduced by the limited sample numbers and the heterogeneous distribution of toxins.
In the 6th China TDS, exposure estimates for FBs were generally of low concern, amounting to 5.25% of the PMTDI for the general population. However, it should be noted that populations in relatively high-exposure regions, or high consumers of certain food categories, may be at higher risk. Cereals were the predominant source and contributed over 90% of the dietary exposure to fumonisins. The marked increase in the EDI of fumonisins and the considerable contribution from FB3 in the 6th China TDS warrant further attention.
Acknowledgements: The 24 provincial-level CDCs.
Conflicts of interest: The authors declare that there are no conflicts of interest.
[1] International Agency for Research on Cancer (IARC). Agents classified by the IARC Monographs. 2002. https://monographs.iarc.who.int/list-of-classifications.
[2] Dall'Asta C, Mangia M, Berthiller F, Molinelli A, Sulyok M, Schuhmacher R, et al. Difficulties in fumonisin determination: the issue of hidden fumonisins. Anal Bioanal Chem 2009;395(5):1335 − 45. http://dx.doi.org/10.1007/s00216-009-2933-3.
[3] Riley RT, Torres O, Matute J, Gregory SG, Ashley-Koch AE, Showker JL, et al. Evidence for fumonisin inhibition of ceramide synthase in humans consuming maize-based foods and living in high exposure communities in Guatemala. Mol Nutr Food Res 2015;59(11):2209 − 24. http://dx.doi.org/10.1002/mnfr.201500499.
[4] Joint FAO/WHO Expert Committee on Food Additives. Evaluation of certain mycotoxins in food: fifty-sixth report of the Joint FAO/WHO Expert Committee on Food Additives. WHO Technical Report Series No 906, Geneva (Switzerland). 2002. https://apps.who.int/iris/bitstream/handle/10665/42448/WHO_TRS_906.pdf. [2021-5-19].
[6] United States Food and Drug Administration. Guidance for industry: fumonisin levels in human foods and animal feeds. 2001. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/guidance-industry-fumonisin-levels-human-foods-and-animal-feeds. [2021-5-19]
[7] World Health Organization. Meeting report of the fifth international workshop on total diet studies. Seoul (Korea). 2015. https://apps.who.int/iris/bitstream/handle/10665/208752/20150514_KOR_eng.pdf. [2021-5-19].
[8] European Food Safety Authority, Food and Agriculture Organization of the United Nations (FAO). Towards a harmonised total diet study approach: a guidance document. EFSA J 2011;9(11):2450. http://dx.doi.org/10.2903/j.efsa.2011.2450.
[9] Carballo D, Tolosa J, Ferrer E, Berrad H. Dietary exposure assessment to mycotoxins through total diet studies. A review. Food Chem Toxicol 2019;128:8 − 20. http://dx.doi.org/10.1016/j.fct.2019.03.033.
[14] EFSA Panel on Contaminants in the Food Chain (CONTAM), Knutsen HK, Barregård L, Bignami M, Brüschweiler B, Ceccatelli S, et al. Appropriateness to set a group health-based guidance value for fumonisins and their modified forms. EFSA J 2018;16(2):e05172. http://dx.doi.org/10.2903/j.efsa.2018.5172.
[15] López P, de Rijk T, Sprong RC, Mengelers MJB, Castenmiller JJM, Alewijn M. A mycotoxin-dedicated total diet study in the Netherlands in 2013: Part II – occurrence. World Mycotoxin J 2016;9(1):89 − 108. http://dx.doi.org/10.3920/WMJ2015.1906.
November/December 2009 Grow-up rate of a radial solution for a parabolic-elliptic system in $\mathbf{R}^2$
Takasi Senba
Adv. Differential Equations 14(11/12): 1155-1192 (November/December 2009). DOI: 10.57262/ade/1355854788
We consider radial and positive solutions to a parabolic-elliptic system in $\mathbf{R}^2$. This system was introduced as a simplified version of the Keller-Segel model. The system has the critical value of the total mass. If the total mass of a solution is more than the critical value, the solution blows up in finite time. If the total mass of a solution is less than the critical value, the solution exists globally in time. Recently, some properties of solutions whose total mass is equal to the critical value have been investigated. In this paper, we construct a grow-up solution whose total mass is equal to the critical value. Furthermore, we show that the grow-up rate of the solution is equal to $O((\log t)^2)$.
Takasi Senba. "Grow-up rate of a radial solution for a parabolic-elliptic system in $\mathbf{R}^2$." Adv. Differential Equations 14 (11/12) 1155 - 1192, November/December 2009. https://doi.org/10.57262/ade/1355854788
Published: November/December 2009
First available in Project Euclid: 18 December 2012
zbMATH: 1182.35054
MathSciNet: MR2560872
Digital Object Identifier: 10.57262/ade/1355854788
Primary: 25B40, 35B35, 35K55, 35K57, 92C17
Rights: Copyright © 2009 Khayyam Publishing, Inc.
Adv. Differential Equations
Vol.14 • No. 11/12 • November/December 2009
Khayyam Publishing, Inc.
Takasi Senba "Grow-up rate of a radial solution for a parabolic-elliptic system in $\mathbf{R}^2$," Advances in Differential Equations, Adv. Differential Equations 14(11/12), 1155-1192, (November/December 2009)
Berger, R., Gerber, C., Gimzewski, J. K., Meyer, E. & Güntherodt, H. J. Thermal analysis using a micromechanical calorimeter. Applied Physics Letters 69, 40–42 (1996).
Joachim, C. & Gimzewski, J. K. Analysis of low-voltage I (V) characteristics of a single C60 molecule. EPL (Europhysics Letters) 30, 409 (1995).
Gerber, C., Gimzewski, J., Reihl, B. & Schlittler, R. Apparatus and method for spectroscopic measurements. (1995).
Gerber, C., Gimzewski, J., Reihl, B. & Schlittler, R. Calorimetric sensor. (1995).
Joachim, C., Gimzewski, J. K., Schlittler, R. R. & Chavy, C. Electronic transparence of a single C60 molecule. Phys. Rev. Lett. 74, 2102–2105 (1995).
et al. Erratum: A femtojoule calorimeter using micromechanical sensors [Rev. Sci. Instrum. 65, 3793 (1994)]. Review of Scientific Instruments 66, 3083–3083 (1995).
Jung, T., Schlittler, R., Gimzewski, J. K. & Himpsel, F. J. One-dimensional metal structures at decorated steps. Applied Physics A 61, 467–474 (1995).
Welland, M. E. & Gimzewski, J. K. Perspectives on the limits of fabrication and measurement. PHILOS T ROY SOC A 353, 279–279 (1995).
Gimzewski, J. K. Photons and Local Probes 189–208 (Springer Netherlands, 1995).
Berndt, R. & Gimzewski, J. K. Photon emission in scanning tunneling microscopy: interpretation of photon maps of metallic systems. SPIE MILESTONE SERIES MS 107, 376–376 (1995).
Gimzewski, J. K. & Humbert, A. Scanning tunneling microscopy of surface microstructure on rough surfaces. SPIE MILESTONE SERIES MS 107, 249–249 (1995).
Welland, M. E. & Gimzewski, J. K. Ultimate limits of fabrication and measurement. (Kluwer Academic, 1995).
Gimzewski, J. K. & Welland, M. E. Ultimate Limits of Fabrication and Measurements. NATO ASI Series 292, (1995).
Joachim, C. & Gimzewski, J. K. CONTACTING A SINGLE C60 MOLECULE. Proceedings of the NATO Advanced Research Workshop: (Humboldt-Universität zu Berlin, 1994).
et al. A femtojoule calorimeter using micromechanical sensors. Review of Scientific Instruments 65, 3793–3798 (1994).
Reihl, B. et al. Low-temperature scanning tunneling microscopy. Physica B: Condensed Matter 197, 64–71 (1994).
Gimzewski, D., Parrinello, P. & Reihl, D. Molecular recording/reproducing method and recording medium. (1994).
Dumas, P. et al. Nanostructuring of porous silicon using scanning tunneling microscopy. Journal of Vacuum Science & Technology B 12, 2067–2069 (1994).
Berndt, R. & Gimzewski, J. K. Photon Emission from C60 in a Nanoscopic Cavity. Proceedings of the NATO Advanced Research Workshop: (Humboldt-Universität zu Berlin, 1994).
Galaxy, I. Photothermal spectroscopy with femtojoule sensitivity using a micromechanical device. Nature 372, 3 (1994).
Berndt, R. & Gimzewski, J. K. Atomic and Nanometer-Scale Modification of Materials: Fundamentals and Applications 327–335 (Springer Netherlands, 1993).
Dumas, P. et al. Direct observation of individual nanometer-sized light-emitting structures on porous silicon surfaces. EPL (Europhysics Letters) 23, 197 (1993).
Berndt, R., Gimzewski, J. K. & Johansson, P. Electromagnetic interactions of metallic objects in nanometer proximity. Physical review letters 71, 3493 (1993).
Berndt, R. & Gimzewski, J. K. Isochromat spectroscopy of photons emitted from metal surfaces in an STM. Annalen Der Physik 505, 133–140 (1993).
Gimzewski, J. K., Vatel, O., Hallimaoui, A., Dumas, Ph., Gu, M., Syrykh, C. & Salvan, F. Optical Properties of Low Dimensional Silicon Structures: Proceedings of the NATO Advanced Research Workshop, Meylan, France, March 1-3, 1993 244, 157 (1993).
Astronomy Stack Exchange is a question and answer site for astronomers and astrophysicists.
Detection of exo-planets
One method used for detecting exo-planets is to look for a slight dip in the parent star's luminosity as the planet transits the stellar disc. Intuitively, it seems to me that if planetary systems in our galactic neighborhood are randomly oriented, there would have to be a very large proportion of them in which transits can never happen from Earth's viewpoint. Perhaps, however, the assumption of random orientation is incorrect, and there is some alignment of the axes of rotation of planetary systems, which would facilitate detection of planets in some preferred plane (the galactic plane?).
In popular presentations concerning the search for exo-planets, I have never seen this issue addressed. What observations and/or assumptions are used in arriving at a realistic estimate of the number of exo-planets in our region of the galaxy?
(There are related questions in this forum, but I haven't found one that asks about the possible alignment of the axes of rotation.)
exoplanet planetary-transits
Clyde
It isn't usually an issue because most experiments are simply concerned with finding exoplanets. They are rarely designed in such a way that it is easy to estimate population statistics because of all sorts of biases that go into selecting the targets. Unfortunately the search for exoplanets has turned into a sport where discovery is everything.
If one assumes random orientation of orbits (and that is all it is, an assumption) then the probability of a transit scales roughly as $$P \simeq \frac{R_p+ R_s}{a}$$ where $R_p$ and $R_s$ are the radius of the planet and host star respectively and $a$ the planet's orbital radius (with small modifications for non-circular orbits). The larger this is, the more likely a transit is to occur. Hence large exoplanets orbiting close to large stars are more likely to transit. In principle then, this effect can be corrected for when calculating the statistics and frequency of exoplanets.
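To put rough numbers on this (an illustrative calculation added here, not part of the original answer): for a hot Jupiter at $a \approx 0.05$ AU around a Sun-like star, $R_s \approx 7\times10^{5}$ km and $a \approx 7.5\times10^{6}$ km, so $P \approx (R_p+R_s)/a \approx 0.1$, i.e. about a 1-in-10 chance of a transit; for an Earth-like planet at 1 AU the same estimate gives $P \approx 0.005$, or roughly 1 in 200.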
So how good is the random orbital inclination assumption? I honestly think nobody knows at the moment. I have done work on the possible alignment of spin axes within the low-mass stars of clusters (Jackson & Jeffries 2010) finding consistency with the random hypothesis. More recent work using asteroseismology suggests that there may be alignment for more massive stars (Corsaro et al. 2017). However, even if the spin axes (and therefore presumably the majority of planet orbits) of stars in clusters line up, there is no obvious reason why each cluster should have the same angular momentum vector. When the clusters eventually disperse into the field, then they would, presumably, form a pseudo-random distribution?
Except, what if the Galactic tides or a large-scale Galactic magnetic field played a role in shaping the angular momentum direction of the clouds that formed the clusters. Might it be possible for some alignment to persist to old age? Corsaro et al. argue that interactions within a cluster are not sufficient to "scramble" the angular momenta after star formation has finished. Close interactions between stars become much less likely after they emerge from a cluster into the field. An intriguing piece of work by Rees & Zijlstra (2013) found that there was evidence for a non-random distribution of orientation for bipolar planetary nebulae towards the Galactic bulge. This suggested that the orbital angular momenta of binary systems responsible for the bipolar shape of the nebulae were oriented in the Galactic plane. The result is highly statistically significant but as far as I know has not been followed up despite its obvious implications for estimations of transit yields from exoplanetary surveys.
I think that there will be a much better answer to this question once we have all-sky exoplanet searches of the quality of the Kepler satellite (the main Kepler survey was in one particular direction). It should become very obvious if there are changes in the planet yields as a function of position of the sky (although you also have to control for the types of star being observed) associated with any large-scale alignment. Maybe there is enough information in the Kepler K2 fields that are taken at positions around the ecliptic - I have not seen any analysis. However, such data will surely become available with the launch of NASA's all-sky TESS satellite in 2018.
Rob Jeffries
The assumption of random orientations is a reasonable one. One reason that exoplanets weren't detected in the 1980s was the expectation that most solar systems would be like ours, with large planets at a great distance, making transits rare, infrequent and hard to detect.
Hot Jupiters changed that. Most of the planets that Kepler detects are very close to their host star. This means that no great coincidence is required for the inclination of the axis of rotation relative to the solar system. An axial inclination of between 80 and 90 degrees would allow for a transit in many of the systems discovered.
This is taken into account when estimating the number of stars with planets, with the conclusion that nearly all sun-like stars have planetary systems. Kepler can only detect a fraction of these, but it surveys so many stars that it has found a good number of planetary systems. But most of the stars observed haven't shown a transit. Extrapolating from its discoveries, we have to conclude that the main reason that we don't detect planets around the other stars is due to the inclination of the exoplanetary systems.
For analysis of the probabilities involved in transiting exoplanets, you can consult Transit Probabilities for Stars with Stellar Inclination Constraints
James K
What is the evidence for your first sentence? – Rob Jeffries Jun 3 '17 at 9:34
Thank you for your fast and informative answer. I'd been wondering about this for some time but didn't know whom to ask until I found this cool website. – Clyde Jun 3 '17 at 10:09
The linked article assumes random inclinations. "We begin by reviewing the transit probability for a single star under the assumption that the planet's orbital inclination is randomly and evenly distributed over all possible orientations." It seems a reasonable assumption, at least for the purposes of modelling. – James K Jun 3 '17 at 15:30
It is just an assumption that everyone makes (including me in my work) because there is no other game in town. The assumption is not necessarily "a good one", it is something that is forced upon us. – Rob Jeffries Jun 3 '17 at 17:28
Ok, "reasonable one" is better. It's not forced on anyone, you can model the distribution of spin axes any way you want, providing it fits the data. The random model is simple, and the papers you cite don't seem to suggest that it is very wrong. So it is a reasonable model for estimating the population of stars with planetary systems. – James K Jun 3 '17 at 20:23
Can we add an uncountable number of positive elements, and can this sum be finite?
I always have trouble understanding mathematical operations when dealing with an uncountable number of elements. Any help would be great.
real-analysis sequences-and-series summation
fierydemon
Somewhat related: Back in Leibniz's time, the concept of an integral was first conceived as a sum of infinitely many infinitesimal quantities. The idea of infinities being countable or uncountable had not yet been invented then, but if it had, an integral would have been described as a sum of uncountably many infinitesimals, one for each possible value of the integration variable. It later turned out that it is difficult to make this intuition rigorous, but I suspect non-standard analysis might have some way to do it. – Henning Makholm Aug 29 '15 at 19:06
You can find several older related posts. For example here (and in the linked questions) or here (and in the linked questions). – Martin Sleziak Sep 2 '15 at 12:30
In particular, this post shows that an uncountable sum cannot be finite if all summands are non-zero: The sum of an uncountable number of positive numbers – Martin Sleziak Sep 2 '15 at 12:33
@Henning Makholm an integral is a sum of a COUNTABLE number of infinitesimals. – Anixx Jul 17 '17 at 14:32
@Anixx: No -- the sum of countably many infinitesimals would itself be infinitesimal (each partial sum is less than every $\frac1n$, so their limit would also be). – Henning Makholm Jul 17 '17 at 14:47
Suppose $\{s_i : i\in\mathcal I\}$ is a family of positive numbers.$^\dagger$ We can define $$ \sum_{i\in\mathcal I} s_i = \sup\left\{ \sum_{i\in\mathcal I_0} s_i : \mathcal I_0 \subseteq \mathcal I\ \&\ \mathcal I_0 \text{ is finite.} \right\} $$ (If both positive and negative numbers are involved, then we have to talk about a limit rather than about a supremum, and then the definition is more complicated and we have questions of conditional convergence and rearrangements.)
Now consider \begin{align} & \{i\in\mathcal I : s_i \ge 1\} \\[4pt] & \{i\in\mathcal I : 1/2 \le s_i < 1 \} \\[4pt] & \{i\in\mathcal I : 1/3 \le s_i < 1/2 \} \\[4pt] & \{i\in\mathcal I : 1/4 \le s_i < 1/3 \} \\[4pt] & \quad \quad \quad \vdots \end{align} If one of these sets is infinite, then $\sum_{i\in\mathcal I} s_i=\infty$. But if all are finite, then $\mathcal I$ is at most countably infinite.
Thus the sum of uncountably many positive numbers is infinite.
I don't know whether, by some arguments about rearrangements, one could have a sensible definition of a sum of numbers not all having the same sign that would give a well-defined, finite sum of uncountably many numbers.
$^\dagger$ In the initial edition of this answer, I said "Let $S$ be a set of positive numbers" and then went on to say $$ \sum S = \left\{ \sum S_0 : S_0\subseteq S\ \&\ S_0\text{ is finite.} \right\} $$ However, Dustan Levenstein pointed out in comments that "this definition fails to allow for the same number to occur twice in a sum". Rather than "twice", I'd say "more than once", since a number might even occur an uncountably infinite number of times.
Ilmari Karonen
Michael Hardy
I don't think such a definition would actually be useful for anything, because already the conditionally convergent countable sums are pathological and absolutely aren't a good generalization of finite addition. – Blazej Aug 29 '15 at 18:06
@AyushKhaitan Blazej's comment is referring to the case of including negative numbers. Michael Hardy's definition is perfectly sensible for the case of summing up a system of only positive numbers. – Dustan Levenstein Aug 29 '15 at 18:22
Although, tiny quibble: this definition fails to allow for the same number to occur twice in a sum. – Dustan Levenstein Aug 29 '15 at 18:22
You misread Michael's statement. He said if any of those intersections is infinite, then the sum is automatically infinite. If they're all finite, then there are only countably many (positive) elements of $S$. – Dustan Levenstein Aug 29 '15 at 18:27
I've drastically edited the answer to allow for some terms to occur more than once. ${}\qquad{}$ – Michael Hardy Aug 29 '15 at 18:39
We have the following proposition.
Proposition 1. Let $X$ be an at most countable set, and let $f\colon X\to\mathbf R$ be a function. Then the series $\sum_{x\in X} f(x)$ is absolutely convergent if and only if $$\sup\left\{\sum_{x\in A}|f(x)|:A\subseteq X, A\text{ finite}\right\}<\infty.$$
Inspired by this proposition, we may now define the concept of an absolutely convergent series even when the set $X$ could be uncountable.
Definition 2. Let $X$ be a set (which could be uncountable), and let $f\colon X\to\mathbf R$ be a function. We say that the series $\sum_{x\in X} f(x)$ is absolutely convergent if and only if $$\sup\left\{\sum_{x\in A}|f(x)|:A\subseteq X, A\text{ finite}\right\}<\infty.$$
Note that we have not yet said what the series $\sum_{x\in X} f(x)$ is equal to. This shall be accomplished by the following proposition.
Proposition 3. Let $X$ be a set (which could be uncountable), and let $f\colon X\to\mathbf R$ be a function such that the series $\sum_{x\in X} f(x)$ is absolutely convergent. Then the set $\{x\in X:f(x)\ne0\}$ is at most countable.
Because of this, we can define the value of $\sum_{x\in X} f(x)$ for any absolutely convergent series on an uncountable set $X$ by the formula $$\sum_{x\in X} f(x):=\sum_{x\in X:f(x)\ne0} f(x),$$ since we have replaced a sum on an uncountable set $X$ by a sum on the countable set $\{x\in X:f(x)\ne0\}$. (Note that if the former sum is absolutely convergent, then the latter one is also.) Note also that this definition is consistent with the definitions for series on countable sets.
Remark. The definition of series on countable sets that we use is
Definition 4. Let $X$ be a countable set, and let $f\colon X\to\mathbf R$ be a function. We say that the series $\sum_{x\in X}f(x)$ is absolutely convergent iff for some bijection $g\colon\mathbf N\to X$, the sum $\sum_{n=0}^\infty f(g(n))$ is absolutely convergent. We then define the sum of $\sum_{x\in X}f(x)$ by the formula $$\sum_{x\in X}f(x)=\sum_{n=0}^\infty f(g(n)).$$
Cristhian Gz
Let $H$ be a positive unlimited integer of nonstandard analysis. Then, for example, the sum
$$ \sum_{n=1}^H n = \frac{H(H+1)}{2}$$
is a sum of uncountably many positive numbers... but it's a hyperfinite nonstandard sum, so it exists by the usual methods of nonstandard analysis. The sum is unlimited, though. Other sums can be finite: e.g.
$$ \sum_{n=1}^H \frac{1}{n!}$$
is a finite nonstandard real number that is infinitesimally close to $e$.
That said, IMO, thinking of hyperfinite sums from nonstandard analysis as being sums of uncountably many elements isn't a particularly fruitful line of thought. (also, the sum only works for internal sequences of elements anyways; you can't take an arbitrary uncountable collection)
I bring this up mainly to show that uncountable sums can make sense in some contexts, even if you can't really do much in a standard setting. Each summation operator one might define can have its own sorts of pecularities.
Hurkyl
Here is the more general definition of the sum of any function $f:I\rightarrow E$, where $I$ is any non-empty set and $E$ is a real or complex topological vector space.
Definition 1 We say that the family $f=(f(i))_{i\in I}$ is summable in $E$ with sum $\omega\in E$ if and only if, for every neighborhood $V$ of $\omega$, there exists a finite subset $A\subset I$ such that for all finite parts $K$ of $I$ containing $A$, we have $\sum_{i\in K}f(i)-\omega\in V$. $f$ is said to be absolutely summable if and only if $\|f\|=(\|f(i)\|)_{i\in I}$ is summable in $\mathbb{F}=\mathbb{R}$ or $\mathbb{C}$.
Remark 2 This definition does not guarantee the uniqueness of the sum of a given family. The topological vector space $E$ must be separated (i.e. Hausdorff) for the uniqueness to hold, in which case we write $$\sum_{i\in I}f(i)=\omega.$$
Proposition 3 If $E$ is a normed space, then a family $f=(f(i))_{i\in I}$ is summable of sum $\omega$ if and only if for any $0<\varepsilon\in \mathbb{R}$ there exists a finite part $A$ of $I$ such that for all finite part $K$ of $I$ with $A\subset K$, we have: $$\|\sum_{i\in K}f(i)-\omega\|<\varepsilon.$$
Remark 4 If $f=(f(i))_{i\in I}$ is a family in $E$, we can define $\mathcal{F}(I)=\{A\subset I:~A~\text{is finite}\}$. Endowing $\mathcal{F}(I)$ with the direction ``$\leq$'' defined by $A\leq B$ in $\mathcal{F}(I)$ iff $A\subset B$, we make $\mathcal{F}(I)$ a directed set. Then, if we define $S:\mathcal{F}(I)\rightarrow E$ by $$S(K)=\sum_{i\in K} f(i),$$ we get a net in $E$. We have the following:
Proposition 5 $f$ is summable if and only if the net $S$, defined above, is convergent.
Notice that here we have defined the sum of any kind of family, countable or uncountable. But it can be proved that in the case $E=\mathbb{F}$ and $I=\mathbb{N}$, summability coincides with absolute summability, which in turn coincides with the commutative convergence of the series $\sum_{n=1}^{+\infty}f(n)$, and the value of the sum of the family is the same as the value of this series.
Tighana
This is a very elaborate Answer but it is hard to spot the true conclusion to be drawn: No, it is not possible for an uncountable set of positive "elements" to have a finite sum. In particular generalizing to complex numbers seems to obscure the facts; complex numbers are not ordered as positive and not positive. – hardmath Jun 3 '17 at 15:23
Yes, you are right. It is not possible. Because for an absolutely summable family in a normed space $E$, the set $\{i\in I: f(i)\neq 0\}$ is at most countable. – Tighana Jan 6 '18 at 7:01
Evaluate $\dfrac{dy}{dx}$ if $y \,=\, e^{x+3\log_{e}{(x)}}$
Algebraic functions
In this differentiation problem, an algebraic equation is given in terms of two variables $x$ and $y$. The value of $y$ is equal to the Napier constant $e$ raised to the power of the sum of the variable $x$ and three times the natural logarithm of $x$, and the derivative of $y$ is to be calculated with respect to $x$. The differentiation of the variable $y$ can be done by two mathematical approaches, so let's learn each method with a step-by-step procedure.
Differentiation by the formulas
In this method, a derivative rule is used for the differentiation of the variable $y$ by differentiating the give algebraic equation $y \,=\, e^{\displaystyle \, x+3\log_{e}{(x)}}$.
Prepare the equation for the differentiation
Check whether the given algebraic equation can be simplified. The left-hand side is just a variable, so it cannot be simplified. The right-hand side is a mathematical expression, so let's try to simplify it before differentiating.
$y \,=\, e^{\displaystyle \, x+3\log_{e}{(x)}}$
On the right-hand side of the equation, the base of the exponential function is $e$ and two expressions are added in the exponent. The whole exponential expression can therefore be split into the product of two exponential functions, as per the product rule of exponents.
$\implies$ $y$ $\,=\,$ $e^{\displaystyle \, x} \times e^{\displaystyle \, 3\log_{e}{(x)}}$
The multiplying factor $3$ can be shifted to the exponent position of the variable $x$ in the logarithmic function by the power rule of logarithms.
$\implies$ $y$ $\,=\,$ $e^{\displaystyle \, x} \times e^{\displaystyle \, \log_{e}{\big(x^3\big)}}$
According to the fundamental rule of logarithms, the mathematical constant $e$ raised to the power of natural logarithm of $x$ cubed is equal to $x$ cubed.
$\implies$ $y$ $\,=\,$ $e^{\displaystyle \, x} \times x^3$
Find the derivative by the Product rule
Let's start differentiating the given algebraic equation with respect to $x$.
$\implies$ $\dfrac{d}{dx}{\,(y)}$ $\,=\,$ $\dfrac{d}{dx}{\,\Big(e^{\displaystyle \, x} \times x^3\Big)}$
$\implies$ $\dfrac{dy}{dx}$ $\,=\,$ $\dfrac{d}{dx}{\,\Big(e^{\displaystyle \, x} \times x^3\Big)}$
This equation expresses that the derivative of $y$ with respect to $x$ can be calculated by finding the differentiation of the product of the natural exponential function in $x$ and $x$ cubed. Due to the multiplication of the functions, the derivative can be evaluated by using the product rule of differentiation.
$\,\,=\,$ $e^{\displaystyle \, x} \times \dfrac{d}{dx}{\,\big(x^3\big)}$ $+$ $x^3 \times \dfrac{d}{dx}{\,\big(e^{\displaystyle \, x}\big)}$
The derivative of $x$ cubed is evaluated by the power rule of derivatives and the derivative of natural exponential function can be calculated from the derivative rule of natural exponential function.
$\implies$ $\dfrac{dy}{dx}$ $\,=\,$ $e^{\displaystyle \, x} \times 3x^{\, 3-1}$ $+$ $x^3 \times e^{\displaystyle \, x}$
Simplify the mathematical expression
The differentiation of the variable $y$ with respect to $x$ is evaluated successfully. It is time to simplify the mathematical expression in the right hand side of the equation.
$\implies$ $\dfrac{dy}{dx}$ $\,=\,$ $e^{\displaystyle \, x} \times 3x^{\, 2}$ $+$ $x^3 \times e^{\displaystyle \, x}$
The natural exponential function in $x$ appears in both terms on the right-hand side of the equation. Hence, it can be taken out as a common factor.
$\implies$ $\dfrac{dy}{dx}$ $\,=\,$ $e^{\displaystyle \, x} \times \big(3x^{\, 2}+x^3\big)$
In the second factor of the right-hand side expression, $x$ squared is common to both terms. Hence, it can also be taken out as a common factor.
$\implies$ $\dfrac{dy}{dx}$ $\,=\,$ $e^{\displaystyle \, x} \times \big(3x^{\, 2}+x^{\,2+1}\big)$
$\implies$ $\dfrac{dy}{dx}$ $\,=\,$ $e^{\displaystyle \, x} \times \big(3x^{\, 2}+x^{\,2} \times x^{\,1}\big)$
$\implies$ $\dfrac{dy}{dx}$ $\,=\,$ $e^{\displaystyle \, x} \times \big(3x^{\, 2}+x^{\,2} \times x\big)$
$\implies$ $\dfrac{dy}{dx}$ $\,=\,$ $e^{\displaystyle \, x} \times x^{\, 2} \times \big(3+x\big)$
$\,\,\,\therefore\,\,\,\,\,\,$ $\dfrac{dy}{dx}$ $\,=\,$ $e^{\displaystyle \, x} x^{\, 2} \big(3+x\big)$
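As a quick cross-check of this result (not part of the two methods discussed here), the derivative can be verified symbolically; a minimal sketch using SymPy is shown below:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.exp(x + 3*sp.log(x))                # the given function y = e^(x + 3*log_e(x))
dydx = sp.diff(y, x)
claimed = sp.exp(x) * x**2 * (x + 3)       # the result obtained above
# expand_power_exp rewrites e^(x + 3*log x) as e^x * x^3, after which the
# difference should simplify to 0 if the hand calculation is correct.
print(sp.simplify(sp.expand_power_exp(dydx) - claimed))
```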
Differentiation by the Limits
The given algebraic equation is $y \,=\, e^{\displaystyle \, x+3\log_{e}{(x)}}$ and it can be simplified as follows.
$\implies$ $y \,=\, e^{\displaystyle \, x} \times e^{\displaystyle \, 3\log_{e}{(x)}}$
$\implies$ $y \,=\, e^{\displaystyle \, x} \times e^{\displaystyle \, \log_{e}{(x^3)}}$
$\implies$ $y \,=\, e^{\displaystyle \, x} \times x^3$
$\,\,\,\therefore\,\,\,\,\,\,$ $y \,=\, e^{\displaystyle \, x} x^3$
Differentiation by the First Principle
According to the fundamental definition of the derivative, the derivative of a function can be defined in limit form as follows. This first principle can be used to find the differentiation of the variable $y$.
$\dfrac{d}{dx}{\,f(x)}$ $\,=\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{f(x+h)-f(x)}{h}}$
Take $y \,=\, f(x)$, then
$(1).\,\,\,$ $f(x) \,=\, e^{\displaystyle \, x} x^3$
$(2).\,\,\,$ $f(x+h) \,=\, e^{\displaystyle \, (x+h)} (x+h)^3$
Now, substitute them in the first principle of the differentiation to start the procedure of the differentiation.
$\implies$ $\dfrac{d}{dx}{\,\Big(e^{\displaystyle \, x} x^3\Big)}$ $\,=\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{e^{\displaystyle \, (x+h)} (x+h)^3-e^{\displaystyle \, x} x^3}{h}}$
Now, let's try the direct substitution method to find the limit of the algebraic function in rational form.
$\implies$ $\dfrac{d}{dx}{\,\Big(e^{\displaystyle \, x} x^3\Big)}$ $\,=\,$ $\dfrac{e^{\displaystyle \, (x+0)} (x+0)^3-e^{\displaystyle \, x} x^3}{0}$
$\implies$ $\dfrac{d}{dx}{\,\Big(e^{\displaystyle \, x} x^3\Big)}$ $\,=\,$ $\dfrac{e^{\displaystyle \, x} x^3-e^{\displaystyle \, x} x^3}{0}$
$\implies$ $\dfrac{d}{dx}{\,\Big(e^{\displaystyle \, x} x^3\Big)}$ $\,=\,$ $\dfrac{\cancel{e^{\displaystyle \, x} x^3}-\cancel{e^{\displaystyle \, x} x^3}}{0}$
$\implies$ $\dfrac{d}{dx}{\,\Big(e^{\displaystyle \, x} x^3\Big)}$ $\,=\,$ $\dfrac{0}{0}$
The limit evaluates to the indeterminate form $\dfrac{0}{0}$, which makes it clear that the direct substitution method cannot be used to find the derivative of the simplified function. Therefore, we have to find the limit of the function by an alternative method.
Simplify the function in rational form
$\dfrac{d}{dx}{\,\Big(e^{\displaystyle \, x} x^3\Big)}$ $\,=\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{e^{\displaystyle \, (x+h)} (x+h)^3-e^{\displaystyle \, x} x^3}{h}}$
Now, let's try to simplify the right hand side expression of the equation for calculating the derivative of the simplified function by the limits.
$\,\,=\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{e^{\displaystyle \, (x+h)} \times (x+h)^3-e^{\displaystyle \, x} \times x^3}{h}}$
Observe that a natural exponential function appears in both terms of the numerator of the rational expression. So, split the natural exponential function in the first term of the numerator.
$\,\,=\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{e^{\displaystyle \, x} \times e^{\displaystyle \, h} \times (x+h)^3-e^{\displaystyle \, x} \times x^3}{h}}$
Now, take the common factor out from the terms in the numerator of the rational function.
$\,\,=\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{e^{\displaystyle \, x} \times \Big(e^{\displaystyle \, h} \times (x+h)^3-x^3\Big)}{h}}$
$\,\,=\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \Bigg(e^{\displaystyle \, x} \times \dfrac{e^{\displaystyle \, h} \times (x+h)^3-x^3}{h}\Bigg)}$
In this case, the mathematical constant $e$ raised to the power $x$ is constant. Hence, it can be taken out from the limit operation by the constant multiple rule of the limits.
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \large \lim_{h \,\to\, 0}{\normalsize \dfrac{e^{\displaystyle \, h} \times (x+h)^3-x^3}{h}}$
In the first term of the numerator, two terms are connected by a plus sign in the second factor and the sum of terms has an exponent of three. It can be expanded by the cube of sum of two terms formula.
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \large \lim_{h \,\to\, 0}{\normalsize \dfrac{e^{\displaystyle \, h} \times \Big(x^3+h^3+3xh(x+h)\Big)-x^3}{h}}$
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \large \lim_{h \,\to\, 0}{\normalsize \dfrac{e^{\displaystyle \, h} \times \Big(x^3+h^3+3x^2h+3xh^2\Big)-x^3}{h}}$
The second term in the numerator is $x$ cubed. There is an $x$ cube term in the second factor of the first term. Hence, let us try to take it common from the terms for simplifying the expression further.
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \large \lim_{h \,\to\, 0}{\normalsize \dfrac{e^{\displaystyle \, h} \times \Big(x^3+(h^3+3x^2h+3xh^2)\Big)-x^3}{h}}$
For our convenience, the expression in the second factor of the first term in the numerator can be split as two terms. Now, multiply the terms in the expression by its factor as per distributive property.
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \large \lim_{h \,\to\, 0}{\normalsize \dfrac{e^{\displaystyle \, h} \times x^3+e^{\displaystyle \, h} \times (h^3+3x^2h+3xh^2)-x^3}{h}}$
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \large \lim_{h \,\to\, 0}{\normalsize \dfrac{e^{\displaystyle \, h} \times x^3-x^3+e^{\displaystyle \, h} \times (h^3+3x^2h+3xh^2)}{h}}$
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \large \lim_{h \,\to\, 0}{\normalsize \dfrac{e^{\displaystyle \, h} \times x^3-x^3 \times 1+e^{\displaystyle \, h} \times (h^3+3x^2h+3xh^2)}{h}}$
Now, $x$ cubed is a common factor in both terms of the numerator. Hence, it can be taken out as a common factor.
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \large \lim_{h \,\to\, 0}{\normalsize \dfrac{x^3 \times (e^{\displaystyle \, h}-1)+e^{\displaystyle \, h} \times (h^3+3x^2h+3xh^2)}{h}}$
The above rational expression can be split as sum of two terms.
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \large \lim_{h \,\to\, 0}{\normalsize \bigg(\dfrac{x^3 \times (e^{\displaystyle \, h}-1)}{h}}$ $+$ $\dfrac{e^{\displaystyle \, h} \times (h^3+3x^2h+3xh^2)}{h}\bigg)$
According to the sum rule of limits, the limit of the sum of functions is equal to the sum of their limits.
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \bigg(\large \lim_{h \,\to\, 0}{\normalsize \dfrac{x^3 \times (e^{\displaystyle \, h}-1)}{h}}$ $+$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{e^{\displaystyle \, h} \times (h^3+3x^2h+3xh^2)}{h}\bigg)}$
In the second term of the second factor, $h$ is a common factor in each term of the second factor in the numerator. The expression in the denominator is also $h$ and it can cancel the common factor. Hence, take $h$ common from all terms in the numerator.
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \bigg(\large \lim_{h \,\to\, 0}{\normalsize \dfrac{x^3 \times (e^{\displaystyle \, h}-1)}{h}}$ $+$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{e^{\displaystyle \, h} \times h \times (h^2+3x^2+3xh)}{h}\bigg)}$
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \bigg(\large \lim_{h \,\to\, 0}{\normalsize \bigg(x^3 \times \dfrac{(e^{\displaystyle \, h}-1)}{h}\bigg)}$ $+$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{e^{\displaystyle \, h} \times \cancel{h} \times (h^2+3x^2+3xh)}{\cancel{h}}\bigg)}$
In the first term of the second factor, $x$ cubed is a constant and it can be released from the limit operation by the constant multiple rule of the limits.
$\therefore\,\,\,$ $\dfrac{d}{dx}{\,\Big(e^{\displaystyle \, x} x^3\Big)}$ $\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \bigg(x^3 \times \large \lim_{h \,\to\, 0}{\normalsize \dfrac{e^{\displaystyle \, h}-1}{h}}$ $+$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \Big(e^{\displaystyle \, h} \times (h^2+3x^2+3xh)\Big)\bigg)}$
Evaluate the limit of each function
The simplification process is completed and it is the right time to evaluate the limit of each function.
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \bigg(x^3 \times \large \lim_{h \,\to\, 0}{\normalsize \dfrac{e^{\displaystyle \, h}-1}{h}}$ $+$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \Big(e^{\displaystyle \, h} \times (h^2+3x^2+3xh)\Big)\bigg)}$
In the first term, the limit of $\dfrac{e^{\displaystyle \, h}-1}{h}$ is equal to one, as per the natural exponential limit rule, and the limit of the second term can be evaluated by direct substitution.
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \bigg(x^3 \times 1$ $+$ $e^{\displaystyle \, 0} \times \Big(0^2+3x^2+3x(0)\Big)\bigg)$
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \bigg(x^3$ $+$ $1 \times \Big(0+3x^2+0\Big)\bigg)$
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \big(x^3+1 \times 3x^2\big)$
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \big(x^3+3x^2\big)$
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \big(x^{2+1}+3 \times x^2\big)$
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \big(x^2 \times x^1+3 \times x^2\big)$
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times \big(x^2 \times x+3 \times x^2\big)$
$\,\,=\,$ $\displaystyle e^{\displaystyle \, x} \times x^2 \times (x+3)$
$\,\,\,\therefore\,\,\,\,\,\,$ $\dfrac{dy}{dx}$ $\,=\,$ $\displaystyle e^{\displaystyle \, x} x^2 (x+3)$
Bulletin of the London Mathematical Society (2)
London Mathematical Society Lecture Note Series (4)
Edited by Philip J. Rippon, The Open University, Milton Keynes, Gwyneth M. Stallard, The Open University, Milton Keynes
Book: Transcendental Dynamics and Complex Analysis
Published online: 06 July 2010
Print publication: 26 June 2008, pp ix-xx
Transcendental Dynamics and Complex Analysis
Edited by Philip J. Rippon, Gwyneth M. Stallard
Print publication: 26 June 2008
Buy the print book
After the pioneering work on complex dynamics by Fatou and Julia in the early 20th century, Noel Baker went on to lay the foundations of transcendental complex dynamics. As one of the leading exponents of transcendental dynamics, he showed how developments in complex analysis such as Nevanlinna theory could be applied. His work has inspired many others to take up this increasingly active subject, and will continue to do so. Presenting papers by researchers in transcendental dynamics and complex analysis, this book is written in honour of Noel Baker. The papers describe the state of the art in this subject, with new results on completely invariant domains, wandering domains, the exponential parameter space, and normal families. The inclusion of comprehensive survey articles on dimensions of Julia sets, buried components of Julia sets, Baker domains, Fatou components of functions of small growth, and ergodic theory of transcendental meromorphic functions means this is essential reading for students and researchers in complex dynamics and complex analysis.
Print publication: 26 June 2008, pp v-vi
Print publication: 26 June 2008, pp i-iv
EXTENSIONS OF A THEOREM OF VALIRON
KARL F. BARTH, PHILIP J. RIPPON
Journal: Bulletin of the London Mathematical Society / Volume 38 / Issue 5 / October 2006
Published online by Cambridge University Press: 20 September 2006, pp. 815-824
Print publication: October 2006
A classical theorem of Valiron states that a function which is holomorphic in the unit disk, unbounded, and bounded on a spiral that accumulates at all points of the unit circle, has asymptotic value $\infty$. This property, and various other properties of such functions, are shown to hold for more general classes of functions which are bounded on a subset of the disk that has a suitably large set of nontangential limit points on the unit circle.
ASYMPTOTIC TRACTS OF LOCALLY UNIVALENT FUNCTIONS
Journal: Bulletin of the London Mathematical Society / Volume 34 / Issue 2 / March 2002
Several results are proved, related to an old problem posed by G. R. MacLane, namely whether functions in the class $\mathcal{A}$ that are locally univalent can have arc tracts. In particular, a proof is given of an assertion of MacLane that if $f \in \mathcal{A}$ is locally univalent and has no arc tracts, then $f' \in \mathcal{A}$.
Completing the Square
Contents: 1 Completing the Square; 2 Worked Examples; 3 Video Examples (3.1 Example 1, 3.2 Example 2); 4 Maxima and Minima of Quadratics (4.1 Finding the Maximum and Minimum Values, 4.2 Worked Examples); 5 Test Yourself; 6 External Resources
A quadratic in the variable $x$ is an expression of the form $ax^2+bx+c$, where $a$, $b$ and $c$ are constant numbers.
Such a quadratic expression is called a complete square if it takes the form: \begin{align} x^2 +2ax +a^2 &= (x+a)(x+a)\\ &= (x+a)^2 \end{align}
Of course, most quadratic expressions are not complete squares, but we will now see how to write such an expression in a form involving a complete square. The method is called completing the square.
The simplest case is when the coefficient of $x^2$ is $1$, i.e. the expression is of the form $x^2+2ax+b$. In this case, the completed square form is \[(x+a)^2 + (b-a^2)\]
We can show that this is equivalent to the original expression by expanding the brackets
\begin{align} (x+a)^2 + (b-a^2) &= x^2 + 2ax + a^2 + b - a^2 \\ &= x^2 + 2ax + b \end{align}
In short, the method is: halve the coefficient of $x$ to get $a$; gather it into a pair of brackets $(x+a)^2$; and then subtract $a^2$ from $b$, leaving you with $(x+a)^2 + (b-a^2)$.
If the coefficient of $x^2$ is some constant $a \neq 1$, then take a factor of $a$ out from each term and proceed as above. \[ax^2 + bx + c = a \left( x^2 + \frac{b}{a}x + \frac{c}{a} \right)\]
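Carrying this step through in general (the algebra is not written out above, but follows by the same method) gives

\[ax^2 + bx + c = a\left(x+\frac{b}{2a}\right)^2 + c - \frac{b^2}{4a}\]

which reduces to the earlier form when $a=1$.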
Worked Examples

Express $x^2+2x+5$ in completed square form.

The coefficient of $x$ is $2$. The constant term in the completed square part is half of that, i.e. $1$. Hence, the completed square form of the expression is
\[(x+1)^2 - 1^2 + 5 = (x+1)^2 + 4\]
Express $3x^2-12x+4$ in completed square form.
This time the coefficient of $x^2$ is not $1$. Start by factoring $3$ out of the expression.
\[3x^2-12x+4 = 3\left(x^2 -4x + \frac{4}{3}\right)\]
Now, proceed as above.
\begin{align} 3\left(x^2 -4x +\dfrac{4}{3}\right) &= 3\left((x-2)^2-2^2+\frac{4}{3}\right) \\ &= 3 \left( (x-2)^2 - \frac{8}{3} \right) \\ &=3(x-2)^2-8 \end{align}
Prof. Robin Johnson writes $x^2+2x-3$ in completed square form, then shows how this can then be written as a product of two linear factors.
Prof. Robin Johnson solves the quadratic equation $2x^2-3x-4=0$ by completing the square.
Maxima and Minima of Quadratics
A quadratic curve has either a maximum or a minimum point, depending on the coefficient of the $x^2$ term.
If the coefficient of the $x^2$ term is positive, the curve has a single minimum point and looks like this:
Conversely if the coefficient of the $x^2$ term is negative, the curve has a single maximum point and looks like this:
Finding the Maximum and Minimum Values
It's possible to find the minimum or maximum of a quadratic curve without differentiating by instead finding the completed square form of the curve's equation.
The general completed square form of a quadratic equation is $q(x) = a(x+r)^2 + s$. Note that $(x+r)^2$ is never negative, so $q(x)$ takes its minimum (or maximum, if $a \lt 0$) value when $x=-r$, and that value is $s$.
Given the quadratic $q(x)=3-4x-x^2$, find its maximum value and the value of $x$ at which this maximum is attained.
On completing the square, we have $q(x) = -(x+2)^2 + 7$ .
Hence we see that the maximum value is $7$ and this occurs at $x=-2$.
Given the quadratic $x^2-6x+7$, find its minimum value and the value of $x$ at which this minimum is attained.
On completing the square, we have $q(x) = (x-3)^2-2$.
Hence we see that the minimum value is $-2$ and this occurs at $x=3$.
Test yourself: Numbas test on completing the square
Completing the square workbook at mathcentre.
Completing the square - maxima and minima workbook at mathcentre.
Network topology dynamics of circulating biomarkers and cognitive performance in older Cytomegalovirus-seropositive or -seronegative men and women
Svetlana Di Benedetto1,2,
Ludmila Müller1,
Stefanie Rauskolb3,
Michael Sendtner3,
Timo Deutschbein4,
Graham Pawelec2 &
Viktor Müller1
Cytokines are signaling molecules operating within complex cascade patterns and having exceptional modulatory functions. They impact various physiological processes such as neuroendocrine and metabolic interactions, neurotrophins' metabolism, neuroplasticity, and may affect behavior and cognition. In our previous study, we found that sex and Cytomegalovirus (CMV)-serostatus may modulate levels of circulating pro- and anti-inflammatory cytokines, metabolic factors, immune cells, and cognitive performance, as well as associations between them.
In the present study, we used a graph-theoretical approach to investigate the network topology dynamics of 22 circulating biomarkers and 11 measures of cognitive performance in 161 older participants recruited to undergo a six-month training intervention. For network construction, we applied the coefficient of determination (R2), calculated for all possible pairs of variables (N = 33) in four groups (CMV− men and women; CMV+ men and women). Network topology has been evaluated by clustering coefficient (CC) and characteristic path length (CPL) as well as local (Elocal) and global (Eglobal) efficiency, showing the degree of network segregation (CC and Elocal) and integration (CPL and Eglobal). We found that the networks under consideration showed small-world network properties with more random characteristics. Mean CC, as well as local and global efficiency, were highest and CPL shortest in CMV− males (having lowest inflammatory status and highest cognitive performance). CMV− and CMV+ females did not show any significant differences. Modularity analyses showed that the networks exhibited in all cases a highly differentiated modular organization (with Q-values ranging between 0.397 and 0.453).
In this work, we found that segregation and integration properties of the network were notably stronger in the group with balanced inflammatory status. We were also able to confirm our previous findings that CMV-infection and sex modulate multiple circulating biomarkers and cognitive performance and that balanced inflammatory and metabolic status in elderly contributes to better cognitive functioning. Thus, network analyses provide a useful strategy for visualization and quantitative description of multiple interactions between various circulating pro- and anti-inflammatory biomarkers, hormones, neurotrophic and metabolic factors, immune cells, and measures of cognitive performance and can be in general applied for analyzing interactions between different physiological systems.
Aging is accompanied by chronic low-grade inflammation that has been repeatedly identified even in overtly healthy individuals and is characterized by elevated levels of circulating pro-inflammatory cytokines [1]. Cytokines represent signaling molecules having exceptional modulatory functions. They impact virtually every physiological process such as neurotransmitter metabolism, neuroendocrine interactions, and neuroplasticity, thereby not only affecting general health but also immunity and cognitive functioning [2,3,4]. The cytokine network, containing cytokines, their receptors, and their regulators, is present in the brain and in various other physiological systems, and is highly controlled throughout the lifespan [5, 6]. Cytokines and their receptors operate within multifactorial networks and may act synergistically or antagonistically in a time- and concentration-dependent patterns. These interactions allow cross-communication between different cell types, at different hierarchical levels, translating environmental signals into molecular signals [2, 7]. The pro-inflammatory profile becomes strategic throughout the lifespan [8,9,10,11] - an increase of cytokine secretion, also thought to be associated with the influence of CMV-infection, may be at least partly responsible for age-associated degenerative disorders [12,13,14,15,16]. Previous studies usually investigated individual roles of different cytokines, inflammatory mediators or metabolic factors in the age-related physiological alterations [17,18,19,20,21]. With growing numbers of biomarkers, however, it may become difficult to interpret results and translate them into useful information.
In our recent work [22], we assessed inflammatory status and cognitive performance in 161 older participants recruited to undergo a six-month training intervention. We demonstrated that sex and CMV-latency have influence on levels of circulating pro- and anti-inflammatory cytokines, receptor antagonist, soluble receptor, metabolic factors, and immune cells. We also found that CMV-latency has modulatory effects on associations between individual peripheral biomarkers [22]. Furthermore, we revealed an interaction between CMV-serostatus and sex associations with cognitive abilities: sex differences in fluid intelligence and working memory were noted only in CMV-negative individuals. Even more strikingly, the same group of elderly men also exhibited a lower inflammatory status in their peripheral circulation. Therefore, a well-balanced inflammatory and anti-inflammatory equilibrium appeared apparently to be decisive for optimal physiological functions and for optimal cognitive functioning.
Pro-inflammatory cytokines often act as negative regulatory signals modulating the action of hormones and neurotrophic factors. An unbalanced cytokine state may also affect the neuroendocrine system (and vice versa) impairing interplay between them, and contributing to disrupted homeostasis [23]. Therefore, in the present study, we additionally considered such hormones as cortisol and dehydroepiandrosterone (DHEA) as well as neurotrophines and their regulators (insulin-like growth factor-1, IGF-1, and IGF-binding protein, IGFBP-3), to gain a more comprehensive image of these processes. Furthermore, we extended the number of inflammation-related metabolic factors and included measures of C-reactive protein (CRP) in our present analyses. Finally, instead of focusing on four latent factors representing the main cognitive abilities (as we did in the previous study), we included in our present analysis all 11 individual cognitive performance scores assessed within the cognitive battery of elderly individuals. Increasing complexity arose when attempting to analyze dynamic interconnections between all these factors and to investigate the modulatory impact of CMV-latency and sexual dimorphism. In an effort to better understand the relationships between the multiple circulating and functional biomarkers and to compare them regardless of their physiological hierarchical assignments, we applied a graph-theoretical approach and described constructed networks in terms of network topology and modular organization of network elements.
As stated by Bhavnani et al., network analyses offer two main advantages for studying complex physiological interactions: (i) they do not require a priori assumptions about the relationships of nodes within the data, such as the categorical assumptions underlying hierarchical clustering; and (ii) they allow the simultaneous visualization of several raw values (such as cytokine and/or cell values and functional attributes), as well as aggregated values and clusters, in a uniform visual representation [24]. This allows not only the more rapid generation of hypotheses based on complicated multivariate interactions, but also the validation, visualization, and confirmation of results obtained with other methodological approaches. Moreover, this enables a more informed methodology for selecting quantitative methods to compare the patterns obtained in different sets of data regardless of their physiological hierarchical levels [24].
The purpose of the present study was to visualize and quantitatively describe, by means of a graph-theoretical approach, the complex multiple interactions among diverse pro- and anti-inflammatory mediators, immune cell populations, hormones, neurotrophic and metabolic factors, as well as cognitive performance in older CMV-seropositive and -seronegative men and women. Moreover, we aimed to design a new strategy for the quantitative investigation of network topology dynamics in circulating biomarkers and measures of cognitive performance by applying the coefficient of determination (R²) calculated for all possible pairs of variables in four groups of participants. In order to characterize the segregation and integration properties of the individual networks of CMV-positive or -negative men and women, we analyzed such network topology measures as the clustering coefficient, characteristic path length, and local and global efficiency [25, 26]. With the aim of statistically comparing the network topology dynamics and identifying the networks with optimal features of segregation and integration, we applied a rewiring procedure. To the best of our knowledge, simultaneous network analyses of multiple inflammation-related peripheral biomarkers and cognitive performance of older Cytomegalovirus-seropositive and -seronegative men and women have not been previously accomplished.
For network analyses, the participants were separated into four groups according to their CMV-serostatus and sex (Fig. 1). For network construction, we applied the coefficient of determination (R²), calculated for all possible pairs of variables in the four groups (CMV− men and women; CMV+ men and women). Network topology was evaluated using the clustering coefficient (CC) and characteristic path length (CPL) as well as local (Elocal) and global (Eglobal) efficiency (for details see Methods section).
A schematic illustration of the study setup. Modified from [22]. CMV, Cytomegalovirus
Network composition and network topologies in real and control networks
Before analyzing network topology changes, we compared the topology in real and control (i.e., lattice and random) networks under different cost levels (the ratio of the number of actual connections to the maximum possible number of connections in the network) in the range between 10 and 60% of wiring costs. As shown in Additional file 1: Figure 1A, CC is greatest in lattice networks and lowest in random networks, whereas CC for the real networks lies in-between. CPL is shortest in random and longest in lattice networks, while the real networks are between these (see Additional file 1: Figure 1B). Correspondingly, Elocal was highest in lattice networks (at least for cost levels under 45%) and lowest in random networks (at least for cost levels under 20%), while Eglobal was highest in random and lowest in lattice networks essentially for all levels of wiring costs, with real networks always in between (see Additional file 1: Figure 2 for details).
Importantly, as shown in Fig. 2, the networks under consideration are Small-World Networks (SWNs) at all levels of wiring costs (σ > 1). As indicated by the second SW coefficient ω, which lies in the positive range at practically all levels of wiring costs (see Fig. 2b), these networks are SWNs with more random characteristics. It can also be seen that networks with costs lower than 25% showed rather unstable behavior, which stabilized at the 25% cost level and yielded very similar results across all experimental groups for both SW coefficients σ and ω. Thus, for our main analyses, we set the cost level to 25%, which makes it possible to investigate a sparse and at the same time stable network topology in all four groups of participants.
Small-world coefficients sigma (σ) and omega (ω) under different levels of the wiring costs. CMV, Cytomegalovirus; CMV− m, CMV-seronegative men; CMV+ m, CMV-seropositive men; CMV− f, CMV-seronegative women; CMV+ f, CMV-seropositive women
Network structure and network strengths
It can be seen that connectivity matrices (Fig. 3a) display a group-specific structure in all four participant groups. In the first step, we calculated network strengths as the sum of connections of node i (see also Methods section for more details). As shown in Fig. 3a, b, cognitive nodes exhibit high strengths, which are mostly due to the strong connections between the cognitive nodes themselves, especially in the female groups. In the male groups, the cognitive nodes are also strongly connected to the other systems such as cytokines (especially, in the network of CMV− males), metabolic variables (particularly, in the network of the CMV+ males) and immune cells.
Connectivity structure of the network and network strengths in the four groups. a Connectivity matrices. b Network strengths. CMV, Cytomegalovirus; CMV− m, CMV-seronegative men; CMV+ m, CMV-seropositive men; CMV− f, CMV-seronegative women; CMV+ f, CMV-seropositive women; IL, interleukin; IL-1β, interleukin 1 beta; TNF, tumor necrosis factor; CRP, C-reactive protein; IL-1RA, interleukin 1 receptor antagonist; sTNF-R, soluble tumor necrosis factor receptor; CHOL, cholesterol; HDL, high-density lipoprotein; LDL, low-density lipoprotein; TRIG, triglyceride; CREA, creatinine; DHEA, dehydroepiandrosterone; IGF-1, insulin-like growth factor-1; IGFBP-3, IGF-binding protein 3; Gf, fluid intelligence; EM, episodic memory; WM, working memory; Speed, perceptual speed
Networks of CMV− and CMV+ men and women differ in their structure
Networks of the four experimental groups also display group-specific structure (Fig. 4). Individual nodes (or variables) are represented as colored circles, with the color coding for membership of a particular group of variables. The size of each circle depends on the sum of its connections and indicates the node's strength. The thickness of the connections corresponds to their connection strength. The nodes are numbered clockwise beginning from the pro-inflammatory cytokine IL-1β displayed in blue. The CMV-negative male group (top, left) is characterized by multiple strong connections between pro-inflammatory cytokine nodes (IL-1β, TNF, IL-18) and cognitive nodes (episodic memory and fluid intelligence).
Network structure differences in CMV− and CMV+ men and women. CMV, Cytomegalovirus; CMV− m, CMV-seronegative men; CMV+ m, CMV-seropositive men; CMV− f, CMV-seronegative women; CMV+ f, CMV-seropositive women; IL, interleukin; IL-1β, interleukin 1 beta; TNF, tumor necrosis factor; CRP, C-reactive protein; IL-1RA, interleukin 1 receptor antagonist; sTNF-R, soluble tumor necrosis factor receptor; CHOL, cholesterol; HDL, high-density lipoprotein; LDL, low-density lipoprotein; TRIG, triglyceride; CREA, creatinine; DHEA, dehydroepiandrosterone; IGF-1, insulin-like growth factor-1; IGFBP-3, IGF-binding protein 3; Gf, fluid intelligence; EM, episodic memory; WM, working memory; Speed, perceptual speed
Less strong but numerous connections are also present between anti-inflammatory cytokines and the cognitive nodes. Interestingly, this is the only group in which pro- and anti-inflammatory cytokines show no direct connections to each other. The nodes of perceptual speed are strongly connected with immune cell nodes (lymphocytes and neutrophils). No other group of participants displays such strong direct connections between immune biomarkers and cognition – except the network of CMV+ men (bottom, left), with only one strong connection between CRP and fluid intelligence. The network of the CMV+ men shows strong connections between metabolic factors and perceptual speed. The network of CMV− women (top, right) displays strong connections between pro-inflammatory IL-6 and triglycerides as well as between anti-inflammatory sTNF-R and creatinine. The network of the CMV+ women (bottom right) shows a strong connection between leukocytes and pro-inflammatory IL-6. Unexpectedly, the neurotrophins in the CMV− men have relatively strong connections to urea, but only one weak connection to the pro-inflammatory factor CRP. In contrast, in all three other networks the neurotrophins display multiple connections to both pro- and anti-inflammatory cytokines. Concerning connections between neurotrophins and cognitive nodes, we see a rather heterogeneous picture: some connections in CMV-seronegative and -positive men, and only one connection in the CMV-seronegative and -positive women. In general, the networks of all groups of participants show strong (but differently manifested) connections between the cognitive nodes themselves (Fig. 4).
Network topology differences between CMV− and CMV+ men and women
To be able to statistically compare the four different networks at a given cost level, we used a rewiring procedure in which a non-existing edge is exchanged with an existing one and the network topology metrics are determined after each exchange. In total, there were about 50,000 rewired networks, for which the mean and standard deviation (SD) of the network topology metrics were determined. In accordance with the empirical rule, we obtained a 99.7% confidence interval (CI) for the mean: CI = mean ± 3 × SD. As shown in Fig. 5a, the mean CC was highest and the CPL shortest in CMV− males and, overall, higher (shorter) in males than in females. Correspondingly, local and global efficiency were both highest in CMV− males and, overall, higher in males than in females. CMV-seronegative and -seropositive females did not show any significant differences. This indicates that the segregation and integration properties of the network were notably stronger in males (especially CMV− males) than in females. Inspection of separate nodes in the networks showed that these network topology differences were particularly pronounced for cytokines and cognitive variables or nodes (Fig. 5b).
Network topology differences. a Results of rewiring analyses for whole network. b Results of rewiring analyses for individual nodes. CC, clustering coefficient; CPL, characteristic path length; Elocal, local efficiency; Eglobal, global efficiency; CMV, Cytomegalovirus; CMV-, CMV-seronegative; CMV+, CMV-seropositive; m, male; f, female; NEG, CMV-seronegative; POS, CMV-seropositive
Modular organization of the networks of CMV− and CMV+ men and women
Modularity analyses showed that the networks under consideration exhibited a highly differentiated modular organization in all cases, with 4 modules for males and 5 modules for females. This is indicated by high modularity values or Q statistics (Fig. 6), which ranged between 0.397 and 0.453 and were considerably higher than those of random networks (with Q-values close to 0). Nodes sharing the same module are displayed in Fig. 6b and d in the same color. As shown in Fig. 6a and c, cognitive nodes occupied two modules in all networks (with the exception of CMV+ females, in which all cognitive nodes were located in one large module), whereby the perceptual speed nodes occupied a separate module. Moreover, the community structure in CMV-negative males was organized in 4 modules (A-B, left), whereby all pro-inflammatory cytokines shared the same module (B, blue) with cognitive nodes (reflecting general intelligence and memory features). In addition, two of the three anti-inflammatory cytokines (namely, IL-10 and sTNF-R) shared the same module (B, left, red) with the metabolic factors as well as with monocytes, with the exception of urea, which was located in a separate module (B, yellow) together with hormones and neurotrophins. Finally, the perceptual speed nodes shared a common module (B, left, green) with IL-1RA and immune cells (namely, leukocytes, lymphocytes, and neutrophils). Interestingly, in CMV− females (A-B, right), the two modules occupied by cognitive (B, right, blue) and perceptual speed nodes (B, right, cyan) were separated from all the other nodes, which were partitioned into heterogeneous modules comprising different components (e.g., cytokines, metabolic variables, immune cells, and neurotrophins). The nodes of CMV+ men (C-D, left) and CMV+ women (C-D, right), also partitioned into 4 and 5 modules, respectively, showed heterogeneous modularity structures comprising nodes of both peripheral biomarkers and cognitive features.
Modular organization of the networks. a Modular assignment of nodes in CMV− men (left) and women (right). b Modular structure in CMV− men (left) and women (right). c Modular assignment of nodes in CMV+ men (left) and women (right). d Modular structure in CMV+ men (left) and women (right). Note that nodes sharing the same module are displayed in the same color. CMV, Cytomegalovirus; CMV− m, CMV-seronegative men; CMV+ m, CMV-seropositive men; CMV− f, CMV-seronegative women; CMV+ f, CMV-seropositive women; Q, modularity value
Z-P parameter space and nodes' specificity of the four networks
To define how the network nodes were positioned within their own module and with respect to other modules, we calculated the within-module degree (Zi) and participation coefficient (Pi) of node i for the given networks. The within-module degree indicates how 'well-connected' node i is to other nodes in the module, whereas the participation coefficient reflects how 'well-distributed' the edges of node i are among the other modules. Together, Zi and Pi form the so-called Z-P parameter space, with different regions of this parameter space indicating specific roles of the nodes (e.g., hubs, connectors, provincial nodes) [27]. As shown in Fig. 7a, the network of the CMV− males contains more hub nodes but far fewer connector nodes than the other three groups. This indicates that the modules in this participant group are more autonomous and that the information flow between the modules is either reduced or realized through a small number of connector nodes. Interestingly, three of the four hubs are cognitive variables and the fourth one is IGFBP-3. Thus, cognitive nodes such as fluid intelligence, working memory, and perceptual speed play a central role in the network of CMV− males, driving or controlling the connections within the corresponding modules. Further, the networks of CMV− females (B) and CMV+ males (C) are characterized by high numbers of non-hub connectors responsible for the connectivity between the modules. Thus, the modules in these two groups are apparently less well separated from each other than, for example, in the CMV− males. The network of the CMV+ females (D) contains two hubs and eight non-hub connectors, and thus demonstrates a modular structure with a moderate number of hubs and connectors. Note also that all cognitive nodes in this group are provincial nodes and therefore play a secondary role in the network. In summary, it can be stated that the networks under consideration exhibit a different balance between intra- and inter-modular information flow, with different numbers of hub and connector nodes playing a significant role for this balance and for network functioning. Which of these types of modular organization is more effective remains to be investigated.
Z-P parameter space and nodes' specificity for the networks of the four groups. a Z-P parameter space for CMV-seronegative men. b Z-P parameter space for CMV-seronegative women. c Z-P parameter space for CMV-seropositive men. d Z-P parameter space for CMV-seropositive women. Different regions separated by dotted lines contain: left – ultra-peripheral nodes; central – provincial nodes; top – hubs; top right – connector hubs; right – connectors. CMV, Cytomegalovirus; CMV− m, CMV-seronegative men; CMV+ m, CMV-seropositive men; CMV− f, CMV-seronegative women; CMV+ f, CMV-seropositive women
There is a growing body of evidence supporting the notion that the immune system is not hermetically self-regulated but functions in intimate interrelations with other physiological systems, including the nervous system [5, 28]. These interactions are present at the various levels of organization – at the local, as well as at the whole organism level – by sharing a common language of a wide range of cytokines, receptor molecules, hormones, neuropeptides, metabolic and neurotrophic factors allowing cross-communication [29, 30]. Particularly in the process of aging, this reciprocal cross-talk may under certain circumstances permit augmentation of maladaptive inflammatory loops, which could disturb homeostasis and contribute to the age-related functional alterations or even to pathological conditions [2, 31,32,33].
Several analytical techniques to investigate these interactions have been established so far, but our understanding of the interplay between different factors in such interrelated processes is still in its infancy. Despite some progress, there is a further need to place the data from different physiological and functional levels in a biological context with the aim of interpreting their multifaceted orchestration as a whole. Many studies highlight the role of different inflammatory cytokines in the low-grade inflammation, dubbed "inflammaging", and the importance of pro-inflammatory and anti-inflammatory homeostasis for cognitive health in aging [17, 18, 34,35,36]. Additionally, the interrelated effects of inflammatory factors and their influence on neuroimmune and neuroendocrine functions can be modified by the chronic immune activity required to control lifelong persistent CMV infection [2, 37]. In the present work, we propose a strategy for quantitative description of multiple interactions between different cytokines, receptor molecules, metabolic and neurotrophic factors, hormones, immune cells, and measures of cognitive performance with the help of a graph-theoretical approach. To the best of our knowledge, simultaneous network analyses of multiple inflammation-related mediators and cognitive performance in older CMV-seropositive and CMV-seronegative men and women have not been previously accomplished.
Aging is associated with modulatory effects on the immune system – resulting in the universal, multifactorial changes, known as immunosenescence. This leads to functional changes in the immune cells, which produce more inflammatory cytokines and less anti-inflammatory mediators. CMV-persistence is associated with constant chronic stimulation of the immune system that could further contribute to induction and accumulation of the specific immune cell phenotypes known to be generally associated with immunosenescence. The fact that CMV has considerable influence on immunosenescence was first described 20 years ago [38] and has continuously been supported by numerous studies since then [15, 16, 39,40,41,42,43,44]. In the large-scale immune profiling and functional analysis of normal aging, it was impressively shown that the immune system alterations (determined as a number of significantly affected analytes) caused specifically by CMV were comparable to the differences seen between the sexes [45]. A lifelong persistent infection influences immune aging and can significantly modify the course of cognitive aging by acting in combination with individual differences in cytokine release [37, 46,47,48]. The modulatory effect of CMV-latency and sex were also demonstrated in our previous study [22]. Therefore, for the network analyses in the present study, we separated the participants into four groups according to their CMV-serostatus and sex.
We found that the modulatory impact of CMV and sex was also reflected in the specific differences in network structure and network topology dynamics observed between the four groups. In particular, CMV− males were characterized by several strong connections between nodes of the pro-inflammatory cytokines IL-1β, TNF, and IL-18 and cognitive nodes, including variables of episodic memory and fluid intelligence. Currently available evidence shows that pro-inflammatory cytokines exert dose-dependent physiological neuroprotective effects but can also mediate pathological neurodegenerative effects under certain circumstances [18]. IL-1β and TNF were demonstrated to have such a dual function, acting on the one hand as pro-inflammatory factors and on the other as neuromodulators subserving memory and other cognitive processes. In other words, they play a role not only in neuroinflammation, but (at low concentrations) also in complex processes such as synaptic plasticity, neurogenesis, long-term potentiation, and memory consolidation [34, 35].
Less strong but numerous connections were found between the nodes of the anti-inflammatory cytokines and cognition in the network of CMV− males. This is partly in line with our previous findings on the positive association of episodic memory with the anti-inflammatory cytokine IL-10 in CMV− elderly men and women [22]. IL-10 is known to have a neuroprotective role due to its inhibitory action on inflamed microglia [17]. The same CMV− male group also has significantly elevated levels of anti-inflammatory IL-10 and sTNF-R as well as reduced levels of pro-inflammatory cytokines in their peripheral circulation, as reported in our recent study [22]. With this information in mind, we can speculate that strong connections between cognitive nodes and the nodes of the (low-level) pro-inflammatory cytokines on the one hand, and numerous connections of cognition to the nodes of the (high-level) anti-inflammatory cytokines on the other, could possibly explain the cognitive advantage in fluid intelligence and working memory found for this group of participants in our previous work [22]. Remarkably, this was the only group in which the nodes of pro- and anti-inflammatory cytokines had no direct connections to each other. The other three groups (two of which, CMV− females and CMV+ males, were characterized in our previous study by heterogeneously unbalanced levels of pro- and anti-inflammatory mediators and by an adverse metabolic environment) demonstrated, in contrast, various more or less strong connections between pro- and anti-inflammatory cytokines, which were probably important and necessary homeostatic responses to these unbalanced peripheral conditions. In our previous study, the CMV+ women (whose network here shows multiple connections between nodes of pro- and anti-inflammatory cytokines) exhibited significantly higher levels of the anti-inflammatory factors sTNF-R and IL-1RA. We also found previously that in the CMV+ group, fluid intelligence, episodic memory, and working memory were negatively associated with the anti-inflammatory factor IL-1RA, the level of which was assumed to increase simultaneously as a reaction to the elevation of pro-inflammatory cytokines in the periphery [22]. This phenomenon has also been reported by other investigators [33, 49, 50], showing that individuals with high levels of pro-inflammatory cytokines also tend to display elevated levels of anti-inflammatory factors. The network analyses in the present study allowed the visualization of these multiple and mutual connections between pro- and anti-inflammatory biomarkers, which could only be assumed in our previous work [22].
Interestingly, the network of CMV− males demonstrated some direct connections between DHEA and cognitive nodes, as well as connections of DHEA to the nodes of anti-inflammatory and metabolic factors. In the CMV+ males, in contrast, DHEA displayed multiple connections to cognitive nodes but no connections to anti-inflammatory nodes, and was connected to the pro-inflammatory cytokine IL-6. A completely different picture was seen in CMV− females, with no connections of DHEA either to pro-inflammatory cytokines or to cognition, whereas in CMV+ women DHEA had multiple connections to nodes of cytokines and cognition. It is known that inflammatory reactions are, in general, under the influence of different mechanisms including neuroendocrine interactions. Pro-inflammatory mediators and cytokines may lead to the activation of the hypothalamic-pituitary-adrenal (HPA) axis, which is in turn capable of modulating the process of inflammation [51,52,53,54,55]. DHEA and cortisol are multifunctional adrenocortical hormones with such immunomodulatory properties. They exert potent and broad influences throughout the body and brain and jointly impact a variety of processes related to metabolic, immune, and cognitive functions [52]. Being especially abundant in the brain, DHEA exerts a protective effect against the deterioration of mental functioning with aging. Interestingly, both cortisol and DHEA in the CMV− males are non-hub connectors exhibiting numerous links to diverse modules in the modular organization of the network. This indicates that these nodes play a crucial role in communication between different subsystems. Inverse correlations between DHEA concentrations and neuroinflammatory-related diseases have repeatedly been found in the elderly [52, 56,57,58]. Similar to DHEA, the cortisol nodes in our study displayed a very heterogeneous and group-specific picture concerning their connections. Whereas CMV− males showed connections from cortisol to the nodes of pro-inflammatory TNF, IGF-1, IGFBP-3, metabolic factors, and immune cells, the cortisol node of CMV− females had only one connection, to IL-18. In the CMV+ groups, men showed weak but multiple cortisol connections to cognitive nodes, neurotrophins, and pro- and anti-inflammatory factors, whereas in the network of CMV+ women cortisol was connected only to the metabolic factors. The heterogeneous picture seen in these connections may be partly due to the fact that, although the effect of cortisol has typically been shown to be immunosuppressive, at certain concentrations it can also induce a biphasic response during a later, delayed systemic inflammatory response [59] through augmentation of inflammation [53]. In other words, the regulation of inflammation by cortisol may vary from anti- to pro-inflammatory in a time- and concentration-dependent manner, which adds further complexity to interpreting the results of these already complex interactions.
Pro-inflammatory cytokines are known to be involved in dynamic interactions with the main neurotrophic factor IGF-1 and its regulator IGFBP-3, decreasing IGF-1 signaling and enhancing the production of IGFBP-3. Conversely, IGF-1 is capable of depressing pro-inflammatory cytokine signaling, both directly and by increasing anti-inflammatory IL-10 secretion [23, 60, 61]. Both IGF-1 and IGFBP-3 had relatively strong connections to metabolic nodes in the CMV− men, but only one weak connection to CRP. In contrast, all three of the other networks displayed multiple connections to both pro- and anti-inflammatory cytokines – possibly due to their involvement in the dynamic interactions aimed at balancing the pro- and anti-inflammatory equilibrium. Concerning the connections between neurotrophins and cognitive nodes, we see a relatively homogeneous picture: some connections in the networks of CMV-negative and -positive men, and only one connection in the networks of CMV-negative and -positive women. There is substantial evidence that IGF-1 deficiency represents a contributing factor to reduced cognitive abilities in aged humans [57, 62], and that supplementation with IGF-1 may reverse this deficit [60, 63,64,65,66]. Measures of circulating IGF-1, IGFBP-3, and their ratio have been proposed for monitoring aged individuals and those at risk of cognitive and functional decline [62]. Thus, we can speculate that the relatively low number of connections between neurotrophins and cognitive nodes, seen in all four networks, might be due to the overall age-related decrease of these neurotrophic factors in the peripheral circulation of elderly participants.
Our study has many strengths, including that it is one of the first studies to extensively characterize, prior to any physical, cognitive, or combined interventions, the network topology dynamics in multiple peripheral circulating biomarkers and markers of cognitive functioning. Applying a graph-theoretical approach allowed us not only to visualize biologically meaningful interconnections between nodes but also, for the first time, to compare the network topology metrics between different groups of CMV-seronegative and -positive men and women in a statistically sound manner. Inspection of separate nodes in the networks showed that these network topology differences were especially strong for cytokines and cognitive nodes. Modularity analyses showed that the networks under consideration exhibited a highly differentiated modular organization in all cases. Moreover, we found that all four networks represented so-called small-world networks (SWNs) at all levels of wiring costs and were identified as SWNs with more random characteristics. We found that the network of the CMV− males contains more hub nodes but fewer connector nodes than the other three groups. This indicates that the modules in this participant group are more autonomous and that the information flow between the modules may be realized through a small number of connector nodes. Interestingly, three of the four hubs are cognitive variables and the fourth one is IGFBP-3. Thus, cognitive nodes such as fluid intelligence, working memory, and perceptual speed play a central role in the network of CMV− males, driving or controlling the connections within the corresponding modules.
This is the first study investigating the segregation and integration properties of the individual networks of CMV-seropositive and -negative older men and women by analyzing such network topology measures as clustering coefficient, characteristic path length, local and global efficiency. Using the rewiring procedure for network analyses, we compared network topology dynamics and found that mean clustering coefficient was highest and CPL shortest in the network of the CMV− males. The same network also manifested the highest local and global efficiency, allowing it to be identified as the network with optimal features of segregation and integration. In our previous study, the same group of participants displayed the most balanced inflammatory status in their peripheral circulation (with low levels of pro-inflammatory cytokines and high levels of anti-inflammatory biomarkers) as well as significantly higher cognitive performance in working memory and fluid intelligence [22]. Further studies, however, are required to confirm these findings and to better understand such complex relationships and network topology changes between different groups of older CMV-seropositive and -negative men and women.
There are several limitations to our study that should be acknowledged. The first one has already been mentioned in our previous publication and is "related to the fact that our pre-training cohort consisted of relatively healthy, non-obese, and well-educated Berlin residents with a comparatively low seroprevalence for CMV for this age. For this reason, the generalizability of some of our findings may be limited to the Berlin healthy aging population or to a similar European population in urban areas" [22]. The next limitation concerns the fact that we were not able to disentangle the potential effect of age on the circulating biomarkers and cognitive performance, because our pre-training cohort consisted exclusively of aged participants within a rather narrow age range from 64 to 79 years. Another limitation is related to the exploratory character of our study of the network patterns and their relationships. We are well aware that our choice of variables in the present study, selected on the basis of their involvement in the known age-related functional alterations in the immune, nervous, and other central physiological systems, does not necessarily cover all potential players; we therefore need further, more extensive network analyses to obtain a more comprehensive picture of their dynamic interactions.
Network analyses applying a graph-theoretical approach provide a useful strategy for the visualization and quantitative description of multiple interactions between various circulating pro- and anti-inflammatory biomarkers, hormones, neurotrophic and metabolic factors, immune cells, and measures of cognitive performance, and can in general be applied to analyzing interactions between different physiological systems. Applying this approach, we were able to confirm our previous findings that CMV-infection and sex modulate multiple circulating biomarkers and cognitive performance, and that a balanced inflammatory and metabolic status in the elderly contributes to better cognitive performance. By analyzing the network topology dynamics of circulating biomarkers and cognitive performance in older CMV-seropositive and -seronegative men and women, we were able to show that highly integrated and segregated networks exhibit optimal neuroimmune and cognitive interactions.
The sample has already been described in [22]. It consisted of 161 older adults (Fig. 1) who had enrolled in a training study that included physical, cognitive, and combined training interventions. Male and female subjects were recruited from volunteer participant pools at the Max Planck Institute for Human Development and by advertisements in the metropolitan area of Berlin, Germany. All volunteers lived independently at home, leading an active life. Participants were healthy, right-handed adults aged 64–79 years. All volunteers completed a medical assessment prior to data collection. The medical examination was conducted at the Charité Sports Medicine, Charité Universitätsmedizin Berlin. Of the originally recruited 201 volunteers, only 179 individuals met the inclusion criteria for study participation after medical assessment. None of the participants had a history of head injuries, medical (e.g., heart attack), neurological (e.g., epilepsy), or psychiatric (e.g., depression) disorders. None of the volunteers had suffered from chronic inflammatory, autoimmune, or cancer diseases, nor had clinically evident infections. Moderately elevated and controlled blood pressure was not considered an exclusion criterion. All subjects gave informed consent to the study protocol, which was approved by the Ethics Committee of the German Society of Psychology, UL 072014.
Circulating biomarkers assessment
The assessment of circulating cytokines, receptor antagonist, soluble cytokine receptor, and CMV-serostatus has been described in detail [22]. The blood used for testing of peripheral biomarkers was collected during a medical examination in the timeframe between 11 am and 2 pm. For all analyses, the participants were separated into four groups according to their CMV-serostatus and sex (Fig. 1). The effective sample consisted of 29 CMV-negative males (mean age = 72.4, SD = 3.5, age range = 64.0–77.2), 30 CMV-negative females (mean age = 70.0, SD = 3.6, age range = 64.1–76.9), 50 CMV-positive males (mean age = 70.4, SD = 3.7, age range = 64.0–78.1), and 52 CMV-positive females (mean age = 70.2, SD = 3.6, age range = 63.9–77.1).
Cytokines TNF, IL-10, IL-6, and IL-1β
Serum levels of pro- and anti-inflammatory cytokines (TNF, IL-10, IL-6, and IL-1β) were determined using the high-sensitivity cytometric bead array (CBA) flex system (BD Biosciences, San Jose, CA, USA) that allows multiplex quantification in a single sample. All analyses were performed according to the manufacturer's instructions; to increase accuracy, an additional standard dilution was added. The fluorescence produced by CBA beads was measured on a BD FACS CANTO II Flow Cytometer and analyzed using the software FCAP Array v3 (BD Biosciences).
sTNF-R, IL-1RA, IL-18, cortisol, and DHEA levels, and CMV-serostatus
To gauge sTNF-R (80 kDa), IL-1RA, and IL-18 levels, we used the sandwich enzyme-linked immunosorbent assay (ELISA), a sensitive method allowing for the measurement of an antigen concentration in an unknown sample. All analyses were conducted according to the manufacturer's instructions. The levels of human circulating sTNF-R (80 kDa), IL-1RA, and IL-18 were determined using the Platinum ELISA kit for the quantitative detection of the three cytokines (ThermoFisher SCIENTIFIC Invitrogen, Vienna, Austria, catalog numbers: BMS211, BMS2080 and BMS267/2).
Serum levels of anti-Cytomegalovirus IgG were determined using a commercial ELISA kit (IBL International GMBH, Hamburg, Germany, catalogue number: RE57061) and according to the manufacturer's instructions. Samples were considered to give a positive signal if the absorbance value exceeded 10% over the cut-off, whereas a negative signal was absorbance lower than 10% below the cut-off.
Quantitative determination of cortisol and DHEA in the serum of participants was performed using Human Cortisol and Human DHEA (sulfate form) ELISA kits (Arigo Biolaboratories, catalog numbers: ARG81162 and ARG80837). The central mechanism of the competitive ELISA is a competitive binding process between the sample antigen and an added antigen. The amount of bound added antigen is inversely proportional to the concentration of the sample antigen. The analyses were performed according to the manufacturer's instructions.
All samples were assessed in duplicate at 450 or 450/620 nm using a Multiscan-FC Microtiter Plate Photometer. Protein concentrations were determined in relation to a four-parameter standard curve (Prism 8 GraphPad, San Diego, CA, USA) or calculated using Microsoft Excel 2011.
Levels of IGF-1 and IGFBP-3, CRP, metabolic factors, and immune cells
Serum levels of Insulin-like growth factor 1 (IGF-1) and Insulin-Like Growth Factor-Binding Protein 3 (IGFBP-3) were determined at the Endocrine Routine Laboratory (University Hospital of Würzburg). Measurement of IGF-1 (L2KIGF2) and IGFBP-3 (L2KGB2) was performed according to the manufacturer's instruction, using the Immulite 2000 system - an automated solid-phase, Electrochemiluminescence-Immunoassay (ECLIA) from Siemens Healthcare (Germany). Levels of C-reactive protein (CRP), cholesterol, LDL, HDL, triglyceride, lymphocytes, leukocytes, monocytes, and neutrophils were measured within the clinical diagnostics facility of Berlin, Labor28. Serum concentrations of cholesterols and triglyceride were measured using enzymatic colorimetric tests (Roche, Basel, Switzerland). Counts of the immune cells were determined by flow cytometry (Sysmex, Norderstedt, Germany).
Cognitive assessment was performed 3 months after blood collection, immediately before the beginning of training. Participants were invited to a baseline session that lasted about 3.5 h, in which they were tested in groups of four to six individuals. The cognitive battery included a broad range of measures of learning and memory performance, processing speed, working memory, and executive functioning. Each group session followed a standardized protocol and, after instructions, each task started with practice trials to ensure that all participants understood the task. Responses were collected via button boxes, the computer mouse, or the keyboard. A detailed description of the tasks and scores used in the present study is included in the supplementary material.
Network construction and network properties
For network construction, we used the coefficient of determination (R²), which ranges between 0 and 1 and indicates the extent to which one variable is explained by the other. The coefficient of determination was calculated between all pairs of variables (N = 33) for the four experimental groups separately. Thus, the common network in each of the groups contained 33 nodes altogether, covering all possible interactions between the variables or nodes. To be able to construct sparse networks with relatively stable network topology, we first investigated ordered (lattice) and random networks containing the same number of nodes and edges as the real network. To do so, we randomized the edges in the real network to obtain a random network. As for the lattice network, we redistributed the edges such that they were lying close to the main diagonal and in the corner opposite to the main diagonal, in increasing order of their weights. The lattice network reconstructed in such a way has the same number of nodes and edges as the initial real network but is characterized by a ring or lattice topology incorporating nearest-neighbor connectivity [67]. Random networks were constructed 100 times, and the network topology measures determined each time were averaged for further analyses. To investigate the topology of the real networks in the topology space between regular and random networks at different wiring cost levels, we constructed real and control (i.e., lattice and random) networks in the range between 10 and 60% of wiring costs (the ratio of the number of actual connections to the maximum possible number of connections in the network), with a step of 1%. We then set the cost level to 25%, which resulted in a sparse and at the same time stable network topology.
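As an illustration of this construction step, the sketch below (a minimal Python/numpy version written by us, not the authors' analysis code; the function names and the random stand-in data are assumptions) computes the R² matrix for all pairs of variables and keeps only the strongest connections up to a chosen wiring-cost level:

```python
import numpy as np

def r2_matrix(data):
    """data: participants x variables array; returns the matrix of squared
    Pearson correlations (R^2) between all pairs of variables."""
    corr = np.corrcoef(data, rowvar=False)
    r2 = corr ** 2
    np.fill_diagonal(r2, 0.0)          # no self-connections
    return r2

def threshold_to_cost(weights, cost=0.25):
    """Keep only the strongest edges so that the fraction of retained edges
    (relative to all possible edges) equals the chosen wiring cost."""
    n = weights.shape[0]
    iu = np.triu_indices(n, k=1)
    n_keep = int(round(cost * len(iu[0])))
    cutoff = np.sort(weights[iu])[::-1][n_keep - 1]
    kept = np.where(weights >= cutoff, weights, 0.0)
    np.fill_diagonal(kept, 0.0)
    return kept

# Random data standing in for a hypothetical group of 30 participants x 33 variables
rng = np.random.default_rng(0)
data = rng.normal(size=(30, 33))
W = threshold_to_cost(r2_matrix(data), cost=0.25)   # weighted adjacency at 25% cost
```

In the study, one such thresholded matrix would be built separately for each of the four participant groups; the lattice and random control networks described above reuse the same edges in a reordered configuration.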
Degrees and strengths
The degree of a node provides information about the number of links connected to that node, and the strength reflects the overall strength of that node's connections or weights. Thus, the strength could be considered as a weighted degree. Degree or strength of a node indicates the activity of that node, whereas the sum or mean of all degrees (strengths) represents the overall activity of the network. As R2 is a weighted symmetric measure, we obtained the node's strength (\( {S}_i^w \)) as the sum of weights of all connections (wij) to node i, and calculated the mean strength (S) across all nodes in the network:
$$ S=\frac{1}{N}\sum \limits_{i\in N}{S}_i^w=\frac{1}{N}\sum \limits_{i,j\in N}{w}_{ij} $$
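A minimal numerical illustration of this definition (our own sketch, not the study code; the small 4-node matrix is purely hypothetical):

```python
import numpy as np

# Hypothetical weighted adjacency matrix of a small 4-node network
W = np.array([[0.0, 0.8, 0.2, 0.0],
              [0.8, 0.0, 0.5, 0.1],
              [0.2, 0.5, 0.0, 0.4],
              [0.0, 0.1, 0.4, 0.0]])

strengths = W.sum(axis=1)          # S_i^w: sum of connection weights of node i
mean_strength = strengths.mean()   # S: mean strength across the network
```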
Clustering coefficient and characteristic path length
For an individual node i, the clustering coefficient (\( C{C}_i^w \)) is defined as the proportion of the number of existing neighbor–neighbor connections to the total number of possible connections within its neighborhood. In the case of a weighted graph, the mean CC is calculated as follows [68]:
$$ CC=\frac{1}{N}\sum \limits_{i\in N}C{C}_i^w=\frac{1}{N}\sum \limits_{i\in N}\frac{2{t}_i^w}{k_i\left({k}_i-1\right)} $$
with \( {t}_i^w=\frac{1}{2}\sum_{j,h\in N}{\left({w}_{ij}{w}_{ih}{w}_{jh}\right)}^{1/3} \) being the weighted number of closed triangles around node i (the sum, over all pairs of neighbours j and h, of the geometric mean of the weights of the triangle's edges); ki is the degree of node i, and N is the number of nodes in the network, N = 33. The CC measures the cliquishness of a typical neighborhood and is thus a measure of network segregation.
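As an illustration, a weighted clustering coefficient of the same geometric-mean type can be obtained with networkx, which rescales weights by the network maximum before averaging triangle weights; this is a sketch of an equivalent computation under our own assumptions, not the toolbox routine used in the study:

```python
import numpy as np
import networkx as nx

# Hypothetical 4-node weighted adjacency matrix
W = np.array([[0.0, 0.8, 0.2, 0.0],
              [0.8, 0.0, 0.5, 0.1],
              [0.2, 0.5, 0.0, 0.4],
              [0.0, 0.1, 0.4, 0.0]])

G = nx.from_numpy_array(W)                          # undirected weighted graph
cc_per_node = nx.clustering(G, weight='weight')     # weighted CC for each node
mean_cc = sum(cc_per_node.values()) / G.number_of_nodes()
```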
The shortest path length or distance dij between two nodes i and j is normally defined as the minimal number of edges that have to be passed to go from i to j. As our networks are weighted graphs, the weight of the links must be considered. The input matrix is then a mapping from weight to length (i.e., a weight inversion), and the distance \( {d}_{ij}^w \) is the minimal weighted distance between the nodes i and j, but not necessarily the minimal number of edges. To calculate the characteristic path length (CPL) of a network, path lengths between all possible pairs of vertices or nodes in the network were determined [69] and then averaged among nodes:
$$ CPL=\frac{1}{N}\sum \limits_{i\in N}{L^w}_i=\frac{1}{N}\sum \limits_{i\in N}\frac{\sum_{j\in N,j\ne i}{d}_{ij}^w}{N-1} $$
whereby \( {L}_i^w \) is the mean shortest path length from node i to all other nodes, and N is the total number of nodes in the network. CPL shows the degree of network integration, with a short CPL indicating higher network integration.
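The weight-to-length mapping and the resulting characteristic path length can be sketched as follows (an illustrative Python/networkx version under our own assumptions, not the analysis code of the study):

```python
import numpy as np
import networkx as nx

# Hypothetical 4-node weighted adjacency matrix
W = np.array([[0.0, 0.8, 0.2, 0.0],
              [0.8, 0.0, 0.5, 0.1],
              [0.2, 0.5, 0.0, 0.4],
              [0.0, 0.1, 0.4, 0.0]])
G = nx.from_numpy_array(W)

# Map weights to lengths by inversion: strong connections become short distances
for u, v, attr in G.edges(data=True):
    attr['length'] = 1.0 / attr['weight']

# Weighted shortest distances between all pairs of nodes (Dijkstra)
dist = dict(nx.all_pairs_dijkstra_path_length(G, weight='length'))
n = G.number_of_nodes()
L_i = [sum(d for j, d in dist[i].items() if j != i) / (n - 1) for i in G]
cpl = sum(L_i) / n                                   # characteristic path length
```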
Local and global efficiency
Local efficiency (Elocal) is similar to the CC and is calculated as the harmonic mean of neighbor-neighbor distances [70]:
$$ {E}_{local}=\frac{1}{N}\sum \limits_{i\in N}{E}_{local(i)}^w=\frac{1}{N}\sum \limits_{i\in N}\frac{1}{N_{G_i}\left({N}_{G_i}-1\right)}\sum \limits_{j,h\in {G}_i,j\ne h}\frac{1}{L_{j,h}} $$
where \( {N}_{G_i} \) is the number of nodes in the subgraph Gi, comprising all nodes that are immediate neighbours of node i (excluding node i itself), and \( {E}_{local(i)}^w \) is the local efficiency of node i, determined from the reciprocals of the shortest path lengths \( {L}_{j,h} \) between the neighbours j and h of node i. Thus, Elocal of node i is defined with respect to the subgraph comprising all of i's neighbours, after removal of node i and its incident edges [70]. Like CC, Elocal is a measure of the segregation of a network, indicating the efficiency of information transfer in the immediate neighbourhood of each node.
Global efficiency (Eglobal) is defined as the average inverse shortest path length and is calculated by the formula [70]:
$$ {E}_{global}=\frac{1}{N}\sum \limits_{i\in N}{E}_{global(i)}^w=\frac{1}{N}\sum \limits_{i\in N}\frac{\sum_{j\in N,j\ne i}{\left({d}_{ij}^w\right)}^{-1}}{N-1} $$
whereby \( {E}_{global(i)}^w \) is the nodal efficiency, \( {d}_{ij}^w \) is the minimal weighted distance between the nodes i and j, and N is the total number of nodes in the network. The nodal efficiency is in practice the normalized sum of the reciprocals of the shortest path lengths or distances from a given node to all other nodes in the network. Nodal efficiency quantifies how well a given node is integrated within the network, and global efficiency indicates how integrated the network is as a whole. Thus, like CPL, Eglobal is a measure of the integration of a network; but whereas CPL is primarily influenced by long paths, Eglobal is primarily influenced by short ones.
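Both efficiency measures can be computed from the same inverted-weight distances as above; the following sketch (our own illustrative implementation, not the study's toolbox code) evaluates nodal, global, and local efficiency for a small hypothetical network:

```python
import numpy as np
import networkx as nx

W = np.array([[0.0, 0.8, 0.2, 0.0],
              [0.8, 0.0, 0.5, 0.1],
              [0.2, 0.5, 0.0, 0.4],
              [0.0, 0.1, 0.4, 0.0]])
G = nx.from_numpy_array(W)
for u, v, attr in G.edges(data=True):
    attr['length'] = 1.0 / attr['weight']      # weight-to-length inversion

dist = dict(nx.all_pairs_dijkstra_path_length(G, weight='length'))
n = G.number_of_nodes()

def nodal_efficiency(dist_i, n_nodes):
    # normalized sum of inverse shortest distances from one node to all others
    return sum(1.0 / d for j, d in dist_i.items() if d > 0) / (n_nodes - 1)

e_global = sum(nodal_efficiency(dist[i], n) for i in G) / n

def local_efficiency(graph, i):
    # efficiency of the subgraph formed by the neighbours of node i
    sub = graph.subgraph(graph.neighbors(i))
    m = sub.number_of_nodes()
    if m < 2:
        return 0.0
    d_sub = dict(nx.all_pairs_dijkstra_path_length(sub, weight='length'))
    inv_sum = sum(1.0 / d for j in d_sub for h, d in d_sub[j].items() if d > 0)
    return inv_sum / (m * (m - 1))

e_local = sum(local_efficiency(G, i) for i in G) / n
```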
Small-Worldness (SW) coefficients
Using graph metrics determined for real and control (i.e., regular and random) networks, specific quantitative small-world metrics were obtained. The first small-world metric, the so-called small-world coefficient σ, is related to the main metrics of a random graph (CCrand and CPLrand) and is determined on the basis of two ratios γ = CCreal/CCrand and λ = CPLreal/CPLrand [71]:
$$ \sigma =\frac{\gamma }{\lambda }=\frac{C{C}_{real}/C{C}_{rand}}{CP{L}_{real}/ CP{L}_{rand}} $$
The small-world coefficient σ should be greater than 1 in the small-world networks (SWNs). The second SW metric, the so-called small-world coefficient ω, is defined by comparing the characteristic path length of the observed (real) and random networks, and comparing the clustering coefficient of the observed or real network to that of an equivalent lattice (regular) network [72]:
$$ \omega =\frac{CP{L}_{rand}}{CP{L}_{real}}-\frac{C{C}_{real}}{C{C}_{latt}} $$
This metric ranges between −1 and +1 and is close to zero for an SWN (CPLreal ≈ CPLrand and CCreal ≈ CClatt). Negative values indicate a graph with more regular properties (CPLreal >> CPLrand and CCreal ≈ CClatt), whereas positive values of ω indicate a graph with more random properties (CPLreal ≈ CPLrand and CCreal << CClatt). As suggested in [72], the metric ω has a clear advantage over σ, namely the possibility of defining how much the network of interest resembles its regular or random equivalents.
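The two coefficients are simple ratios of the metrics of the real network and its control equivalents; a minimal sketch (the numeric values below are placeholders, not values from the study) is:

```python
def sw_sigma(cc_real, cpl_real, cc_rand, cpl_rand):
    """sigma = gamma / lambda; values > 1 indicate a small-world network."""
    gamma = cc_real / cc_rand
    lam = cpl_real / cpl_rand
    return gamma / lam

def sw_omega(cpl_rand, cpl_real, cc_real, cc_latt):
    """omega near 0: small-world; > 0: more random; < 0: more regular."""
    return cpl_rand / cpl_real - cc_real / cc_latt

# Placeholder metric values for a real network and its random/lattice controls
sigma = sw_sigma(cc_real=0.45, cpl_real=2.1, cc_rand=0.30, cpl_rand=1.9)
omega = sw_omega(cpl_rand=1.9, cpl_real=2.1, cc_real=0.45, cc_latt=0.60)
```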
Modularity analyses and Z-P parameter space
To investigate the modular organization of the network and the individual role of each node in the emerging modularity or community structure, we partitioned the networks into modules by applying a modularity optimization algorithm and determined the indices of modularity (Q), within-module degree (Zi), and participation coefficient (Pi) using the Brain Connectivity Toolbox [73]. The optimal community structure is a subdivision of the network into non-overlapping groups of nodes in a way that maximizes the number of within-module edges and minimizes the number of between-module edges. Q is a statistic that quantifies the degree to which the network may be subdivided into such clearly delineated groups or modules. For weighted networks, it is given by the formula [74]:
$$ {Q}^w=\frac{1}{l^w}\sum \limits_{i,j\in N}\left[{w}_{ij}-\frac{k_i^w{k}_j^w}{l^w}\right]\cdot {\delta}_{m_i,{m}_j} $$
where lw is the sum of all connection weights in the network, N is the total number of nodes in the network, wij are the connection weights, \( {k}_i^w \) and \( {k}_j^w \) are the weighted degrees or strengths of the nodes, and \( {\delta}_{m_i,{m}_j} \) is the Kronecker delta, with \( {\delta}_{m_i,{m}_j} \) = 1 if mi = mj, and 0 otherwise. High modularity values indicate a strong separation of the nodes into modules. Qw is zero if nodes are placed at random into modules or if all nodes are in the same cluster. To test the modularity of the empirically observed networks, we compared them to the modularity distribution (N = 100) of random networks constructed as described above [75].
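In the study the partition was obtained with the Brain Connectivity Toolbox; purely as an illustration, an analogous weighted partition and Q value can be computed in Python with networkx's greedy modularity optimization (a stand-in for, not a reproduction of, the toolbox routine):

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Hypothetical 4-node weighted adjacency matrix
W = np.array([[0.0, 0.8, 0.2, 0.0],
              [0.8, 0.0, 0.5, 0.1],
              [0.2, 0.5, 0.0, 0.4],
              [0.0, 0.1, 0.4, 0.0]])
G = nx.from_numpy_array(W)

communities = greedy_modularity_communities(G, weight='weight')   # list of node sets (modules)
Q = modularity(G, communities, weight='weight')                   # weighted modularity value
```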
The within-module degree Zi indicates how well node i is connected to other nodes within the module mi. As shown in Guimerà and Amaral [27], it is determined by:
$$ {Z}_i=\frac{k_i\left({m}_i\right)-\overline{k}\left({m}_i\right)}{\sigma^{k\left({m}_i\right)}}, $$
where ki(mi) is the within-module degree of node i (the number of links between i and all other nodes in mi), and \( \overline{k}\left({m}_i\right) \) and \( {\sigma}^{k\left({m}_i\right)} \) are the mean and standard deviation of the within-module degree distribution of mi.
The participation coefficient Pi describes how well the nodal connections are distributed across different modules [27]:
$$ {P}_i=1-\sum \limits_{m\in M}{\left(\frac{k_i(m)}{k_i}\right)}^2, $$
where M is the set of modules, ki(m) is the number of links between node i and the nodes in module m, and ki is the total degree of node i in the network. Correspondingly, Pi of a node i is close to 1 if its links are uniformly distributed among all the modules, and is zero if all of its links lie within its own module. Zi and Pi values together form the so-called Z-P parameter space and are characteristic of the different roles of the nodes in the network [27]. These roles in the Z-P parameter space can be defined as follows: ultra-peripheral nodes (Pi < 0.05), provincial nodes (low Zi and Pi values), connector nodes (low Zi and high Pi values), hub nodes (high Zi and low Pi values), and hub connector nodes (high Zi and Pi values). In this context, hubs are responsible for intra-modular connectivity and contain multiple connections within a module, while connector nodes maintain inter-modular connectivity and are responsible for links between the modules.
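Given a module assignment, both quantities can be computed directly from the weight matrix; the sketch below is our own illustrative version (the node and module labels, and the strength-based variant of the degrees, are assumptions, not the toolbox implementation used in the study):

```python
import numpy as np

def z_and_p(W, modules):
    """Within-module degree z-score (Z_i) and participation coefficient (P_i).
    W: weighted adjacency matrix; modules: array of module labels per node."""
    n = W.shape[0]
    k = W.sum(axis=1)                               # total strength of each node
    Z = np.zeros(n)
    P = np.ones(n)
    for m in np.unique(modules):
        idx = np.where(modules == m)[0]
        k_within = W[np.ix_(idx, idx)].sum(axis=1)  # links of module members to their own module
        sd = k_within.std()
        Z[idx] = (k_within - k_within.mean()) / sd if sd > 0 else 0.0
        k_to_m = W[:, idx].sum(axis=1)              # links of every node to module m
        P -= (k_to_m / k) ** 2                      # P_i = 1 - sum_m (k_i(m)/k_i)^2
    return Z, P

# Hypothetical 4-node network split into two modules
W = np.array([[0.0, 0.8, 0.2, 0.0],
              [0.8, 0.0, 0.5, 0.1],
              [0.2, 0.5, 0.0, 0.4],
              [0.0, 0.1, 0.4, 0.0]])
modules = np.array([0, 0, 1, 1])
Z, P = z_and_p(W, modules)
```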
In order to statistically compare the four different networks at a given cost level, we used a rewiring procedure with a step-by-step exchange of a non-existing edge with an existing one and consecutive determination of the network topology metrics after each exchange. This procedure characterizes network stability and the alteration of network topology under very small changes in the network configuration. In a statistical sense, this procedure is similar to bootstrapping with replacement applied to time series. In total, there were about 50,000 rewired networks, for which the mean and standard deviation (SD) of the network topology metrics were determined. Because the rewiring distribution showed a normal shape and a small bias, we were able to obtain a 99.7% confidence interval (CI) for the mean by using the empirical rule: CI = mean ± 3 × SD (P < 0.005).
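A simplified version of such a rewiring step and of the empirical-rule interval is sketched below (an illustrative Python implementation under our own assumptions; the function names, the small 4-node matrix, and the use of the mean clustering coefficient as the tracked metric are hypothetical choices, and far fewer rewirings are drawn than the roughly 50,000 used in the study):

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

def rewire_once(W, rng):
    """Move the weight of one existing edge to one currently absent node pair."""
    W = W.copy()
    iu = np.triu_indices_from(W, k=1)
    existing = [(i, j) for i, j in zip(*iu) if W[i, j] > 0]
    absent = [(i, j) for i, j in zip(*iu) if W[i, j] == 0]
    if not existing or not absent:
        return W
    i1, j1 = existing[rng.integers(len(existing))]
    i2, j2 = absent[rng.integers(len(absent))]
    W[i2, j2] = W[j2, i2] = W[i1, j1]
    W[i1, j1] = W[j1, i1] = 0.0
    return W

def mean_clustering(W):
    G = nx.from_numpy_array(W)
    cc = nx.clustering(G, weight='weight')
    return sum(cc.values()) / len(cc)

W0 = np.array([[0.0, 0.8, 0.2, 0.0],
               [0.8, 0.0, 0.5, 0.1],
               [0.2, 0.5, 0.0, 0.4],
               [0.0, 0.1, 0.4, 0.0]])

samples = [mean_clustering(rewire_once(W0, rng)) for _ in range(1000)]
mean, sd = float(np.mean(samples)), float(np.std(samples))
ci = (mean - 3 * sd, mean + 3 * sd)    # 99.7% interval via the empirical rule
```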
The datasets for this study will not be made publicly available because of restrictions in the consent statement signed by the participants, which allows the present data to be used only for research purposes within the Max Planck Institute for Human Development in Berlin.
CBA: Cytometric bead array
CC: Clustering coefficient
CMV: Cytomegalovirus
CPL: Characteristic path length
DHEA: Dehydroepiandrosterone
Eglobal: Global efficiency
ELISA: Enzyme-linked immunosorbent assay
Elocal: Local efficiency
EM: Episodic memory
Gf: Fluid intelligence
HDL: High-density lipoprotein
IGF-1: Insulin-like growth factor-1
IGFBP-3: IGF-binding protein 3
IgG: Immunoglobulin G
IL: Interleukin
IL-1RA: Interleukin 1 receptor antagonist
LDL: Low-density lipoprotein
sTNF-R: Soluble tumor necrosis factor receptor
TNF: Tumor necrosis factor
WM: Working memory
Franceschi C, Garagnani P, Vitale G, Capri M, Salvioli S. Inflammaging and 'Garb-aging'. Trends Endocrinol Metab. 2017;28(3):199–212.
Di Benedetto S, Müller L, Wenger E, Duzel S, Pawelec G. Contribution of neuroinflammation and immunity to brain aging and the mitigating effects of physical and cognitive interventions. Neurosci Biobehav Rev. 2017;75:114–28.
Beydoun MA, Dore GA, Canas JA, Liang H, Beydoun HA, Evans MK, et al. Systemic inflammation is associated with longitudinal changes in cognitive performance among urban adults. Front Aging Neurosci. 2018;10:313.
Procaccini C, Pucino V, De Rosa V, Marone G, Matarese G. Neuro-endocrine networks controlling immune system in health and disease. Front Immunol. 2014;5:143.
Dantzer R. Neuroimmune interactions: from the brain to the immune system and vice versa. Physiol Rev. 2018;98(1):477–504.
Gottesman RF, Albert MS, Alonso A, Coker LH, Coresh J, Davis SM, et al. Associations between midlife vascular risk factors and 25-year incident dementia in the atherosclerosis risk in communities (ARIC) cohort. JAMA Neurol. 2017;74(10):1246–54.
Alboni S, Maggi L. Editorial: cytokines as players of neuronal plasticity and sensitivity to environment in healthy and pathological brain. Front Cell Neurosci. 2015;9:508.
Ventura MT, Casciaro M, Gangemi S, Buquicchio R. Immunosenescence in aging: between immune cells depletion and cytokines up-regulation. Clin Mol Allergy. 2017;15:21.
Fülop T, Larbi A, Dupuis G, Le Page A, Frost EH, Cohen AA, et al. Immunosenescence and Inflamm-aging as two sides of the same coin: friends or foes? Front Immunol. 2017;8:1960.
De la Fuente M, Gimenez-Llort L. Models of aging of neuroimmunomodulation: strategies for its improvement. Neuroimmunomodulation. 2010;17(3):213–6.
Walker KA, Gottesman RF, Wu A, Knopman DS, Gross AL, Mosley TH Jr, et al. Systemic inflammation during midlife and cognitive change over 20 years: the ARIC study. Neurology. 2019;92(11):e1256–e67.
Du Y, Zhang G, Liu Z. Human cytomegalovirus infection and coronary heart disease: a systematic review. Virol J. 2018;15(1):31.
Garcia Verdecia B, Saavedra Hernandez D, Lorenzo-Luaces P, de Jesus Badia Alvarez T, Leonard Rupale I, Mazorra Herrera Z, et al. Immunosenescence and gender: a study in healthy Cubans. Immun Ageing. 2013;10(1):16.
Kilgour AH, Firth C, Harrison R, Moss P, Bastin ME, Wardlaw JM, et al. Seropositivity for CMV and IL-6 levels are associated with grip strength and muscle size in the elderly. Immun Ageing. 2013;10(1):33.
Pawelec G, Derhovanessian E. Role of CMV in immune senescence. Virus Res. 2011;157(2):175–9.
Pawelec G, McElhaney JE, Aiello AE, Derhovanessian E. The impact of CMV infection on survival in older humans. Curr Opin Immunol. 2012;24(4):507–11.
Lobo-Silva D, Carriche GM, Castro AG, Roque S, Saraiva M. Balancing the immune response in the brain: IL-10 and its regulation. J Neuroinflammation. 2016;13(1):297.
Perry RT, Collins JS, Wiener H, Acton R, Go RC. The role of TNF and its receptors in Alzheimer's disease. Neurobiol Aging. 2001;22(6):873–83.
Nakagomi A, Seino Y, Noma S, Kohashi K, Kosugi M, Kato K, et al. Relationships between the serum cholesterol levels, production of monocyte proinflammatory cytokines and long-term prognosis in patients with chronic heart failure. Intern Med. 2014;53(21):2415–24.
Lee BK, Glass TA, McAtee MJ, Wand GS, Bandeen-Roche K, Bolla KI, et al. Associations of salivary cortisol with cognitive function in the Baltimore memory study. Arch Gen Psychiatry. 2007;64(7):810–8.
Wersching H, Duning T, Lohmann H, Mohammadi S, Stehling C, Fobker M, et al. Serum C-reactive protein is linked to cerebral microstructural integrity and cognitive function. Neurology. 2010;74(13):1022–9.
Di Benedetto S, Gaetjen M, Müller L. The modulatory effect of gender and cytomegalovirus-seropositivity on circulating inflammatory factors and cognitive performance in elderly individuals. Int J Mol Sci. 2019;20(4).
O'Connor JC, McCusker RH, Strle K, Johnson RW, Dantzer R, Kelley KW. Regulation of IGF-I function by proinflammatory cytokines: at the interface of immunology and endocrinology. Cell Immunol. 2008;252(1–2):91–110.
Bhavnani SK, Victor S, Calhoun WJ, Busse WW, Bleecker E, Castro M, et al. How cytokines co-occur across asthma patients: from bipartite network analysis to a molecular-based classification. J Biomed Inform. 2011;44(Suppl 1):S24–30.
Müller V, Perdikis D, von Oertzen T, Sleimen-Malkoun R, Jirsa V, Lindenberger U. Structure and topology dynamics of hyper-frequency networks during rest and auditory oddball performance. Front Comput Neurosci. 2016;10:108.
Müller V, Jirsa V, Perdikis D, Sleimen-Malkoun R, von Oertzen T, Lindenberger U. Lifespan changes in network structure and network topology dynamics during rest and auditory oddball performance. Front Aging Neurosci. 2019;11:138.
Guimera R, Nunes Amaral LA. Functional cartography of complex metabolic networks. Nature. 2005;433(7028):895–900.
Müller L, Fulop T, Pawelec G. Immunosenescence in vertebrates and invertebrates. Immun Ageing. 2013;10(1):12.
Talbot S, Foster SL, Woolf CJ. Neuroimmunity. Annu Rev Immunol. 2016;34:421–47.
Morel PA, Lee REC, Faeder JR. Demystifying the cytokine network: mathematical models point the way. Cytokine. 2017;98:115–23.
Müller L, Pawelec G. Aging and immunity - impact of behavioral intervention. Brain Behav Immun. 2014;39:8–22.
Müller L, Pawelec G. As we age: Does slippage of quality control in the immune system lead to collateral damage? Ageing Res Rev. 2015;23(Pt A):116–23.
Kirk GD, Dandorf S, Li H, Chen Y, Mehta SH, Piggott DA, et al. Differential relationships among circulating inflammatory and immune activation biomediators and impact of aging and human immunodeficiency virus infection in a cohort of injection drug users. Front Immunol. 2017;8:1343.
McAfoose J, Baune BT. Evidence for a cytokine model of cognitive function. Neurosci Biobehav Rev. 2009;33(3):355–66.
Vitkovic L, Bockaert J, Jacque C. "Inflammatory" cytokines: neuromodulators in normal brain? J Neurochem. 2000;74(2):457–71.
Tangestani Fard M, Stough C. A review and hypothesized model of the mechanisms that underpin the relationship between inflammation and cognition in the elderly. Front Aging Neurosci. 2019;11:56.
Bennett JM, Glaser R, Malarkey WB, Beversdorf DQ, Peng J, Kiecolt-Glaser JK. Inflammation and reactivation of latent herpesviruses in older adults. Brain Behav Immun. 2012;26(5):739–46.
Looney RJ, Falsey A, Campbell D, Torres A, Kolassa J, Brower C, et al. Role of cytomegalovirus in the T cell changes seen in elderly individuals. Clin Immunol. 1999;90(2):213–9.
Derhovanessian E, Larbi A, Pawelec G. Biomarkers of human immunosenescence: impact of Cytomegalovirus infection. Curr Opin Immunol. 2009;21(4):440–5.
Di Benedetto S, Derhovanessian E, Steinhagen-Thiessen E, Goldeck D, Muller L, Pawelec G. Impact of age, sex and CMV-infection on peripheral T cell phenotypes: results from the Berlin BASE-II study. Biogerontology. 2015;16(5):631–43.
Fulop T, Larbi A, Pawelec G. Human T cell aging and the impact of persistent viral infections. Front Immunol. 2013;4:271.
Haeseker MB, Pijpers E, Dukers-Muijrers NH, Nelemans P, Hoebe CJ, Bruggeman CA, et al. Association of cytomegalovirus and other pathogens with frailty and diabetes mellitus, but not with cardiovascular disease and mortality in psycho-geriatric patients; a prospective cohort study. Immun Ageing. 2013;10(1):30.
McElhaney JE, Zhou X, Talbot HK, Soethout E, Bleackley RC, Granville DJ, et al. The unmet need in the elderly: how immunosenescence, CMV infection, co-morbidities and frailty are a challenge for the development of more effective influenza vaccines. Vaccine. 2012;30(12):2060–7.
Solana R, Tarazona R, Aiello AE, Akbar AN, Appay V, Beswick M, et al. CMV and Immunosenescence: from basics to clinics. Immun Ageing. 2012;9(1):23.
Whiting CC, Siebert J, Newman AM, Du HW, Alizadeh AA, Goronzy J, et al. Large-scale and comprehensive immune profiling and functional analysis of Normal human aging. PLoS One. 2015;10(7):e0133627.
Nikolich-Zugich J, Goodrum F, Knox K, Smithey MJ. Known unknowns: how might the persistent herpesvirome shape immunity and aging? Curr Opin Immunol. 2017;48:23–30.
Weltevrede M, Eilers R, de Melker HE, van Baarle D. Cytomegalovirus persistence and T-cell immunosenescence in people aged fifty and older: a systematic review. Exp Gerontol. 2016;77:87–95.
Villacres MC, Longmate J, Auge C, Diamond DJ. Predominant type 1 CMV-specific memory T-helper response in humans: evidence for gender differences in cytokine secretion. Hum Immunol. 2004;65(5):476–85.
Morrisette-Thomas V, Cohen AA, Fulop T, Riesco E, Legault V, Li Q, et al. Inflamm-aging does not simply reflect increases in pro-inflammatory markers. Mech Ageing Dev. 2014;139:49–57.
Tegeler C, O'Sullivan JL, Bucholtz N, Goldeck D, Pawelec G, Steinhagen-Thiessen E, et al. The inflammatory markers CRP, IL-6, and IL-10 are associated with cognitive function--data from the Berlin aging study II. Neurobiol Aging. 2016;38:112–7.
Wolkow A, Aisbett B, Reynolds J, Ferguson SA, Main LC. Relationships between inflammatory cytokine and cortisol responses in firefighters exposed to simulated wildfire suppression work and sleep restriction. Physiol Rep. 2015;3(11).
Kamin HS, Kertes DA. Cortisol and DHEA in development and psychopathology. Horm Behav. 2017;89:69–85.
Marques AH, Silverman MN, Sternberg EM. Glucocorticoid dysregulations and their clinical correlates. From receptors to therapeutics. Ann N Y Acad Sci. 2009;1179:1–18.
Alves VB, Basso PJ, Nardini V, Silva A, Chica JE, Cardoso CR. Dehydroepiandrosterone (DHEA) restrains intestinal inflammation by rendering leukocytes hyporesponsive and balancing colitogenic inflammatory responses. Immunobiology. 2016;221(9):934–43.
Wu Z, Li L, Zheng LT, Xu Z, Guo L, Zhen X. Allosteric modulation of sigma-1 receptors by SKF83959 inhibits microglia-mediated inflammation. J Neurochem. 2015;134(5):904–14.
Shields GS, Moons WG, Slavich GM. Inflammation, self-regulation, and health: an immunologic model of self-regulatory failure. Perspect Psychol Sci. 2017;12(4):588–612.
Willis EL, Wolf RF, White GL, McFarlane D. Age- and gender-associated changes in the concentrations of serum TGF-1beta, DHEA-S and IGF-1 in healthy captive baboons (Papio hamadryas anubis). Gen Comp Endocrinol. 2014;195:21–7.
Wilson CJ, Finch CE, Cohen HJ. Cytokines and cognition--the case for a head-to-toe inflammatory paradigm. J Am Geriatr Soc. 2002;50(12):2041–56.
Elenkov IJ. Neurohormonal-cytokine interactions: implications for inflammation, common human diseases and well-being. Neurochem Int. 2008;52(1–2):40–51.
Ashpole NM, Sanders JE, Hodges EL, Yan H, Sonntag WE. Growth hormone, insulin-like growth factor-1 and the aging brain. Exp Gerontol. 2015;68:76–81.
Junnila RK, List EO, Berryman DE, Murrey JW, Kopchick JJ. The GH/IGF-1 axis in ageing and longevity. Nat Rev Endocrinol. 2013;9(6):366–76.
Wennberg AMV, Hagen CE, Machulda MM, Hollman JH, Roberts RO, Knopman DS, et al. The association between peripheral total IGF-1, IGFBP-3, and IGF-1/IGFBP-3 and functional and cognitive outcomes in the Mayo Clinic study of aging. Neurobiol Aging. 2018;66:68–74.
Deijen JB, Arwert LI, Drent ML. The GH/IGF-I Axis and cognitive changes across a 4-year period in healthy adults. ISRN Endocrinol. 2011;2011:249421.
Arwert LI, Veltman DJ, Deijen JB, van Dam PS, Drent ML. Effects of growth hormone substitution therapy on cognitive functioning in growth hormone deficient patients: a functional MRI study. Neuroendocrinology. 2006;83(1):12–9.
Molina DP, Ariwodola OJ, Weiner JL, Brunso-Bechtold JK, Adams MM. Growth hormone and insulin-like growth factor-I alter hippocampal excitatory synaptic transmission in young and old rats. Age (Dordr). 2013;35(5):1575–87.
Bozdagi O, Tavassoli T, Buxbaum JD. Insulin-like growth factor-1 rescues synaptic and motor deficits in a mouse model of autism and developmental delay. Mol Autism. 2013;4(1):9.
Sporns O, Honey CJ, Kotter R. Identification and classification of hubs in brain networks. PLoS One. 2007;2(10):e1049.
Fagiolo G. Clustering in complex directed networks. Phys Rev E Stat Nonlinear Soft Matter Phys. 2007;76(2 Pt 2):026107.
Watts DJ, Strogatz SH. Collective dynamics of 'small-world' networks. Nature. 1998;393(6684):440–2.
Latora V, Marchiori M. Efficient behavior of small-world networks. Phys Rev Lett. 2001;87(19):198701.
Humphries MD, Gurney K, Prescott TJ. The brainstem reticular formation is a small-world, not scale-free, network. Proc Biol Sci. 2006;273(1585):503–11.
Telesford QK, Joyce KE, Hayasaka S, Burdette JH, Laurienti PJ. The ubiquity of small-world networks. Brain Connect. 2011;1(5):367–75.
Rubinov M, Sporns O. Complex network measures of brain connectivity: uses and interpretations. Neuroimage. 2010;52(3):1059–69.
Newman ME. Analysis of weighted networks. Phys Rev E Stat Nonlinear Soft Matter Phys. 2004;70(5 Pt 2):056131.
Bassett DS, Khambhati AN. A network engineering perspective on probing and perturbing cognition with neurofeedback. Ann N Y Acad Sci. 2017;1396(1):126–43.
We would like to express our great appreciation to Elisabeth Wenger for her valuable, constructive, and helpful suggestions during the study. We thank Sandra Düzel for providing the cognitive data, reading the manuscript, and offering constructive remarks. We thank Marcel Gaetjen for his excellent methodological support in applying the CBA-flex system and for providing the FCAP-Array-v3 software. We are thankful to the students of the Structural Plasticity Group for their great contribution to collecting the data reported above. We would like to thank Nadine Taube, Kirsten Becker, and Anke Schepers-Klingebiel for managing all organizational issues. We thank Carola Misgeld for medical data assessment and blood collection. We are grateful to all participants of the study.
This research was supported by the Max Planck Society and is part of the BMBF-funded EnergI consortium (01GQ1421B).
Max Planck Institute for Human Development, Berlin, Germany
Svetlana Di Benedetto, Ludmila Müller & Viktor Müller
University of Tübingen, Tübingen, Germany
Svetlana Di Benedetto & Graham Pawelec
Institute of Clinical Neurobiology, Würzburg, Germany
Stefanie Rauskolb & Michael Sendtner
Department of Internal Medicine I, Division of Endocrinology and Diabetes, University Hospital of Würzburg, Würzburg, Germany
Timo Deutschbein
Svetlana Di Benedetto
Ludmila Müller
Stefanie Rauskolb
Michael Sendtner
Graham Pawelec
Viktor Müller
Conceptualization: SDB, LM, GP, MS, and VM; methodology: VM, SDB, SR, and TD; software: VM; validation: VM, SR, SDB, and TD; formal analysis: SDB and VM; investigation: SDB, TD, and SR; writing-original draft preparation: SDB; writing-review and editing: GP, LM, VM, and SDB. All authors read and approved the final version of the manuscript.
Correspondence to Ludmila Müller.
All participants completed the informed consent form for the study protocol, which was approved by the Ethics Committee of the German Society of Psychology (UL 072014).
The consent forms of all participants are held by the authors' institute. The data in this work have not been published elsewhere. All authors agree to submit this manuscript for publication in this journal.
Figure S1. (A) CC is greatest in lattice networks (blue) and lowest in random networks (green), whereas CC for the real networks (red) is in-between. In contrast, (B) CPL is shortest in random and longest in lattice networks, while the real networks are in-between. CMV, Cytomegalovirus; CMV- m, CMV-seronegative men; CMV+ m, CMV-seropositive men; CMV- f, CMV-seronegative women; CMV+ f, CMV-seropositive women. Figure S2. (A) Local efficiency was highest in regular networks (at least for the cost levels under 45%) and lowest in random networks (at least for the cost levels under 20%), while (B) global efficiency was highest in random (green) and lowest in lattice (blue) networks for practically all levels of wiring costs, with real (red) networks always in-between. CMV, Cytomegalovirus; CMV- m, CMV-seronegative men; CMV+ m, CMV-seropositive men; CMV- f, CMV-seronegative women; CMV+ f, CMV-seropositive women.
Di Benedetto, S., Müller, L., Rauskolb, S. et al. Network topology dynamics of circulating biomarkers and cognitive performance in older Cytomegalovirus-seropositive or -seronegative men and women. Immun Ageing 16, 31 (2019). https://doi.org/10.1186/s12979-019-0171-x
Immunosenescence
Inflammatory markers
Neurotrophic and metabolic factors
hey guys, i follow you up to the point of integrating the final line. How would I integrate [x^-2 · e^-x] dx? thx
1. Elements – sequence-series outputting Write a program that prints the following elements: 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, (hereto referred to as elements) etc., up till a user-entered value. The program should not print more than 10 numbers per
asked by tefo sello on October 3, 2014
pre cal
Find a possible formula for the function graphed below. The x-intercept is marked with a point located at (5,0), and the y-intercept is marked with a point located at (0,−0.833333333333333). The asymptotes are y=−1 and x=6. Give your formula as a
asked by tor on July 28, 2016
A lattice point is a point with integer coordinates. How many lattice points $(x,y)$ with $-100\le x\le 100$ and $-100\le y\le 100$ are on the graph of the parametric equations \begin{align*} x&=30-40\cos t,\\ y&=-50 + 30\cos t? \end{align*}
asked by BABABA on March 4, 2017
A point on a string undergoes simple harmonic motion as a sinusoidal wave passes. When a sinusoidal wave with speed 24 m/s, wavelength 30 cm, and amplitude of 1.0 cm passes, what is the max speed of a point on the string?
asked by Willie on September 24, 2010
What is the freezing point of a solution of 1.17g of 1-naphthol, C10H8O, dissolved in 2.00ml of benzene at 20 degrees C? The density of benzene at 20 degrees C is 876g/ml. Kf for benzene is 5.12 degrees C/m, and benzene's normal freezing point is 5.53
asked by Jake on April 4, 2011
pH indicators change color at their _____. The pH at which the color change happens for a particular indicator molecule depends on its: A. pKa; concentration. B. pKa; Ka or Kb. C. equivalence point; Ka or Kb. D. equivalence point; concentration. Is the
asked by Laura on September 18, 2016
1. Which is a set of collinear points? J,H,I L,H,J J,G,L L,K,H 2. Use the diagram to identify a segment parallel to. 3. The meadure of angles A is 124. Classify the angle as acute, right, obtuse, or straight. Acute Straight Right Obtuse 4. Find the
asked by i need help on March 14, 2017
11. Runner A crosses the starting line of a marathon and runs at an average pace of 5.6 miles per hour. Half an hour later, Runner B crosses the starting line and runs at an average rate of 6.4 miles per hour. If the length of the marathon is 26.2 miles,
What is the final volume of 880. mL of hydrogen gas at 335 mm Hg if the pressure is increased to 735 mm Hg? Assume the temperature and the amount of gas are held constant.
asked by Tiffany22 on May 1, 2010
Bus. Math 123
Problem: Invoice-Nov. 27. Date goods recvd. ? Terms- 2/10 EOM. Last day of discount- ? Final day bill is due ? Complete above.
asked by Brian on July 29, 2017
Chemistry (Gas Laws) I REALLY NEED HELP
A 27.8 liter container is initially evacuated, and then 8.8 grams of water is placed in it. After a time, all the water evaporates, and the temperature is 130.8 degrees C. Find the final pressure.
asked by Christina on May 10, 2011
A hunter holds a 3 kg rifle and fires a bullet of mass 5 g with a muzzle velocity of 300 m/s. What is the final kinetic energy of the bullet and the rifle?
asked by Benson mb on August 6, 2014
Inside an insulated container, a 250-g ice cube at 0°C is added to 200 g of water at 18°C. (i) What is the final temperature of the system? (ii) What is the remaining ice mass?
asked by Meder on April 27, 2013
A gas sample occupies a volume of 400.0 mL at 298.15 K at constant pressure. What is the final temperature when the volume occupied is decreased by 39.0
I posted this last night but it didn't get answered. So I'm posting it again with hopes that, this time, it will be answered. Please solve for x: |x-6|>7 I got x=6 as a final answer, but that doesn't make sense because 6 is not greater than 7. Please
assuming that the calorimeter contains 5.00 * 100 g of water and that the initial temperature is 30.0 degree celcius. what will the final temperature be? (use the proper formula and units, and show your work below)
asked by alice on October 9, 2007
Two carts with masses of 4.3 kg and 3.9 kg move toward each other on a frictionless track with speeds of 5.6 m/s and 5.0 m/s, respec- tively. The carts stick together after colliding head-on. Find their final speed.
asked by Chris on January 26, 2015
A driver of a car traveling at 17.4 m/s applies the brakes, causing a uniform deceleration of 1.8 m/s2. How long does it take the car to accelerate to a final speed of 13.4 m/s? Answer in units of s
Two carts with masses of 4.3 kg and 3.0 kg move toward each other on a frictionless track with speeds of 5.7 m/s and 4.4 m/s, respectively. The carts stick together after colliding head-on. Find their final speed. Answer in units of m/s
asked by jessy on January 30, 2012
A sample of a monatomic ideal gas is originally at 20 °C. Calculate its final temperature of the gas if the pressure is doubled and volume is reduced to one-fourth its initial value.
asked by Anon on October 15, 2013
FIXED Essay. Please proofread.
I went back and made the suggested changes. I will go back and add more specific examples of traditions before I turn my final paper in. I need this asap (before Sunday). Thanks for your help! Who Is An American?
asked by Soly on September 15, 2006
A driver of a car traveling at 17.4 m/s applies the brakes, causing a uniform deceleration of 1.8 m/s2 . How long does it take the car to accelerate to a final speed of 13.4 m/s? Answer in units of s.
asked by nbi on October 15, 2013
A dilution is made during the preparation of a reaction mixture. The compounds are: 2.00 ml of 0.325 KMnO4, 7.50 ml of 0.155 M Oxalic acid, and 10.00 ml of H2O. What are the final concentrations of the compounds?
asked by Yubellkis on February 4, 2017
A car drove a total 5km. Accelerated 10seconds then had a constant velocity. Entire trip was 45 seconds. Was was the initial acceleration and final velocity?
asked by Tim on January 31, 2011
asked by corele on September 14, 2011
A 15.0 g sample of silver is heated to 100.0 °C and then placed in a constant-pressure calorimeter containing 25.0 g of water at 23.0 °C. The final temperature of the system was measured as 25.0 °C. What is the specific heat of silver?
asked by Luke on November 7, 2010
A gas is heated from 252 K to 296 K while its volume is increased from 23.5 L to 30.5 L by moving a large piston within a cylinder. If the original pressure was 0.96 atm, what would be the final pressure?
asked by Sarah Pick on May 8, 2014
hrm 240
i need help with this i have to have a 700-1000 word memo datailing the benefits availible to employees in the position focused on in my final project. that position is a animal rescuer. please help thanks
asked by sam on January 30, 2010
A sample of oxygen that occupies 1.8 × 10−6 mL at 582 mm Hg is subjected to a pressure of 1.51 atm. What will the final volume of the sample be if the temperature is held constant? Answer in units of mL
asked by reis on May 30, 2014
physics2
A 873 kg (1930 lb) dragster completed the 402.0 m (0.2498 mile) run in 4.950 s. If the car has constant acceleration, what would be its acceleration and final velocity?
asked by Hannah on December 11, 2011
A gas occupies 275 ml at 0°C and 610 Torr. What final temperature would be required to increase the pressure to 760 Torr, the volume being held constant?
asked by Chad on May 2, 2013
Biology-ASAP
In a reaction you use 400 ul of 12mM B-galactosidase. Add 2.3 ml of buffer and 0.3ml of ONPG to a cuvette. what is the final concentration of the cuvette? Answer below to one decimal place.
asked by Jane on November 12, 2017
asked by jamella on March 12, 2012
3813.75 Joules of work is being done on a 150 lb object while it is moved 75 ft across a horizontal plane, what is the final velocity if the coefficient of kinetic friction is 0.25 and the initial velocity is 1 ft / s?
asked by Adam on May 7, 2014
A student mixes two water solutions with an initial temperature of 25 degrees C to form a final substance with a mass of 65 grams at 30 degrees C. What is the heat change in kJ?
A piece of a newly synthesized material of mass 25.0 g at 80.0 ◦ C is placed in a calorimeter containing 100.0 g of water at 20.0 ◦ C. If the final temperature of the system is 24.0 ◦ C, what is the specific heat capacity of this material?
asked by Austin on October 2, 2012
Find the area and circumference of the circle (6 m). Use 3.14 for pi. Show your work, round your final answer to the nearest tenth, and label your answer with the correct units for each.
asked by Evelyn on March 25, 2019
A driver of a car traveling at 14.5 m/s applies the brakes, causing a uniform deceleration of 1.7 m/s 2 . How long does it take the car to accelerate to a final speed of 9.50 m/s? Answer in units of s
thermodynamics and kinetics
One mole of an ideal gas undergoes an isothermal reversible expansion, doing 1 kJ of work and doubling the volume. What is its final temperature? How much heat was added to the gas?
asked by Dorota on October 12, 2012
asked by Adam - Require some Help Please on May 7, 2014
What is the molarity of potassium iodide solution prepared by adding 5.810g of KI to sufficient water to make a final solution volume of 250.0mL?
asked by Lisa on February 7, 2013
A car entering the freeway accelerates from 10.2 m/s with a constant acceleration of 4.4 m/s2. What is the car's final velocity when it merges onto the highway and reaches its cruising speed 4.2 seconds later? A 28.7 m/s B 11.2 m/s C 18.5 m/s D 8.28 m/s
asked by Luis on March 25, 2014
A 43 sample of water absorbs 302 of heat. If the water was initially at 29.8, what is its final temperature? i been trying to solve this problem all day long, but i just don't understand how to do it. please help me. thanks..
asked by mari on February 5, 2010
What is the molarity of a calcium sulfate solution prepared by adding 27.89 g of CaSO4 to sufficient water to make a final solution volume of 1.00 L?
30.0 mL of 0.10 M Ca(NO3)2 and 15.0 mL of 0.20 M Na3PO4 solutions are mixed. After the reaction is complete, which of these ions has the lowest concentration in the final solution? A) Na+ B) NO3- C) Ca+2 D) PO4-3 Answer is C.. How do you solve it though?!
asked by Ash on April 3, 2013
The data below are the final exam scores of 10 randomly selected statistics students and the number of hours they studied for the exam. Calculate the correlation coefficient r.
asked by milly on September 28, 2011
A $10, 000 loan advanced on May 1 at 8 1⁄4 % requires two payments of $3000 on July 15 and September 15, and a third payment on November 15. What must the third and final payment be in order to settle the debt?
asked by ppp on April 15, 2017
A balloon holds 29.3 kg of helium. What is the volume of the balloon if the final pressure is 1.2 atm and the temperature is 22°C? using the equation pv=nrt would i put 29.3 kg in place of n? but n has to be in moles so how do you convert that? thanks
asked by hannah on February 14, 2011
A Bulldog is already moving when we first see it. It waddles -57 meters reaching a final velocity of -28 m/s after accelerating at -4 m/s2. WHAT WAS THE DOG'S INITIAL VELOCITY? Every time I try to use v^2= V0+2a(delta x) I end up getting a very big value.
asked by Jessica on August 30, 2014
If the heat from burning 5.800 g of C6H6 is added to 5691 g of water at 21 °C, what is the final temperature of the water? 2C6H6(l) +15O2 (g)----> 12CO2(g)+6H2O(l) +6542
asked by Akle on February 10, 2012
Health Manpower, Education 7 Financing Healthcare
Should the tuition paid by a medical student be considered the full and final payment on his/her education? Is society owned anything given the true cost is not paid for by students?
asked by kamla on September 17, 2007
To: Anonymous; Re: GRAMMAR TEST
I've removed your 50-question final grammar test. You posted none of your answers, so obviously you're looking for answers and not help. The Jiskha Forum's purpose is to HELP students with homework. We do not do assignments or tests.
asked by Ms. Sue on January 26, 2010
Can someone please help me solve this homework problem for my chemistry class.Thanks A 4.0*10^1kg sample of water absorbs 308KJ of heat. If the water was initially at 25.5 C, what is its final temperature?
asked by Eliza on September 16, 2012
Eng. Comp III
I'm not sure what I am supposed to do here I am to write a Bibliographic essay on my sources for my final research essay - what does this entail? I don't want you to do my homework for me,just give me a couple of examples please, Thank You, Sara
asked by Sara on October 3, 2013
asked by Jen on February 10, 2012
Find and show work the final pH when 35mL of 0.200 mol/l sulfuric acid are titrated with 18mL of 0.030 mol/L sodium hydroxide?
asked by S.S on May 26, 2010
Find the volume and surface area of a right cylinder with radius 7.5 cm and height 16 in. Make sure your final answer is in inches. Round your answer to the nearest hundredth.
asked by Andrea on August 15, 2012
Chemistry- balancing Redox
Can anyone please assist showing half reactions balancing, balancing of e- and the additions of H2O, H+ and then since its BASIC!!!! the final addition of OHs? Cr(OH)3+ ClO^-=>CrO4^2- +Cl2 THANKS!
asked by Steve on June 8, 2013
Pharmacy calculation
An Rx calls for hydrocortisone 1% mixed with mupirocin 2% in a 70%/30% combination for a total of 60 grams. What is the % strength of mupirocin in the final product? Can anyone out there help explain how to answer this question? Thanks in advance.
asked by Anonymous on May 12, 2016
A piece of ice weighing 5 g at -20 °C is put into 10 g of ice at 30 °C. Assuming no heat is lost to the outside, calculate the final temperature of the mixture.
asked by Om on June 13, 2018
A 0.5 kg block initially at rest on a frictionless, horizontal surface is acted upon by a force of 5.0 N for a distance of 4.0 m. How much kinetic energy does the block gain? What is its final velocity?
asked by Bia on February 18, 2015
How would I calculate water potential? I have the formula but I can't figure out what to plug in. I only have a data table including initial and final mass of a concentration and the molarity of the concentration.
A sample of oxygen that occupies 2.5 × 10−6 mL at 508 mm Hg is subjected to a pressure of 1.2 atm. What will the final volume of the sample be if the temperature is held constant? Answer in units of mL
asked by Anna on February 25, 2015
which of the following should you not do in the close of a job interview? A. give a final explanation for a weakness that you have. B. show enthusiasm for the job. C. summarize your accomplishments and strengths. D. smile and relax
asked by Donna on October 10, 2011
physics;changes in temp
A 4.0 kg gold bar at 97C is dropped into 0.27 kg of water at 22C. What is the final temperature? Assume the specific heat of gold is 129 J/kg C. Answer in units of C.
asked by sarah on January 21, 2011
Ashley, Jonathan, Sarah, Carlos, and Tanya all made the finals of the National Math Fair Competition last year. Before the final round began each one had to shake hands with all the others. How many handshakes were there? I am not really sure but the
asked by gianina on December 14, 2009
If 150.0 g of Zinc at 100.0 C and 250.0g of liquid water at 10.0 C are mixed in an insulated container, what is the final temperature of the mixture? Czn= .39J/g C Cwater= 4.184J/g C Are we looking for the delta T? If we are, how do you find Q (joules)
asked by miyabi on November 30, 2015
Review the importance of the four major areas with which a writer must be concerned in order to appeal to a reader's visual sensibilities. What type of document design would be most effective for your final project? Explain your answer.
asked by dawn on March 3, 2008
Which identifies the Nazi paramilitary organization responsible for implementing the exterminations of the Final Solution? Gestapo Luftwaffe SS (Schutzstaffel) Brownshirts im confused i read through the text but still can't find the answer but i believe it
asked by James on August 7, 2018
Which was a final action leading to U.S. participation in World War I? A. the bombing of Hawaii B. the addition of Mexico to the Allies C. unrestricted German attacks on submarines D.requests for support from England and France
asked by Iggy on February 23, 2015
Determine the final temperature when 10g of steam at 100C mixes with 500g of water at 25C. I have 4.184x 10(100-x) + 40700 x 10= 500 x (x-25) x 4.18. The book has 10/18 instead of 10 on left hand side of equation. Why??
asked by Nancy on August 18, 2012
Suppose that in your experiment you conducted in lab you neglected to account for pressure due to water vapor in your collected sample of hydrogen gas. How, if at all, would this affect your final value of atomic weight of the metal?
asked by Lillian Nguyen on November 12, 2009
Final question: Consider the function F(x) = (3x^4 - 5x + 3)/(x^4 + 1). Find: domain; vertical, horizontal, or slant asymptotes; x-intercept(s); y-intercept(s); symmetry (with respect to the x-axis, y-axis, or origin); F'(x); critical numbers; F''(x); possible points of inflection.
asked by marlyn on December 5, 2009
A 500g copper at 100celsius is placed in a 200g water at 25celsius which is contained in an aluminum calorimeter that has a mass of 400g. If the final temperature of the whole system is 30celsius, find the specific heat of the lead.
asked by bits on September 29, 2016
Which star shown on the luminosity and temperature of stars graph in the Earth Science Reference Tables is currently at the Sun's final predicted stage of development, and how do you know? a. Polaris b. Procyon B c. Sirius d. Rigel
asked by Angie on May 24, 2009
There is a rod that is 55 cm long and 1 cm in radius that carries an 8 µC charge that is distributed uniformly over its entire length. What would be the magnitude of the electric field 4.0 mm from the rod surface, not near either end? So far I have
asked by Sammy on August 27, 2009
There are two particles of charge +Q located on the top left and bottom right corners of a square. What is the direction of the net electric field at the point on the bottom left corner of the square? And also what would be the general equation be for
asked by Joe on March 7, 2010
Did the individuals involved follow the legal guidelines for searches? Why or why not? Did the exclusionary rule apply in this situation? Why or why not? Provide one example in which the exclusionary rule would apply if the individuals involved had done
I'm unsure how to even start this problem or how I'm suppose to find out the distance at which destructive interference would occur... Two loudspeakers are 1.8 m apart. A person stands 3.0 m from one speaker and 3.5 m from the other. (a) What is the lowest
asked by AP Physics B on November 15, 2009
word question in algebra
oh i think i see instead of ft/miles because i changed the 13 miles to ft it would be 0.61 ft/ft An airplane covered 13 miles of its route while decreasing its altitude by 42,000 feet. Find the slope of the airplane's line of descent. Round to the nearest
asked by carla on April 13, 2007
CHECK MY CHEMISTRY WORK PLEASE ASAP
1. You have been given a sample of unknown molarity. Calculate the molarity of a solution which has been prepared by dissolving 8.75 moles of sodium chloride in enough water to produce a solution of 6.22l. 2. You have a sample which consists of 428g sodium
asked by David on January 10, 2017
In a student experiment, a constant-volume gas thermometer is calibrated in dry ice (−78.5°C ) and in boiling pentane (36.1°C). The separate pressures are 0.896 atm and 1.433 atm. Hint: Use the linear relationship P = A + BT, where A and B are
asked by shawn on December 13, 2014
A railroad car of mass 26500 kg is released from rest in a railway switchyard and rolls to the bottom of a slope Hi =8 m below its original height. At the low point, it collides with and sticks to another car of mass 20000 kg. The two cars roll together up
asked by micha on February 22, 2011
a wheel revolves about a fixed horizontal axis through O (the center). During a certain interval of time, the angular velocity (which initially is 20 rad per second clockwise) changes uniformly at such a rate that the angular displacement is 60 rad
asked by stan on January 31, 2011
Identify the main purpose of a persuasive essay and the elements necessary for it to be effective. Above is the question I had to answer. Here is what I wrote. I am looking for a second opinion if it reads well and covers the question adequately. Thanks...
asked by Jim on February 10, 2011
Write the integer that is represented by a point midway between -76 and 76 on the number line. Choose a nonzero integer for n to show that –n can be evaluated as a positive number. Scoring in Golf. A golfer played 8 rounds on a tournament course with the
re:social studies
The Democratic point of view is that the Republicans are the party of "No," and progress will be severely impeded if Republicans gain a majority in Congress. Republicans will extend the tax cuts for everyone, including the rich. why though? The Republican
asked by gaby on September 19, 2010
An airplane dropping supplies to northern villages that are isolated by severe blizzards and cannot be reached by land vehicles. The airplane is flying at an altitude of 785m and at a constant horizontal velocity of 53.5 m/s. At what horizontal distance
asked by Han on October 4, 2014
Can someone please check my answers? :) 1. What was the greatest threat to a slave family's stability? (1 point) the birth of a new baby an increase in cotton production *** a marriage in the slaveholder's family the slaveholder's acquisition of new
asked by Maggie on January 21, 2014
When 0.752 g of Ca metal is added to 200.0 mL of 0.500 M HCl(aq), a temperature increase of 12.2C is observed. Assume the solution's final volume is 200.0 mL, the density is 1.00 g/mL, and the heat capacity is 4.184 J/gC. (Note: Pay attention to
asked by nickel on February 10, 2007
Suppose that an isotope of ununhexium (Uuh) undergoes a succession of seven alpha-decay reactions. What isotope would be the final product of these changes? Show how you arrived at this answer. Would the final product be a lanthanide, an actinide, a transition
asked by JILL on January 24, 2007
Geomentry
What is the converse of the theorem statement: If a line parallel to one side of a triangle intersects the other two sides, then it divides the two sides proportionally. If two sides of a triangle are divided in the same proportion, then the line
asked by Mack on April 3, 2007
Speakeasy company offers customers a choice between 3 schemes: 1st, a monthly charge of $15 + 5 cents/min; 2nd, a monthly charge of $5 for the line rental + 20 cents/min; 3rd, a (pay as you go) charge of 35 cents/min with no monthly line rental charge.
asked by rima on May 10, 2010
I have read my reading materials and answered to the best of my understanding. If anyone could help I would appreciate it! Comparing Systems of Abnormality Psychologists use several different models to explain abnormal behavior. These different models have
asked by Tina on March 19, 2012
you have to solve the following by finding the value of x and y. can u help please? x+3(x+1)=2x; 1+3(x-1)=4; 2x-2(x+1)=5x; 2(3x-1)=3(x-1); 4(y-1)+3(y+2)=5(y-4); 3y+7+3(y-1)=2(2y+6) These are exercises in standard algebraic manipulation of equations. Basically,
asked by number 420 on April 22, 2007
The arrow and the song poem 1 i shot an arrow into the air, 2 it fell to earth, i know not where; 3 for, so swiftly it flew, the sight 4 could not follow in its flight Part A Which form best describes the poem this excerpt is from A. lyric poem B. concrete
asked by SugarPie on February 23, 2017
A 50.0 kg parachutist jumps out of an airplane at a height of 1.00 km. The parachute opens, and the jumper lands on the ground with a speed of 5.00 m/s. By what amount was the jumper's mechanical energy reduced due to air resistance during this jump? so,
asked by Becca on December 5, 2006
Drag reduction by application of aerodynamic devices in a race car
Devang S. Nath ORCID: orcid.org/0000-0003-3887-21571,
Prashant Chandra Pujari1,
Amit Jain1 &
Vikas Rastogi1
Advances in Aerodynamics volume 3, Article number: 4 (2021) Cite this article
In this era of fast-depleting natural resources, fuel prices continue to rise. Under increasingly stringent environmental regulations, automotive manufacturers are striving to produce efficient vehicles with lower emissions. High-speed cars are expected to deliver uncompromised performance, yet strict emission rules push companies toward alternative routes for meeting performance demands. One of the most sought-after approaches is to improve the aerodynamics of the vehicle. Drag force is one of the major obstacles to achieving high speeds when the vehicle is in motion. This research examines the effects of different add-on devices that reduce drag and make the vehicle aerodynamically streamlined. A more streamlined vehicle can achieve higher speeds and, consequently, improved fuel economy. The three-dimensional car model is developed in SOLIDWORKS v17. Computational Fluid Dynamics (CFD) is performed in the ANSYS™ 17.0 Fluent module to understand the effects of these add-on devices. The drag coefficient (CD), lift coefficient (CL), drag force and lift force are calculated and compared for the different cases. The simulation results show that the various devices serve different functions, with the largest drag reduction of 16.53% obtained for the GT with spoiler and diffuser.
Aerodynamics is the study of how moving objects interact with the air. How the body behaves when it comes in contact with the air determines the forces induced by the air flowing over and around the body, and this is one of the most important factors affecting the performance of a race car [1]. Driving a car is like swimming through an endless ocean of air. Over the past few years, degrading air quality and the shortage of natural resources, primarily oil, have put tremendous pressure on automotive manufacturers to come up with feasible solutions to overcome this crisis. In earlier times, high-speed cars relied only on engine horsepower to maintain performance, but in recent trends design engineers are adopting the concepts of aerodynamics to enhance the efficiency of the vehicle [2, 3]. Overcoming aerodynamic drag consumes about half of the vehicle's energy [4, 5]. Thus, reducing drag is one of the major approaches automotive manufacturers opt for. Shaping the body of the vehicle and including various add-on devices contribute to optimization for low drag, which becomes an essential part of the design process. Drag force predominantly depends upon the velocity, frontal area, and coefficient of drag of the body. It can be expressed as:
$$ {\mathrm{F}}_{\mathrm{D}}=0.5\ {\mathrm{C}}_{\mathrm{D}}\rho\ \mathrm{A}\ {\mathrm{V}}^2 $$
where FD is the drag force; ρ is the density of the fluid medium (air); A is the frontal area of the body facing the fluid; V is the velocity of the body; and CD is the coefficient of drag of the body.
In a similar context, lift force is also a major concern for design engineers, as excessive lift can make the vehicle lose traction at high speeds and can result in fatal injuries to the driver and pedestrians, along with damage to public property. Thus, it is highly desirable that the lift stay well within the stipulated range. Lift force can be expressed as:
$$ {\mathrm{F}}_{\mathrm{L}}=0.5\ {\mathrm{C}}_{\mathrm{L}}\rho\ \mathrm{A}\ {\mathrm{V}}^2 $$
where FL is the lift force; ρ is the density of the fluid medium (air); A is the frontal area of the body facing the fluid; V is the velocity of the body; and CL is the coefficient of lift of the body.
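As a quick numerical illustration of Eqs. (1) and (2), the short Python sketch below evaluates the drag and lift forces for an assumed set of inputs; the coefficient values, frontal area, speed, and air density are illustrative placeholders rather than results from this study.

```python
# Minimal sketch of Eqs. (1) and (2); all input values below are
# illustrative placeholders, not results from this study.

RHO_AIR = 1.225  # assumed air density [kg/m^3]

def aero_force(coefficient: float, frontal_area: float, speed: float) -> float:
    """Return 0.5 * C * rho * A * V^2 in newtons (speed given in m/s)."""
    return 0.5 * coefficient * RHO_AIR * frontal_area * speed ** 2

c_d, c_l = 0.35, 0.30   # hypothetical drag and lift coefficients
area = 2.0              # hypothetical frontal area [m^2]
v = 150 / 3.6           # 150 km/h converted to m/s
print(f"Drag force: {aero_force(c_d, area, v):.1f} N")
print(f"Lift force: {aero_force(c_l, area, v):.1f} N")
```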
From the drag equation, it can be seen that the drag force is proportional to the square of the speed. This implies that the resistance due to air grows rapidly (quadratically) as the speed of the body increases [6]. Flow separation control is also of major interest in fundamental fluid dynamics and various engineering applications [4, 7]. The flow separation location determines the size of the wake region, and the amount of aerodynamic drag follows accordingly. When the air moving over the vehicle separates at the rear end, it leaves a large low-pressure turbulent region behind the vehicle known as the wake. This wake contributes to the formation of pressure drag [8]. Numerous techniques have been explored to control flow separation, either by preventing it or by reducing its effects [4] (Fig. 1).
Flow separation and formation of wake region
To achieve optimized drag for the vehicle, research is being carried out on add-on aerodynamic devices that reduce the resistance offered by the wind and improve the efficiency of the vehicle [9]. In this research, the effects of various aerodynamic devices such as the rear wing, spoiler, diffuser, and fins are examined and the resulting change in the coefficient of drag is investigated.
The spoiler is one of the most widely used and important aerodynamic devices in the automotive domain. Its main purpose is to "spoil" unwanted airflow and channel the flow in an orderly manner, which helps in reducing drag. However, the actual benefit of a spoiler is noticed at higher speeds, approximately above 120 km/h. Commercial vehicles usually adopt it to increase the design appeal of the vehicle, where it provides little or no aerodynamic advantage; thus, it is mostly high-performance vehicles that adopt it to achieve higher speeds. The low-pressure zone behind the vehicle is reduced, so less turbulence is created, which subsequently leads to drag reduction (Fig. 2).
Effect upon drag by using spoiler (https://i.stack.imgur.com/L5rdw.jpg)
The wing is another essential aerodynamic device often used on race cars. A rear wing may look like a spoiler but differs in its functioning. It is shaped like an airplane wing turned upside down [6]. Its main objective is to provide sufficient downforce, or negative lift, so that the vehicle has increased traction and does not lift off at higher speeds [10]. It also allows faster cornering and improves stability at high speeds [11]. However, using a wing adds drag to the vehicle body; thus, for any amount of lift gained, drag also increases [12]. It is generally regarded as a tradeoff between drag and lift (Fig. 3).
Wing at the rear of a car (https://www.lamborghini.com/masterpieces/aventador-superveloce)
For the first time in the automotive industry, the application of fins at the rear of the vehicle's body has been demonstrated by the Swedish hyper-car manufacturer Koenigsegg Automotive AB. Their flagship model "Jesko Absolut", which has the lowest coefficient of drag in their lineup, has fins instead of a wing, as shown in Fig. 4. The fins are inspired by fighter jets and are intended to provide high-speed stability and reduce aerodynamic drag.
Koenigsegg Jesko Absolut (https://www.koenigsegg.com/car/jesko-absolut)
The diffuser is one of the prominent aerodynamic devices found in Formula 1 cars, and the wide versatility offered by diffusers has found its way down to high-speed production vehicles. Diffusers are capable of reducing drag and increasing downforce for road cars [13, 14]. The role of the diffuser is to expand the flow from underneath the car to the rear; this in turn produces a pressure potential that accelerates the flow underneath the car, resulting in reduced pressure [15]. The working principle of diffusers is based upon Bernoulli's principle, which states that "a slow-moving fluid will exert greater pressure than the fast-moving fluid". Thus, the role of the diffuser is to accelerate the flow of air beneath the car so that less pressure is exerted there in comparison to the flow over the outer body. This helps eject the air from below the car. The diffuser then eases this high-speed air back to normal speed and helps fill the area behind the car, making the entire underbody a more effective downforce-generating surface and, importantly, reducing the drag on the vehicle (Fig. 5).
Diffuser in a car (https://www.lamborghini.com/masterpieces/aventador-superveloce)
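For reference, the Bernoulli relation invoked above can be written, for steady incompressible flow along a streamline (a standard result, not specific to this study), as:

$$ p+\frac{1}{2}\rho V^2=\mathrm{constant} $$

Accelerating the underbody flow (larger V) therefore lowers the static pressure p beneath the car relative to the flow over the body, which is the mechanism the diffuser exploits.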
Baseline model GT
The vehicle used for the simulation is shown in Fig. 6. The three-dimensional car model was developed in SOLIDWORKS v17.0. The baseline model's length, width, and height are 4230 mm, 1996 mm, and 1089 mm respectively, while the ground clearance is 92 mm.
Geometric model of the car
Seven different cases, excluding the baseline GT model, with various add-on devices, namely GT with spoiler, GT with wing, GT with diffuser, GT with fins, GT with both spoiler & diffuser, GT with both wing & diffuser, and lastly GT with both fins & diffuser, are illustrated below.
GT with spoiler
In the baseline GT model, a spoiler has been installed at the rear end of the trunk, as shown in Fig. 7, to provide a more streamlined flow by delaying flow separation in an attempt to reduce the overall drag coefficient of the car.
GT with wing
A wing with an 1800 mm span has been installed on the baseline model at the rear end of the trunk, as shown in Fig. 8, with an angle of attack of 31.4 degrees. It is expected that the application of the rear wing will increase the downforce at the cost of increased drag force.
GT with diffuser
A diffuser of length 1000 mm is developed, with an angle of inclination of 12 degrees and a width of 7.50 mm for each tooth, as shown in Fig. 9. A steeper angle of inclination may result in flow separation, which would lead to an increase in drag, while a shallower angle of inclination would be less effective for the required purpose. The diffuser has been installed at the rear of the baseline GT as shown in Fig. 10.
Diffuser model
GT with fins
The fins are installed at the rear part of the baseline GT, each of thickness 15 mm as shown in Fig. 11 to provide overall stability at high speeds. It is expected that the overall drag coefficient of the vehicle will be decreased by the application of the fins.
GT with spoiler and diffuser
In the baseline GT model, both spoiler and diffuser are installed as shown in Fig. 12. It is expected that the combined aerodynamic devices will reduce the drag coefficient of the vehicle by a greater margin.
GT with wing and diffuser
In the baseline GT model, both wing and diffuser are installed as shown in Fig. 13. Application of both of these aerodynamic devices will provide downforce as well as a reduction in drag.
GT with fins and diffuser
In the baseline GT model, both fins and diffuser are installed as shown in Fig. 14. It is expected that the application of both of these aerodynamic devices would help in providing stability and reducing the overall drag coefficient.
The three-dimensional car model was imported into the ANSYS™ Workbench. Computational Fluid Dynamics (CFD) was carried out in the FLUENT module. In the Design Modeler, an enclosure of dimensions 12,000 × 4000 × 8000 mm was developed to form a virtual wind tunnel, as shown in Fig. 15.
An appropriate mesh was developed using the ANSYS™ Mesh Tool. The mesh is coarser in the interior of the domain and finer in the region in contact with the vehicle, as shown in Fig. 16. A total of 210,681 nodes and 1,081,239 elements were generated after meshing.
Meshed model
The numerical simulation was done in commercial code FLUENT [13]. Due to its stability and ease of convergence, the Standard k-epsilon model was selected as a turbulence model [15, 16]. In most high-Reynolds-number flows, such as in this particular research, the wall function approach substantially saves computational resources, because the viscosity-affected near-wall region, in which the solution variables change most rapidly, does not need to be resolved. The wall function approach is popular because it is economical, robust, and reasonably accurate. It is a practical option for the near-wall treatments for industrial flow simulations (https://www.learncax.com/knowledge-base/blog/by-category/cfd/basics-of-y-plus-boundary-layer-and-wall-function-in-turbulent-flows). The turbulence kinetic energy, k, and its rate of dissipation, epsilon ε, are obtained from the following transport equations (https://en.wikipedia.org/wiki/K-epsilon_turbulence_model):
$$ \frac{\partial (\rho k)}{\partial t}+\frac{\partial (\rho k u_i)}{\partial x_i}=\frac{\partial }{\partial x_j}\left[\frac{\mu_t}{\sigma_k}\frac{\partial k}{\partial x_j}\right]+2\mu_t E_{ij}E_{ij}-\rho \varepsilon $$
$$ \frac{\partial (\rho \varepsilon)}{\partial t}+\frac{\partial (\rho \varepsilon u_i)}{\partial x_i}=\frac{\partial }{\partial x_j}\left[\frac{\mu_t}{\sigma_{\varepsilon}}\frac{\partial \varepsilon}{\partial x_j}\right]+C_{1\varepsilon}\frac{\varepsilon}{k}2\mu_t E_{ij}E_{ij}-C_{2\varepsilon}\rho \frac{\varepsilon^2}{k} $$
In these equations, ui represents the velocity component in the corresponding direction, Eij represents the component of the rate of deformation, and μt represents the eddy viscosity. C1ε and C2ε are model constants, while σk and σε are the turbulent Prandtl numbers for k and ε respectively. The values of the constants are C1ε = 1.44, C2ε = 1.92, σk = 1.00 and σε = 1.30. The coupled scheme was set as the iterative algorithm and the residual value was set to 0.001. The frontal surface area of the vehicle is 1.99820 m2. A constant-velocity boundary condition was selected for the inlet boundary. Since the majority of these devices make a difference at higher speeds, the inlet velocity is kept at 150 kmph. The ground was modeled as a moving wall with the same speed of 150 kmph to imitate the road, and the outlet was set to a constant-pressure condition. The boundary conditions used for the simulation are listed below in Table 1.
Table 1 Boundary conditions
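Given this setup, the drag and lift coefficients reported later can be recovered from the simulated forces by inverting Eqs. (1) and (2). The sketch below illustrates the conversion using the stated frontal area and inlet velocity; the force values and the air density are assumed placeholders, not solver outputs from this study.

```python
# Recover C_D and C_L from simulated forces by inverting Eqs. (1) and (2).
# Frontal area and inlet speed follow the setup above; the forces and the
# air density are assumed placeholder values.

RHO_AIR = 1.225          # assumed air density [kg/m^3]
FRONTAL_AREA = 1.99820   # frontal area of the vehicle [m^2]
V_INLET = 150 / 3.6      # inlet velocity: 150 kmph in m/s

def coefficient_from_force(force_newton: float) -> float:
    """Invert F = 0.5 * C * rho * A * V^2 to obtain the coefficient C."""
    dynamic_pressure = 0.5 * RHO_AIR * V_INLET ** 2
    return force_newton / (dynamic_pressure * FRONTAL_AREA)

drag_force_sim = 700.0   # hypothetical drag force from the solver [N]
lift_force_sim = 250.0   # hypothetical lift force from the solver [N]
print(f"C_D = {coefficient_from_force(drag_force_sim):.3f}")
print(f"C_L = {coefficient_from_force(lift_force_sim):.3f}")
```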
All the designed car models with different and combined aerodynamic devices are simulated in ANSYS™ 17.0 Fluent. The results of the drag coefficients obtained are discussed below.
Figure 17 depicts the streamline plots derived from the simulation for the different cases. As seen from the streamline plots, minimal flow separation is favorable, since it leads to less turbulence. The amount of turbulence created behind the rear of the car determines the magnitude of the drag force. In the case of GT with Wing (Fig. 17c), maximum flow separation is observed, which leads to maximum turbulence and thus maximum drag force. On the other hand, the application of the spoiler (Fig. 17b) to the baseline model reduces the turbulence at the back, so the drag force is also reduced considerably. Moreover, the addition of the diffuser (Fig. 17d) to the baseline model gives a more streamlined flow, which reduces the drag force. When the two are combined in the case of GT with Spoiler and Diffuser (Fig. 17f), it is evident from the streamlines that the least turbulence is generated, which further reduces the drag coefficient slightly and yields the lowest drag force of all the cases stated above.
Streamlines after vehicle for different cases
Figure 18 shows the velocity contours for all the cases. From the velocity contours, the recirculation zone behind the vehicle can be visualized. The smaller the recirculation zone, the less turbulence is created, which subsequently leads to lower drag.
Velocity contours behind the vehicle for different cases
From the velocity contours derived from the simulation, it is evident that in the case of GT with Wing (Fig. 18c) there is a large recirculation zone extending from the bottom of the trunk to the wing's edge, which contributes to the maximum drag force. With the addition of a spoiler (Fig. 18b), the recirculation zone is reduced relative to the baseline model, which reduces the drag force considerably. With the application of the diffuser (Fig. 18d), the recirculation zone is reduced, giving a more teardrop-like shape with less flow separation, which reduces the drag force. Furthermore, when the two are combined in the case of GT with Spoiler and Diffuser (Fig. 18f), the recirculation zone is small, which contributes to the lowest drag of all the cases. As seen from the images, the diffuser tends to shrink the recirculation zone toward a more teardrop shape, thereby organizing the flow and eventually reducing the drag coefficient of the vehicle. Figures 19 & 20 show the comparison of the drag coefficient and lift coefficient, respectively, for the different cases.
Comparison of the drag coefficient for different cases
Comparison of the lift coefficient for different cases
From Table 2 and Figs. 17 & 18, it can be seen that the maximum drag occurs in the case of the GT Wing. By applying the rear diffuser, the overall drag of the GT Wing with diffuser is reduced by a small margin. Likewise, the drag of the Baseline GT is further reduced by the diffuser by a considerable margin. The minimum drag is observed in the case of the GT Spoiler with diffuser.
Table 2 Drag coefficient, Lift coefficient and the percentage reduction from the baseline GT for different cases
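The percentage reductions in Table 2 follow from a simple relative comparison of each case against the baseline drag coefficient; the sketch below shows the calculation with hypothetical coefficient values (the actual values are those reported in the table).

```python
# Percentage reduction in drag coefficient relative to the baseline GT.
# The coefficient values below are hypothetical placeholders.

def percent_reduction(cd_baseline: float, cd_case: float) -> float:
    """Return the reduction of cd_case relative to cd_baseline, in percent."""
    return 100.0 * (cd_baseline - cd_case) / cd_baseline

cd_baseline = 0.36           # hypothetical baseline drag coefficient
cd_spoiler_diffuser = 0.30   # hypothetical C_D for GT with spoiler and diffuser
print(f"Drag reduction: {percent_reduction(cd_baseline, cd_spoiler_diffuser):.2f} %")
```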
Table 3 and Fig. 21 show the trend in drag force for different speeds. Three different speeds are considered, viz. 70 kmph, 150 kmph and 300 kmph, to cover multiple scenarios. The graph validates the theory that drag force increases with the square of the speed.
Table 3 Drag Force comparison at different speeds for different cases
Comparison of the drag force at different speeds for different cases
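The trend in Table 3 can be reproduced directly from Eq. (1): because the drag force scales with the square of the speed, doubling the speed from 150 to 300 kmph roughly quadruples the drag force. The sketch below sweeps the three speeds considered here for a fixed, hypothetical drag coefficient and frontal area.

```python
# Drag force versus speed for a fixed drag coefficient, illustrating the
# V^2 scaling behind Table 3; C_D, the frontal area and the air density
# are hypothetical placeholder values.

RHO_AIR = 1.225   # assumed air density [kg/m^3]
C_D = 0.35        # hypothetical drag coefficient
AREA = 2.0        # hypothetical frontal area [m^2]

for speed_kmph in (70, 150, 300):
    v = speed_kmph / 3.6  # convert kmph to m/s
    drag = 0.5 * C_D * RHO_AIR * AREA * v ** 2
    print(f"{speed_kmph:>3d} kmph -> drag force {drag:8.1f} N")
```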
In a similar context, Table 4 and Fig. 22 show the trend in lift force for different speeds. The same three speeds, viz. 70 kmph, 150 kmph and 300 kmph, are considered to cover multiple scenarios. The trend in the graph shows that the lift force grows in magnitude with the square of the speed. In the cases of GT Wing and GT Wing with Diffuser, a prominent negative lift, or downforce, is observed. This is useful for increased traction with the road, which leads to higher cornering speeds, but it also increases the drag (as shown in Fig. 21) and hinders reaching the potential top speed. Thus, this aerodynamic setup is usually preferred in closed-circuit races where the top-speed requirement is modest but cornering speed is of primary concern.
Table 4 Lift Force comparison at different speeds for different cases
Comparison of the lift force at different speeds for different cases
The constant evolution of vehicle aerodynamics has led to the development of devices that enhance the overall aerodynamic characteristics of vehicles; this not only improves the efficiency of the vehicle but also reduces fuel consumption. In this paper, the baseline GT with different add-on aerodynamic devices was studied by numerical simulation. It has been found that aerodynamic drag can be influenced by using different add-on devices. To reduce drag, it is favorable that the flow stays attached to the vehicle's body as long as possible: a streamlined body results in less flow separation and hence less turbulence. In the case of the GT Spoiler with Diffuser, a maximum drag reduction of 16.53% is observed. Although other devices such as fins also reduced drag to some extent, they serve a different function, such as high-speed stability achieved by channeling the flow at the rear. Wings have an altogether different function: the wing indeed increased the drag, but its prime purpose is to provide downforce, at the cost of increased drag, much like a trade-off. Diffusers, on the other hand, decreased the drag in every case in which they were applied. In conclusion, proper optimization can lead to better aerodynamics of the vehicle in different scenarios.
V Velocity of the body
FD Drag force
FL Lift force
ρ Density of the fluid medium (air)
A Frontal area of the body facing the fluid
CD Coefficient of drag of the body
CL Coefficient of lift of the body
All data generated or analyzed during this study are included in this published article with appropriate citations.
The authors would like to thank the Centre for Advanced Studies and Research in Automotive Engineering (CASRAE) for providing the workspace/laboratory and the necessary equipment to carry out the required research.
Department of Mechanical Engineering, Delhi Technological University, Delhi, 110042, India
Devang S. Nath, Prashant Chandra Pujari, Amit Jain & Vikas Rastogi
All authors contributed equally to this work. All authors read and approved the final manuscript.
Correspondence to Devang S. Nath.
Nath, D.S., Pujari, P.C., Jain, A. et al. Drag reduction by application of aerodynamic devices in a race car. Adv. Aerodyn. 3, 4 (2021). https://doi.org/10.1186/s42774-020-00054-7
Drag force
High speed car
|
CommonCrawl
|
Might Oort cloud comets be exchanged between solar systems?
Considering their distance from their parent stars, might Oort cloud objects such as comets be exchanged between passing stars (assuming that other stars have similar Oort clouds)?
oort-cloud
dotancohen
$\begingroup$ Similar: physics.stackexchange.com/questions/41059/… $\endgroup$ – Everyone Nov 14 '13 at 3:01
You can exclude the 'considering the distance' piece - of course Oort Cloud objects could transfer between different gravitational fields.
However what is it you think will make this transfer? Without some sort of gravitational impetus why would one of these objects leave the solar system? And if you do manage to slingshot one out of the solar system at a high enough speed to exit the Sun's gravity, remember that most directions end up very far away from any other solar systems.
Tl;Dr sure, but not very likely
Rory Alsop
$\begingroup$ Methinks the question may have been prompted by Nevski's transit through the Solar System on a hyperbolic trajectory out of the Oort cloud ... $\endgroup$ – Everyone Nov 14 '13 at 3:06
$\begingroup$ My line of thinking is that as stars pass one another on their way around the galaxy, they could 'brush' Oort clouds and thus exchange comets. The same mechanism may also perturb the comets to rain down on their own suns/stars. $\endgroup$ – dotancohen Nov 14 '13 at 5:45
$\begingroup$ Everyone - sure. There is no reason why that can't happen, but the distances involved are huge, so I would expect most of the casualties of that kind of pass to not be captured. $\endgroup$ – Rory Alsop Nov 14 '13 at 7:38
$\begingroup$ Dotancohen - distances! Oort clouds are pretty distant, but not a patch on the distance to another star. And generally there isn't much passing going on :-) $\endgroup$ – Rory Alsop Nov 14 '13 at 7:39
$\begingroup$ Note that this answer supports the assertion that passing stars can influence Oort cloud objects. $\endgroup$ – dotancohen Oct 19 '14 at 12:27
TL;DR In response to your comment that "[the linked answer] supports the assertion that passing stars can influence Oort cloud objects", I will discuss whether this could happen to comets in the Oort cloud that surrounds the solar system.
It can happen, but the stars that exist today don't pass by close enough to yank away a comet at once; however, many star passages could eventually do it. In this answer I attempt to present a way to think about this problem. Skip to the last paragraph to get directly to my answer to your question without the extra detail.
In my answer here I clearly state that many stars have their own Oort cloud and that if they pass by each other close enough the stars will exchange comets. This is a direct answer to your question. It is believed to happen a lot in young star clusters, but you have to realize that older stars are often separated from other stars by a great distance which prohibits this type of exchange.
Now I will discuss influence by stars on the comets in the Oort cloud (the usual one, that surrounds the solar system). This is the topic of chapter of 5.2, Stellar Perturbations, in Julio Angel Fernández's book Comets. It is possible to approximate the influence of a passing star with some reasonable simplifications. I will try to retell Fernández argument here below.
Let's say that a comet is located at a heliocentric distance $r$. Since Oort cloud comets travel very slowly compared to stars, roughly $0.1\,\rm{km\cdot s^{-1}}$ versus $30\,\rm{km\cdot s^{-1}}$, we can assume that the comet is at rest in the heliocentric frame. If we neglect any influence of the star when it is further than $10^5 \rm{AU}$ from its closest approach to the Sun, we only have to be concerned with the time it takes the star to travel $2\times 10^5\rm{AU}$ (imagine the star moving through the Sun), and during this time the comet has only travelled approximately $10^3\rm{AU}$. The star can be taken to travel in a straight line since it is only slightly perturbed by the Sun. This leads (following Fernández's text, as does everything else here) to the integral
$$ \Delta v=\int_{-\infty}^\infty F\,\mathrm{d}t=-\frac{2GM}{VD} $$
where $\Delta v$ is the change of velocity of the comet, $G$ is the universal gravitational constant, $M$ is the mass of the star, $V$ is the velocity of the star and $D$ is the distance of closest approach between the star to the comet. However, we can't forget either that the Sun is also influencing the comet. If the comet is much closer to the Sun than the star the influence of the star can be neglected and vice versa. Since in this question we are dealing with the case "is it possible" I will assume that the comet is far out in the Oort cloud. Under these conditions we get another expression (after taking the Sun into account), i.e.
$$ | \Delta v | \approx \frac{2GMr\cos (\beta )}{VD_\odot^2} $$
where $\beta$ is the angle between the vector from the Sun to the star's closest point of approach and the vector from the Sun to the comet, and $D_\odot$ is the distance between the Sun and the star at that closest point of approach.
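To get a feel for the numbers, here is a quick Python estimate of $|\Delta v|$ using illustrative values of my own (a solar-mass star passing about a parsec from the Sun at $30\,\rm{km\cdot s^{-1}}$, comet at $5\times 10^4\,\rm{AU}$, most favourable geometry):

```python
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M = 2.0e30         # mass of the passing star, roughly one solar mass [kg]
V = 30e3           # relative speed of the star [m/s]
AU = 1.496e11      # astronomical unit [m]
pc = 3.086e16      # parsec [m]

r = 5e4 * AU       # heliocentric distance of the comet
D_sun = 1.0 * pc   # closest approach of the star to the Sun
cos_beta = 1.0     # most favourable geometry

dv = 2 * G * M * r * cos_beta / (V * D_sun**2)
print(f"|dv| ~ {dv:.3f} m/s")  # on the order of centimetres per second
# An Oort cloud comet orbits at roughly 0.1 km/s, so a single passage like this
# changes its velocity by well under one percent.
```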
All this math is somewhat superfluous in the current context; I wanted to show you that it is possible to reason analytically about these things. Your question is whether a comet can be yanked away from its orbit in the Oort cloud and be captured by a passing star. The last formula shows that for stars that actually exist now (not to say that stars or other small bodies have never passed close to the solar system or even gone through it) the change of velocity imparted on the comet by a single star passage is far too small for this to happen. However, the change of velocity accumulates over many star passages, and over a long time it will change the orbit of the comet in a meaningful way. Long-period (LP) comets travel into the solar system on very narrow elliptical orbits, so that their perihelion (closest approach to the Sun) is small but their aphelion (furthest point from the Sun) can lie a great distance outside the Oort cloud. Long-period comets meet their end in different ways: some pass too close to the Sun and disintegrate, others collide with planets, especially the big gas planets, and some get catapulted out of the solar system by a close approach to, for example, Jupiter. Because long-period comets can have orbits that extend beyond the Oort cloud, where they are less influenced by the Sun and more influenced by passing stars, it is possible that they might eventually be yanked away and join another star, although I still don't think it is likely. It would be possible, I think, to use the same kind of math to approximate the change of velocity that stars impart on LP comets to see if it is feasible, but I haven't done it.
|
CommonCrawl
|
Extremely low-resource neural machine translation for Asian languages
Raphael Rubino ORCID: orcid.org/0000-0003-4353-78041,
Benjamin Marie1,
Raj Dabre1,
Atsushi Fujita1,
Masao Utiyama1 &
Eiichiro Sumita1
Machine Translation volume 34, pages 347–382 (2020)
This paper presents a set of effective approaches to handle extremely low-resource language pairs for self-attention based neural machine translation (NMT) focusing on English and four Asian languages. Starting from an initial set of parallel sentences used to train bilingual baseline models, we introduce additional monolingual corpora and data processing techniques to improve translation quality. We describe a series of best practices and empirically validate the methods through an evaluation conducted on eight translation directions, based on state-of-the-art NMT approaches such as hyper-parameter search, data augmentation with forward and backward translation in combination with tags and noise, as well as joint multilingual training. Experiments show that the commonly used default architecture of self-attention NMT models does not reach the best results, validating previous work on the importance of hyper-parameter tuning. Additionally, empirical results indicate the amount of synthetic data required to efficiently increase the parameters of the models leading to the best translation quality measured by automatic metrics. We show that the best NMT models trained on a large amount of tagged back-translations outperform three other synthetic data generation approaches. Finally, comparison with statistical machine translation (SMT) indicates that extremely low-resource NMT requires a large amount of synthetic parallel data obtained with back-translation in order to close the performance gap with the preceding SMT approach.
Neural machine translation (NMT) (Forcada and Ñeco 1997; Cho et al. 2014; Sutskever et al. 2014; Bahdanau et al. 2015) is nowadays the most popular machine translation (MT) approach, following two decades of predominant use by the community of statistical MT (SMT) (Brown et al. 1991; Och and Ney 2000; Zens et al. 2002), as indicated by the majority of MT methods employed in shared tasks during the last few years (Bojar et al. 2016; Cettolo et al. 2016). A recent and widely-adopted NMT architecture, the so-called Transformer (Vaswani et al. 2017), has been shown to outperform other models in high-resource scenarios (Barrault et al. 2019). However, in low-resource settings, NMT approaches in general and Transformer models in particular are known to be difficult to train and usually fail to converge towards a solution which outperforms traditional SMT. Only a few previous works tackle this issue, whether using NMT with recurrent neural networks (RNNs) for low-resource language pairs (Sennrich and Zhang 2019), or the Transformer for distant languages but with several thousands of sentence pairs (Nguyen and Salazar 2019).
In this paper, we present a thorough investigation of training Transformer NMT models in an extremely low-resource scenario for distant language pairs involving eight translation directions. More precisely, we aim to develop strong baseline NMT systems by optimizing hyper-parameters proper to the Transformer architecture, which relies heavily on multi-head attention mechanisms and feed-forward layers. These two elements compose a usual Transformer block. While the number of blocks and the model dimensionality are hyper-parameters to be optimized based on the training data, they are commonly fixed following the model topology introduced in the original implementation.
Following the evaluation of baseline models after an exhaustive hyper-parameter search, we provide a comprehensive view of synthetic data generation making use of the best performing tuned models and following several prevalent techniques. Particularly, the tried-and-tested back-translation method (Sennrich et al. 2016a) is explored, including the use of source side tags and noised variants, as well as the forward translation approach.
The characteristics of the dataset used in our experiments allow for multilingual (many-to-many languages) NMT systems to be trained and compared to bilingual (one-to-one) NMT models. We propose to train two models: using English on the source side and four Asian languages on the target side, or vice-versa (one-to-many and many-to-one). This is achieved by using a data pre-processing approach involving the introduction of tags on the source side to specify the language of the aligned target sentences. By doing so, we leave the Transformer components identical to one-way models to provide a fair comparison with baseline models.
Finally, because monolingual corpora are widely available for many languages, including the ones presented in this study, we contrast low-resource supervised NMT models trained and tuned as baselines to unsupervised NMT making use of a large amount of monolingual data. The latter being a recent alternative to low-resource NMT, we present a broad illustration of available state-of-the-art techniques making use of publicly available corpora and tools.
To the best of our knowledge, this is the first study involving eight translation directions using the Transformer in an extremely low-resource setting with less than 20k parallel sentences available. We compare the results obtained with our NMT models to SMT, with and without large language models trained on monolingual corpora. Our experiments deliver a recipe to follow when training Transformer models with scarce parallel corpora.
The remainder of this paper is organized as follows. Section 2 presents previous work on low-resource MT, multilingual NMT, as well as details about the Transformer architecture and its hyper-parameters. Section 3 introduces the experimental settings, including the datasets and tools used, and the training protocol of the MT systems. Section 4 details the experiments and obtained results following the different approaches investigated in our work, followed by an in-depth analysis in Sect. 5. Finally, Sect. 6 gives some conclusions.
This section presents previous work on low-resource MT, multilingual NMT, followed by the technical details of the Transformer model and its hyper-parameters.
Low-resource MT
NMT systems usually require a large amount of parallel data for training. Koehn and Knowles (2017) conducted a case study on an English-to-Spanish translation task and showed that NMT significantly underperforms SMT when trained on less than 100 million words. However, this experiment was performed with standard hyper-parameters that are typical for high-resource language pairs. Sennrich and Zhang (2019) demonstrated that better translation performance is attainable by performing hyper-parameter tuning when using the same attention-based RNN architecture as in Koehn and Knowles (2017). The authors reported higher BLEU scores than SMT on simulated low-resource experiments in English-to-German MT. Both studies have in common that they did not explore realistic low-resource scenarios nor distant language pairs, leaving unanswered the question of the applicability of their findings to these conditions.
In contrast to parallel data, large quantities of monolingual data can easily be collected for many languages. Previous work proposed several ways to take advantage of monolingual data in order to improve translation models trained on parallel data, such as a separately trained language model to be integrated into the NMT system architecture (Gulcehre et al. 2017; Stahlberg et al. 2018) or exploiting monolingual data to create synthetic parallel data. The latter was first proposed for SMT (Schwenk 2008; Lambert et al. 2011), but remains the most prevalent one in NMT (Barrault et al. 2019) due to its simplicity and effectiveness. The core idea is to use an existing MT system to translate monolingual data to produce a sentence-aligned parallel corpus, whose source or target is synthetic. This corpus is then mixed with the initial, non-synthetic, parallel training data to train a new MT system.
Nonetheless, this approach only leads to slight improvements in translation quality for SMT. It was later exploited in NMT (Sennrich et al. 2016a) using the original monolingual data on the target side and the corresponding generated translations on the source side to retrain NMT. A large body of subsequent work proposed improvements to this approach, which we denote in this paper "backward translation." In particular, Edunov et al. (2018) and Caswell et al. (2019) respectively showed that adding synthetic noise or a tag to the source side of the synthetic parallel data effectively enables the exploitation of very large monolingual data in high-resource settings. In contrast, Burlot and Yvon (2018) showed that the quality of the backward translations has a significant impact on training good NMT systems.
As opposed to backward translation, Imamura and Sumita (2018) proposed to perform "forward translation"Footnote 1 to enhance NMT by augmenting their training data with synthetic data generated by MT on the target side. They only observed moderate improvements, but in contrast to backward translation, this approach does not require a pre-existing NMT system trained for the reverse translation direction.
Recent work has also shown remarkable results in training MT systems using only monolingual data in so-called unsupervised statistical (USMT) and neural (UNMT) machine translation (Artetxe et al. 2018; Lample et al. 2018; Artetxe et al. 2018; Lample et al. 2018). These approaches have only worked in configurations where we do not necessarily need them, i.e., in high-resource scenarios and/or for very close language pairs. Marie et al. (2019) showed that USMT can almost reach the translation quality of supervised NMT for close language pairs (Spanish–Portuguese) while Marie et al. (2019) showed that this approach is unable to generate a translation for distant language pairs (English–Gujarati and English–Kazakh).
To the best of our knowledge, none of these approaches have been successfully applied to NMT systems for extremely low-resource language pairs, such as Asian languages that we experimented with in this paper.
Multilingual NMT
Most NMT architectures are flexible enough to incorporate multiple translation directions, through orthogonal approaches involving data pre-processing or architecture modifications. One such modification is to create separate encoders and decoders for each source and target language and couple them with a shared attention mechanism (Firat et al. 2016). However, this method requires substantial NMT architecture modifications and leads to an increase in learnable model parameters, which rapidly becomes computationally expensive. At the data level, a straightforward and powerful approach was proposed by Johnson et al. (2017), where a standard NMT model was used as a black-box and an artificial token was added on the source side of the parallel data. Such a token could be for instance <2xx>, where xx indicates the target language.
During training, the NMT system uses these source tokens to produce the designated target language and thus allows to translate between multiple language pairs. This approach allows for the so-called zero-shot translation, enabling translation directions that are unseen during training and for which no parallel training data is available (Firat et al. 2016; Johnson et al. 2017).
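For illustration, this source-side tagging reduces to a simple pre-processing step over line-aligned text files; the file names, tag format, and helper below are our own, not taken from Johnson et al. (2017).

```python
# Prepend a target-language token to every source sentence so that one
# multilingual model can be trained on the concatenated corpora.
def tag_source_file(src_path, out_path, target_lang):
    tag = f"<2{target_lang}>"   # e.g. <2ja>, <2lo>, <2ms>, <2vi>
    with open(src_path, encoding="utf-8") as fin, \
         open(out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            fout.write(f"{tag} {line.strip()}\n")

# Hypothetical usage for an English-to-many setup:
for lang in ("ja", "lo", "ms", "vi"):
    tag_source_file(f"train.en-{lang}.en", f"train.en-{lang}.tagged.en", lang)
```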
One of the advantages of multilingual NMT is its ability to leverage high-resource language pairs to improve the translation quality on low-resource ones. Previous studies have shown that jointly learning low-resource and high-resource pairs leads to improved translation quality for the low-resource one (Firat et al. 2016; Johnson et al. 2017; Dabre et al. 2019). Furthermore, the performance tends to improve as the number of language pairs (consequently the training data) increases (Aharoni et al. 2019).
However, to the best of our knowledge, the impact of joint multilingual NMT training with the Transformer architecture in an extremely low-resource scenario for distant languages through the use of a multi-parallel corpus was not investigated previously. This motivates our work on multilingual NMT, which we contrast with bilingual models through an exhaustive hyper-parameter tuning. All the multilingual NMT models trained and evaluated in our work are based on the approach presented in Johnson et al. (2017), which relies on the artificial token indicating the target language and prepended onto the source side of each parallel sentence pair.
Encoder–decoder models encode an input sequence \(\mathbf {X} = \{x_1, x_2, \ldots , x_n\}\) (\(\mathbf {X} \in \mathbb {R}^{n \times d}\)) and produce a corresponding output sequence \(\mathbf {Y} = \{y_1, y_2, \ldots , y_m\}\) (\(\mathbf {Y} \in \mathbb {R}^{m \times d}\)), where d is the model dimensionality and n and m are the input and output sequence lengths, respectively. The Transformer is built on \(L_e\) stacked encoder and \(L_d\) decoder layers, each layer consisting of sub-layers of concatenated multi-head scaled dot-product attention (Eq. 1) and two position-wise feed-forward layers with a non-linear activation function in between, usually a rectified linear unit (ReLU) (Nair and Hinton 2010) (Eq. 2).
$$\begin{aligned} attn ( \varvec{\iota }_1, \varvec{\iota }_2 )&= [ head_1 ; \ldots ; head_i; \ldots ; head_h ] \mathbf {W}^O, \end{aligned}$$
$$\begin{aligned} head_i ( \varvec{\iota }_1, \varvec{\iota }_2 )&= softmax \left( \frac{ \mathbf {Q} \mathbf {W}^Q_i \cdot (\mathbf {K}\mathbf {W}^K_i)^T }{\sqrt{d}}\right) \mathbf {V} \mathbf {W}^V_i, \nonumber \\ ffn \left( \mathbf {H}^l_{ attn }\right)&= ReLU \left( \mathbf {H}^l_{ attn } \mathbf {W}_1 + b_1\right) \mathbf {W}_2 + b_2. \end{aligned}$$
Both encoder and decoder layers contain self-attention and feed-forward sub-layers, while the decoder contains an additional encoder–decoder attention sub-layer.Footnote 2 The hidden representations produced by self-attention and feed-forward sub-layers in the \(l_{e}\)-th encoder layer (\(l_{e}\in \{1,\ldots ,L_{e}\}\)) are formalized by \(\mathbf {H}^{l_{e}}_{ Eattn }\) (Eq. 3) and \(\mathbf {H}^{l_{e}}_{ Effn }\) (Eq. 4), respectively. Equivalently, the decoder sub-layers in the \(l_{d}\)-th decoder layer (\(l_{d}\in \{1,\ldots ,L_{d}\}\)) are formalized by \(\mathbf {H}^{l_{d}}_{ Dattn }\) (Eq. 5), \(\mathbf {H}^{l_{d}}_{ EDattn }\) (Eq. 6), and \(\mathbf {H}^{l_{d}}_{ Dffn }\) (Eq. 7). The sub-layer \(\mathbf {H}^{l_{e}}_{ Eattn }\) for \(l_{e} = 1\) receives the input sequence embeddings \(\mathbf {X}\) instead of \(\mathbf {H}^{l_{e}-1}_{ Effn }\), as it is the first encoder layer. The same applies to the decoder.
$$\begin{aligned} \mathbf {H}^{l_{e}}_{ Eattn }&= \nu \left( \chi \left( \mathbf {H}^{l_{e}-1}_{ Effn }, attn \left( \mathbf {H}^{l_{e}-1}_{ Effn }, \mathbf {H}^{l_{e}-1}_{ Effn } \right) \right) \right) , \end{aligned}$$
$$\begin{aligned} \mathbf {H}^{l_{e}}_{ Effn }&= \nu \left( \chi \left( \mathbf {H}^{l_{e}}_{ Eattn }, ffn \left( \mathbf {H}^{l_{e}}_{ Eattn } \right) \right) \right) , \end{aligned}$$
$$\begin{aligned} \mathbf {H}^{l_{d}}_{ Dattn }&= \nu \left( \chi \left( \mathbf {H}^{l_{d}-1}_{ Dffn }, attn \left( \mathbf {H}^{l_{d}-1}_{ Dffn }, \mathbf {H}^{l_{d}-1}_{ Dffn } \right) \right) \right) , \end{aligned}$$
$$\begin{aligned} \mathbf {H}^{l_{d}}_{ EDattn }&= \nu \left( \chi \left( \mathbf {H}^{l_{d}}_{ Dattn }, attn \left( \mathbf {H}^{l_{d}}_{ Dattn }, \mathbf {H}^{L_{e}}_{ Effn } \right) \right) \right) , \end{aligned}$$
$$\begin{aligned} \mathbf {H}^{l_{d}}_{ Dffn }&= \nu \left( \chi \left( \mathbf {H}^{l_{d}}_{ EDattn }, ffn \left( \mathbf {H}^{l_{d}}_{ EDattn } \right) \right) \right) . \end{aligned}$$
Each sub-layer includes LayerNorm (Ba et al. 2016), noted \(\nu \), parameterized by \(\mathbf {g}\) and \(\mathbf {b}\), with input vector \(\hat{\mathbf {h}}\), mean \(\mu \), and standard deviation \(\varphi \) (Eq. 8), and a residual connection noted \(\chi \) with input vectors \(\hat{\mathbf {h}}\) and \(\hat{\mathbf {h}'}\) (Eq. 9). In the default Transformer architecture, LayerNorm is placed after each non-linearity and residual connection, also called post-norm. However, an alternative configuration is pre-norm, placing the normalization layer prior to the non-linearity.
$$\begin{aligned} \nu ( \hat{\mathbf {h}} )&= \frac{\hat{\mathbf {h}} - \mu }{\varphi } \odot \mathbf {g} + \mathbf {b} \end{aligned}$$
$$\begin{aligned} \chi ( \hat{\mathbf {h}}, \hat{\mathbf {h}'} )&= \hat{\mathbf {h}} + \hat{\mathbf {h}'} \end{aligned}.$$
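The two placements are easiest to compare in code. The PyTorch sketch below is our own simplification of a single self-attention sub-layer (it is not the fairseq implementation used in the experiments) wired either post-norm or pre-norm:

```python
import torch
import torch.nn as nn

class SelfAttentionSubLayer(nn.Module):
    """One self-attention sub-layer with residual connection and LayerNorm."""
    def __init__(self, d_model=512, n_heads=1, pre_norm=False):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.pre_norm = pre_norm

    def forward(self, x):
        if self.pre_norm:
            # pre-norm: normalize first, then attention, then residual
            h = self.norm(x)
            out, _ = self.attn(h, h, h)
            return x + out
        # post-norm (vanilla Transformer): attention, residual, then normalize
        out, _ = self.attn(x, x, x)
        return self.norm(x + out)

x = torch.randn(2, 10, 512)                            # (batch, sequence, model dim)
print(SelfAttentionSubLayer(pre_norm=False)(x).shape)  # torch.Size([2, 10, 512])
print(SelfAttentionSubLayer(pre_norm=True)(x).shape)   # torch.Size([2, 10, 512])
```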
Hyper-parameters
The Transformer architecture has a set of hyper-parameters to be optimized given the training data. Based on the formalism introduced in Sect. 2.3, the architecture-dependent hyper-parameters to optimize are the model dimensionality d, which is equal to the input and output token embedding dimensionality, the number of heads h, the size of the feed-forward layers \( ffn \) and thus the dimensions of the parameter matrices \(\mathbf {W}_1\) and \(\mathbf {W}_2\), as well as the biases \(b_1\) and \(b_2\), and the number of encoder and decoder layers \(L_e\) and \(L_d\), respectively. In addition, it was shown in recent work that the position of the normalization layer \(\nu \), before or after the attention or feed-forward layers, leads to training instability (i.e., no convergence) for deep Transformer architectures or in low-resource settings using an out-of-the-box hyper-parameter configuration (Wang et al. 2019; Nguyen and Salazar 2019).
Our exhaustive hyper-parameter search for NMT models is motivated by the findings of Sennrich and Zhang (2019) where the authors conducted experiments in low-resource settings using RNNs and showed that commonly used hyper-parameters do not lead to the best results. However, the impact of varying the number of layers was not evaluated in their study. This limitation could lead to sub-optimal translation quality, as other studies on mixed or low-resource settings have opted for a reduced number of layers in the Transformer architecture, for instance using 5 encoder and decoder layers instead of the usual 6 (Schwenk et al. 2019; Chen et al. 2019). In these latter publications, the authors also used a smaller number of attention heads, between 2 and 4, compared to the out-of-the-box 8 heads from the vanilla Transformer.
Moreover, a number of recent studies on the Transformer architecture have shown that not all attention heads are necessary. For instance, Voita et al. (2019) evaluated the importance of each head in the multi-head attention mechanism in a layer-wise fashion. They identified their lexical and syntactic roles and proposed a head pruning mechanism. However, their method is only applied to a fully-trained 8-head model. A concurrent study carried out by Michel et al. (2019), both for NMT and natural language inference tasks, focused on measuring the importance of each head through a masking approach. This method showed that a high redundancy is present in the parameters of most heads given the rest of the model and that some heads can be pruned regardless of the test set used. An extension to their approach allows for head masking during the training procedure. Results indicate that the importance of each head is established by the Transformer at an early stage during NMT training.
Finding the best performing architecture based on a validation set given a training corpus is possible through hyper-parameter grid-search or by relying on neural architecture search methods (cf. Elsken et al. 2018). The former solution is realistic in our extremely low-resource scenario and the latter is beyond the scope of our work.
Experimental settings
The aim of the study presented in this paper is to provide a set of techniques which tackle low-resource-related issues encountered when training NMT models and more specifically Transformers. This particular NMT architecture involves a large number of hyper-parameter combinations to be tuned, which is the cornerstone of building strong baselines. The focus of our work is a set of extremely low-resource distant language pairs with a realistic data setting, based on corpora presented in Sect. 3.1. We compare traditional SMT models to supervised and unsupervised NMT models with the Transformer architecture, whose specificities and training procedures are presented in Sect. 3.2. Details about the post-processing and evaluation methods are finally presented in Sect. 3.3.
The dataset used in our experiments contains parallel and monolingual data. The former composes the training set of the baseline NMT systems, as well as the validation and test sets, while the latter constitutes the corpora used to produce synthetic parallel data, namely backward and forward translations.
Parallel corpus
The parallel training, validation, and test sets were extracted from the Asian Language Treebank (ALT) corpus (Riza et al. 2016).Footnote 3 We focus on four Asian languages, i.e., Japanese, Lao, Malay, and Vietnamese, aligned to English, leading to eight translation directions. The ALT corpus comprises a total of 20,106 sentences initially taken from the English Wikinews and translated into the other languages. Thus, the English side of the corpus is considered as original while the other languages are considered as translationese (Gellerstam 1986), i.e., texts that share a set of lexical, syntactic and/or textual features distinguishing them from non-translated texts. Statistics for the parallel data used in our experiments are presented in Table 1.
Table 1 Statistics of the parallel data used in our experiments
Monolingual corpus
Table 2 Statistics for the entire monolingual corpora, comprising 163M, 87M, 737k, 15M, and 169M lines, respectively for English, Japanese, Lao, Malay, and Vietnamese, and the sampled sub-corpora made of 18k (18,088), 100k, 1M, 10M, or 50M lines. Tokens and types for Japanese and Lao are calculated at the character level
We used monolingual data provided by the Common Crawl project,Footnote 4 which were crawled from various websites in any language. We extracted the data from the April 2018 and April 2019 dumps. To identify from the dumps the lines in English, Japanese, Lao, Malay, and Vietnamese, we used the fastText (Bojanowski et al. 2016)Footnote 5 pretrained model for language identification.Footnote 6 The resulting monolingual corpora still contained a large portion of noisy data, such as long sequences of numbers and/or punctuation marks. For cleaning, we decided to remove lines in the corpora that fulfill at least one of the following conditions (a minimal sketch of this filter is given after the list):
more than 25% of its tokens are numbers or punctuation marks.Footnote 7
contains less than 4 tokens.Footnote 8
contains more than 150 tokens.
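The following Python sketch illustrates this filter. The whitespace tokenization and the character classes counted as numbers or punctuation are our assumptions, since the footnotes defining them are not reproduced here.

```python
import string

def keep_line(line):
    """Return True if a monolingual line passes the cleaning filter."""
    tokens = line.split()                      # assumed whitespace tokenization
    if len(tokens) < 4 or len(tokens) > 150:   # too short or too long
        return False
    noisy = sum(
        1 for tok in tokens
        if all(ch.isdigit() or ch in string.punctuation for ch in tok)
    )
    return noisy / len(tokens) <= 0.25         # at most 25% numbers/punctuation

samples = ["This is a clean sentence .", "1 2 3 4 5 6 !!!", "too short"]
print([keep_line(s) for s in samples])         # [True, False, False]
```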
Statistics of the monolingual corpora and their sub-samples used in our experiments are presented in Table 2.
MT systems
We trained and evaluated three types of MT systems: supervised NMT, unsupervised NMT, and SMT systems. The computing architecture at our disposal used for our NMT systems consists of 8 Nvidia Tesla V100 GPUs with the CUDA library version 10.2. Each of the following sections gives the details of one type of MT system, including the tools, pre-processing, hyper-parameters, and training procedures used.
Supervised NMT systems
Our supervised NMT systems include ordinary bilingual (one-to-one) and multilingual (one-to-many and many-to-one) NMT systems. All the systems were trained using the fairseq toolkit (Ott et al. 2019) based on PyTorch.Footnote 9
The only pre-processing applied to the parallel and monolingual data used for our supervised NMT systems was a sub-word transformation method (Sennrich et al. 2016b). Neither tokenization nor case alteration was conducted, in order to keep our method as language agnostic as possible. Because Japanese and Lao do not use spaces between words, we employed a sub-word transformation approach, sentencepiece (Kudo and Richardson 2018), which is based on sequences of characters. Models were learned on joint vocabularies for bilingual NMT systems and on all languages for the multilingual ones. When the same script is used between languages, the shared vocabulary covers all the observed sub-words of these languages. When different scripts are used between languages, only common elements (punctuation, numbers, etc.) are shared. We restricted the number of sub-word transformation operations to 8000 for all the bilingual NMT models and to 32,000 for all the multilingual NMT models.
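For illustration, training such a sub-word model with the sentencepiece Python bindings could look as follows; the file names are placeholders and the options beyond the vocabulary size are our assumptions, as the paper does not list them.

```python
import sentencepiece as spm

# Train a joint sub-word model on the concatenated source and target training text
# (8,000 operations for a bilingual pair; 32,000 for the multilingual models).
spm.SentencePieceTrainer.train(
    input="train.en-vi.concat.txt",  # placeholder: concatenated EN + VI text
    model_prefix="spm_en_vi",
    vocab_size=8000,
    character_coverage=1.0,
)

sp = spm.SentencePieceProcessor(model_file="spm_en_vi.model")
print(sp.encode("This is a test sentence.", out_type=str))
```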
Our main objective is to explore a vast hyper-parameter space to obtain our baseline supervised NMT systems, focusing on specific aspects of the Transformer architecture. Our choice of hyper-parameters (motivated in Sect. 2.4) and their values are presented in Table 3. These are based on our preliminary experiments and on the findings of previous work that showed which Transformer hyper-parameters have the largest impact on translation quality measured by automatic metrics (Sennrich and Zhang 2019; Nguyen and Salazar 2019). The exhaustive search of hyper-parameter combinations resulted in 576 systems trained and evaluated for each translation direction, using the training and validation sets specific to each language pair presented in Table 1. Because the experimental setting presented in this paper focuses on extremely low-resource languages, it is possible to train and evaluate a large number of NMT systems without requiring excessive computing resources.
We trained all the supervised NMT systems following the same pre-determined procedure, based on gradient descent using the Adam optimizer (Kingma and Ba 2014) and the cross-entropy objective with smoothed labels based on a smoothing rate of 0.1. The parameters of the optimizer were \(\beta _1 = 0.9\), \(\beta _2 = 0.98\), \(\epsilon = 10^{-9}\). The learning rate was scheduled as in Vaswani et al. (2017), initialized at \(1.7\times 10^{-7}\) and following an initial 4k steps warmup before decaying at the inverse square root rate. The NMT architecture used the ReLU (Nair and Hinton 2010) activation function for the feed-forward layers and scaled dot-product attention for the self and encoder–decoder attention layers. A dropout rate of 0.1 was applied to all configurations during training and no gradient clipping was used. We used a batch of 1,024 tokens and stopped training after 200 epochs for the baseline systems, 80 epochs for the systems using additional synthetic data and 40 epochs for the largest (10M) data configurations. Bilingual models were evaluated every epoch, while multilingual models were evaluated every 5 epochs, both using BLEU on non post-processed validation sets (including sub-words). The best scoring models were kept for final evaluation.
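A minimal re-implementation of this warmup plus inverse-square-root schedule is sketched below; the peak learning rate is an assumed value, as only the initial rate and warmup length are stated above.

```python
def inverse_sqrt_lr(step, peak_lr=5e-4, init_lr=1.7e-7, warmup_steps=4000):
    """Linear warmup from init_lr to peak_lr, then inverse square-root decay."""
    if step < warmup_steps:
        return init_lr + (peak_lr - init_lr) * step / warmup_steps
    return peak_lr * (warmup_steps / step) ** 0.5

for step in (0, 2000, 4000, 16000, 64000):
    print(f"step {step:6d}: lr = {inverse_sqrt_lr(step):.2e}")
```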
Table 3 Hyper-parameters considered during the tuning of baseline NMT systems
For each trained model, we introduced two additional hyper-parameters for decoding to be tuned, i.e., the decoder beam size and the translation length penalty, as shown in Table 3. The best combination of these two parameters was kept to decode the test set.
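Enumerating such a grid is straightforward, as sketched below; the candidate value lists are placeholders (Table 3 is not reproduced here), chosen only so that their product matches the 576 combinations mentioned above.

```python
from itertools import product

# Placeholder candidate values; the actual lists are given in Table 3 of the paper.
grid = {
    "d_model":    [256, 512],
    "ffn_dim":    [512, 2048, 4096],
    "enc_layers": [1, 2, 4, 6],
    "dec_layers": [1, 2, 4, 6],
    "heads":      [1, 4, 8],
    "norm":       ["pre", "post"],
}

configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(configs))   # 2 * 3 * 4 * 4 * 3 * 2 = 576
print(configs[0])     # e.g. {'d_model': 256, 'ffn_dim': 512, ...}
```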
Unsupervised NMT systems
Our UNMT systems used the Transformer-based architecture proposed by Lample et al. (2018). This architecture relies on a denoising autoencoder as language model during training, on a latent representation shared across languages for the encoder and the decoder, and pre-initialized cross-lingual word embeddings. To set up a state-of-the-art UNMT system, we used for initialization a pre-trained cross-lingual language model, i.e., XLM, as performed by Lample and Conneau (2019).
UNMT was exclusively trained on monolingual data. For Lao and Malay we used the entire monolingual data, while we randomly sampled 50 million lines from each of the English, Japanese, and Vietnamese monolingual data. To select the best models, XLM and then UNMT models, we relied on the same validation sets used by our supervised NMT systems. Since pre-training an XLM model followed by the training of an UNMT system is costly, we chose to train a single multilingual XLM model and UNMT system for all translation directions. To construct the vocabulary of the XLM model, we concatenated all the monolingual data and trained a sentencepiece model with 32,000 operations. The sentencepiece model was then applied to the monolingual data for each language and we fixed the vocabulary size during training at 50,000 tokens.
For training XLM and UNMT, we ran the framework publicly released by Lample and Conneau (2019),Footnote 10 with default parameters, namely: 6 encoder and decoder layers, 8 heads, 1024 embedding and 4096 feed-forward dimensions, layer dropout and attention dropout of 0.1, and the GELU activation function. Note that XLM and UNMT models must use the same hyper-parameters and that we could not tune these hyper-parameters due to the prohibitive cost of training. Our XLM model is trained using the masked language model objective (MLM) which is usually trained monolingually. However, since we have trained a single model using the monolingual data for all the languages with a shared vocabulary, our XLM model is cross-lingual. The training steps for the denoising autoencoder component were language specific while the back-translation training steps combined all translation directions involving English.
SMT systems
We used Moses (Koehn et al. 2007)Footnote 11 and its default parameters to conduct SMT experiments, including MSD lexicalized reordering models and a distortion limit of 6. We learned the word alignments with mgiza for extracting the phrase tables. We used 4-gram language models trained, without pruning, with LMPLZ from the kenlm toolkit (Heafield et al. 2013) on the entire monolingual data. For tuning, we used kb-mira (Cherry and Foster 2012) and selected the best set of weights according to the BLEU score obtained on the validation data after 15 iterations.
In contrast to our NMT systems, we did not apply sentencepiece on the data for our experiments with SMT. Instead, to follow a standard configuration typically used for SMT, we tokenized the data using Moses tokenizer for English, Malay, and Vietnamese, only specifying the language option "-l en" for these three languages. For Japanese, we tokenized our data with MeCabFootnote 12 while we used an in-house tokenizer for Lao.
Post-processing and evaluation
Prior to evaluating NMT system outputs, the removal of spacing and special characters introduced by sentencepiece was applied to translations as a single post-processing step. For the SMT system outputs, detokenization was conducted using the detokenizer.perl tool included in the Moses toolkit for English, Malay and Vietnamese languages, while simple space deletion was applied for Japanese and Lao.
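In practice this reverses the sentencepiece segmentation, roughly as in the sketch below ('▁', U+2581, is the meta-symbol sentencepiece uses to mark word boundaries; the handling of space-free target languages is our assumption based on the description above).

```python
def postprocess(hypothesis, spaced_language=True):
    """Remove sentencepiece artifacts from a decoded sub-word hypothesis."""
    text = hypothesis.replace(" ", "").replace("\u2581", " ").strip()
    # For Japanese and Lao, no spaces are reintroduced at all.
    return text if spaced_language else text.replace(" ", "")

print(postprocess("▁this ▁is ▁a ▁te st ."))  # -> "this is a test."
```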
We measured the quality of translations using two automatic metrics, BLEU (Papineni et al. 2002) and chrF (Popović 2015) implemented in SacreBLEU (Post 2018) and relying on the reference translation available in the ALT corpus. For target languages which do not contain spaces, i.e., Japanese and Lao, the evaluation was conducted using the chrF metric only, while we used both chrF and BLEU for target languages containing spaces, i.e., English, Malay and Vietnamese. As two automatic metrics are used for hyper-parameter tuning of NMT systems for six translation directions, the priority is given to models performing best according to chrF, while BLEU is used as a tie-breaker. System output comparison and statistical significance tests are conducted based on bootstrap resampling using 500 iterations and 1,000 samples, following Koehn (2004).
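For reference, corpus-level scores can be computed with the SacreBLEU Python API roughly as follows; the hypotheses and references here are toy examples, and note that recent SacreBLEU versions report chrF on a 0–100 scale whereas the tables in this paper use 0–1.

```python
import sacrebleu

hypotheses = ["this is a test .", "another output sentence"]
references = [["this is a test .", "another reference sentence"]]  # one reference set

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
# Divide chrF by 100 to match the 0-1 scale used in the paper's tables.
print(f"BLEU = {bleu.score:.1f}, chrF = {chrf.score / 100:.3f}")
```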
Experiments and results
This section presents the experiments conducted in extremely low-resource settings using the Transformer NMT architecture. First, the baseline systems resulting from exhaustive hyper-parameter search are detailed. Second, the use of monolingual data to produce synthetic data is introduced and two setups, backward and forward translation, are presented. Finally, two multilingual settings, English-to-many and many-to-English, are evaluated.
The best NMT architectures and decoder hyper-parameters were determined based on the validation set and, in order to be consistent for all translation directions, on the chrF metric. The final evaluation of the best architectures was conducted by translating the test set; the resulting scores are presented in the following paragraphs and summarized in Table 4.Footnote 13 The best configurations according to automatic metrics were kept for further experiments on each translation direction using monolingual data (presented in Sect. 4.2).
Table 4 Test set results obtained with our SMT and NMT systems before and after hyper-parameter tuning (NMT\(_{ v }\) and NMT\(_{ t }\), respectively), along with the corresponding architectures, for the eight translation directions evaluated in our baseline experiments. Tuned parameters are: d model dimension, ff feed-forward dimension, enc encoder layers, dec decoder layers, h heads, norm normalization position. Results in bold indicate systems significantly better than the others with \(p<0.05\)
English-to-Japanese
The vanilla Transformer architecture (base configuration with embeddings of 512 dimensions, 6 encoder and decoder layers, 8 heads, 2048 feed-forward dimensions, post-normalization) with default decoder parameters (beam size of 4 and length penalty set at 0.6) leads to a chrF score of 0.042, while the best configuration after hyper-parameter tuning reaches 0.212 (\(+0.170\)pts). The final configuration has embeddings of 512 dimensions, 2 encoder and 6 decoder layers with 1 attention head, 512 feed-forward dimensions with post-normalization. For decoding, a beam size of 12 and a length penalty set at 1.4 lead to the best results on the validation set.
Japanese-to-English
The out-of-the-box Transformer architecture leads to a BLEU of 0.2 and a chrF of 0.130. The best configuration reaches 8.6 BLEU and 0.372 chrF (\(+8.4\)pts and \(+0.242\)pts respectively). The best configuration according to the validation set has 512 embedding dimensions, 2 encoder and 6 decoder layers with 1 attention head, 512 feed-forward dimensions and post-normalization. The decoder parameters are set to a beam size of 12 and a length penalty of 1.4.
English-to-Lao
Default architecture reaches a chrF score of 0.131 for this translation direction. After hyper-parameter search, the score reaches 0.339 chrF (\(+0.208\) pts) with a best architecture composed of 1 encoder and 6 decoder layers with 4 heads, 512 dimensions for both embeddings and feed-forward, using post-normalization. The decoder beam size is 12 and the length penalty is 1.4.
Lao-to-English
Without hyper-parameter search, the results obtained on the test set are 0.4 BLEU and 0.170 chrF. Tuning leads to 10.5 (\(+10.1\)pts) and 0.374 (\(+0.204\)pts) for BLEU and chrF respectively. The best architecture has 512 dimensions for both embeddings and feed-forward, pre-normalization, 4 encoder and 6 decoder layers with 1 attention head. A beam size of 12 and a length penalty of 1.4 are used for decoding.
English-to-Malay
For this translation direction, the default setup leads to 0.9 BLEU and 0.213 chrF. After tuning, with a beam size of 4 and a length penalty of 1.4 for the decoder, a BLEU of 33.3 (\(+32.4\)pts) and a chrF of 0.605 (\(+0.392\)pts) are reached. These scores are obtained with the following configuration: embedding and feed-forward dimensions of 512, 6 encoder and decoder layers with 1 head and pre-normalization.
Malay-to-English
Default Transformer configuration reaches a BLEU of 1.7 and a chrF of 0.194. The best configuration found during hyper-parameter search leads to 29.9 BLEU (\(+28.2\)pts) and 0.559 chrF (\(+0.365\)pts), with both embedding and feed-forward dimensions at 512, 4 encoder and 6 decoder layers with 4 heads and post-normalization. For decoding, a beam size of 12 and a length penalty set at 1.4 lead to the best results on the validation set.
English-to-Vietnamese
With no parameter search for this translation direction, 0.9 BLEU and 0.134 chrF are obtained. With tuning, the best configuration reaches 26.8 BLEU (\(+25.9\)pts) and 0.468 chrF (\(+0.334\)pts) using 512 embedding dimensions, 4,096 feed-forward dimensions, 4 encoder and decoder layers with 4 heads and pre-normalization. The decoder beam size is 12 and the length penalty is 1.4.
Vietnamese-to-English
Using default Transformer architecture leads to 0.6 BLEU and 0.195 chrF. After tuning, 21.9 BLEU and 0.484 chrF are obtained with a configuration using 512 dimensions for embeddings and feed-forward, 4 encoder and 6 decoder layers with 4 attention heads, using post-normalization. A beam size of 12 and a length penalty of 1.4 are used for decoding.
General observations
For all translation directions, the out-of-the-box Transformer architecture (NMT\(_v\)) does not lead to the best results and models trained with this configuration fail to converge. Models with tuned hyper-parameters (NMT\(_t\)) improve over NMT\(_v\) for all translation directions, while SMT remains the best performing approach. Of the two model dimensionalities (hyper-parameter noted d) evaluated, the smaller one leads to the best results for all language pairs. For the decoder hyper-parameters, a length penalty of 1.4 leads to the best results on the validation set for all language pairs and translation directions. A beam size of 12 is the best performing configuration for all pairs except EN\(\rightarrow \)MS, for which a beam size of 4 is best. However, for this particular translation direction, only the chrF score is lower when using a beam size of 12 while the BLEU score is identical.
Monolingual data
This section presents the experiments conducted on adding synthetic parallel data to our baseline NMT systems using the monolingual corpora described in Sect. 3.1. Four approaches were investigated: three based on backward translation, where the monolingual corpora used were in the target language, and one based on forward translation, where the monolingual corpora used were in the source language. As backward translation variants, we investigated the use of a specific tag indicating the origin of the data, as well as the introduction of noise into the synthetic data.
Backward translation
Table 5 Test set results obtained when using backward-translated monolingual data in addition to the parallel training data. Baseline NMTt uses only parallel training data and baseline SMT uses monolingual data for its language model
The use of monolingual corpora to produce synthetic parallel data through backward translation (or back-translation, i.e., translating from target to source language) has been popularized by Sennrich et al. (2016a). We made use of the monolingual data presented in Table 2 and the best NMT systems presented in Table 4 (NMTt) to produce additional training data for each translation direction.
We evaluated the impact of different amounts of synthetic data, i.e., 18k, 100k, 1M and 10M, as well as two NMT configurations per language direction: the best performing baseline as presented in Table 4 (NMTt) and the out-of-the-box Transformer configuration, henceforth Transformer baseFootnote 14, noted NMTv. Comparison of the two architectures along with different amounts of synthetic data allows us to evaluate how much backward translations are required in order to switch back to the commonly used Transformer configuration. These findings are illustrated in Sect. 5.
A summary of the best results involving back-translation are presented in Table 5. When comparing the best tuned baseline (NMTt) to the Transformer base (NMTv) with 10M monolingual data, NMT\(_ v \) outperforms NMTt for all translation directions. For three translation directions, i.e., EN → JA, EN → MS and EN → VI, SMT outperforms NMT architectures. For the EN → LO direction, however, NMTv outperforms SMT, which is explained by the small amount of Lao monolingual data which does not allow for a robust language model to be used by SMT.
Tagged backward translation
Table 6 Test set results obtained when using tagged backward-translated monolingual data in addition to the parallel training data. Baseline NMTt uses only parallel training data and baseline SMT uses monolingual data for its language model. Results in bold indicate systems significantly better than the others with \(p<0.05\)
Caswell et al. (2019) empirically showed that adding a unique token at the beginning of each backward translation, i.e., on the source side, acts as a tag that helps the system during training to differentiate backward translations from the original parallel training data. According to the authors, this method is as effective as introducing synthetic noise for improving translation quality (Edunov et al. 2018) (noised backward translation is investigated in Sect. 4.2.3). We believe that the tagged approach is simpler than the noised one since it requires only one editing operation, i.e., the addition of the tag.
To study the effect of tagging backward translation in an extremely low-resource configuration, we performed experiments with the same backward translations used in Sect. 4.2.1, modified by the addition of a tag "[BT]" at the beginning of each sentence on the source side of the synthetic parallel data. Table 6 presents the results obtained with our tagged back-translation experiments along with the best tuned NMT baseline using only parallel data (NMTt) and an SMT system using the monolingual data for its language model.
The tagged back-translation results indicate that the Transformer base architecture trained in this way outperforms all other approaches for the eight translation directions. It also improves over the backward translation approach without tag (see Table 5), according to both automatic metrics. All translation directions benefit from adding 10M backward translations except for the EN → JA direction, for which we observe a plateau and no improvement over the system using 1M synthetic parallel data.
Noised backward translation
As an alternative to tagged backward translation, we propose to evaluate the noised backward translation approach for the best performing configuration obtained in the previous section, namely the Transformer base architecture. Adding noise to the source side of backward translated data has previously been explored in NMT and we followed the exact approach proposed by Edunov et al. (2018). Three types of noise were added to each source sentence: word deletion with a probability of 0.1, word replacement by a specific token with a probability of 0.1, and finally word swapping, with words randomly swapped no further than three positions apart.
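A minimal re-implementation of these three noise operations is sketched below; the placeholder token and the exact sampling scheme for the local shuffle are our assumptions, not details taken from Edunov et al. (2018).

```python
import random

def add_noise(tokens, p_drop=0.1, p_blank=0.1, max_shift=3, blank="<BLANK>"):
    """Noise a source-side token list as in noised backward translation."""
    # 1) word deletion with probability p_drop
    tokens = [t for t in tokens if random.random() > p_drop]
    # 2) word replacement by a placeholder token with probability p_blank
    tokens = [blank if random.random() < p_blank else t for t in tokens]
    # 3) local word swapping: no word moves more than max_shift positions
    keys = [i + random.uniform(0, max_shift + 1) for i in range(len(tokens))]
    return [t for _, t in sorted(zip(keys, tokens), key=lambda p: p[0])]

random.seed(0)
print(add_noise("the quick brown fox jumps over the lazy dog".split()))
```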
Our experiments with noisy backward-translated source sentences were conducted using the Transformer base architecture only, as this configuration led to the best results when using tagged back-translation. The results are presented in Table 7 and show that tagged backward-translation reaches higher scores for both automatic metrics in all translation directions. Systems trained on noised backward translation outperform our SMT system except for the EN → JA, EN → MS, and EN → VI translation directions, i.e., the directions with translationese as target.
Table 7 Test set results obtained when using noised backward-translated monolingual data in addition to the parallel training data. The NMT architecture is the Transformer base configuration. Baseline NMTt uses only parallel training data and baseline SMT uses monolingual data for its language model
Forward translation
In contrast to backward translation, with forward translation we used the synthetic data produced by NMT on the target side. Basically, we obtained this configuration by reversing the training parallel data used to train NMT with backward translations in Sect. 4.2.1. One advantage of this approach is that we have clean and original data on the source side to train a better encoder, while a major drawback is that we have synthetic data on the target side, which potentially coerces the decoder into generating ungrammatical translations. Bogoychev and Sennrich (2019) showed that forward translation is more useful when translating from original texts compared to translating from translationese. We thus expected to obtain more improvement according to the automatic metrics for the EN → XX translation directions than for the XX → EN translation directions compared to the baseline NMT systems.
Table 8 Test set results obtained when using forward-translated monolingual data in addition to the parallel training data. Baseline NMTt uses only parallel training data and baseline SMT uses monolingual data for its language model
The results obtained for our forward translation experiments show that this approach is outperformed by SMT for all translation directions. For translation directions with translationese as target, forward translation improves over the NMT baseline, confirming the findings of previous studies. Results also show that a plateau is reached when using 1M synthetic sentence pairs and only EN → MS benefits from adding 10M pairs (Table 8).
Multilingual models
Experiments on multilingual NMT involved jointly training many-to-one and one-to-many translation models. In particular, we examined two models, one translating from English into four target languages (EN → XX), and one translating from four source languages into English (XX → EN). For both models, the datasets presented in Table 1 were concatenated, while the former model required target-language-specific tokens to be prepended to the source sentences, as described in Sect. 2.2.
In order to compare our baseline results to the multilingual NMT approach, we conducted the same hyper-parameter search as in Sect. 4.1. Table 9 reports the best results along with the best NMT architecture for each translation direction. These results show that the multilingual NMT approach outperforms the bilingual NMT models for 4 translation directions, namely EN → JA, JA → EN, EN → LO, and EN → VI. For 5 translation directions, SMT reaches better performance, while multilingual NMT outperforms SMT for 2 translation directions. For JA → EN, SMT reaches better results compared to multilingual NMT based on the chrF metric, while the two approaches are not significantly different based on BLEU.
Table 9 Test set results obtained with the multilingual NMT approach (NMT\(_{ m }\)) along with the corresponding architectures for the eight translation directions. Two disjoint models are trained, from and towards English. Tuned parameters are: d model dimension, ff feed-forward dimension, enc encoder layers, dec decoder layers, h heads, norm normalization position. NMT\(_{ t }\) are bilingual baseline models after hyper-parameter tuning and SMT is the baseline model without monolingual data for its language model. Results in bold indicate systems significantly better than the others with \(p<0.05\)
Unsupervised multilingual models
In this section, we present the results of our UNMT experiments performed with the system described in Sect. 3.2. Previous work has shown that UNMT can reach good translation quality, but experiments were limited to high-resource language pairs. We intended to investigate the usefulness of UNMT for our extremely low-resource language pairs, including a truly low-resource language, Lao, for which we had less than 1M lines of monolingual data. To the best of our knowledge, this is the first time that UNMT experiments and results for such a small amount of monolingual data are reported. Results are presented in Table 10.
Table 10 Test set results obtained with UNMT (NMT\(_ u \)) in comparison with the best NMT\(_{ t }\) systems, trained without using monolingual data, and systems trained on 10M (or 737k for EN\(\rightarrow \)LO) tagged backward translations with the Transformer base architecture (T-BT). SMT uses monolingual data for its language model
We observed that our UNMT model (noted NMT\(_ u \)) is unable to produce translations for the EN–JA language pair in both translation directions, as exhibited by BLEU and chrF scores close to 0. On the other hand, NMT\(_ u \) performs slightly better than NMT\(_ t \) for LO\(\rightarrow \)EN (+0.2 BLEU, +0.004 chrF) and MS\(\rightarrow \)EN (+2.0 BLEU, +0.011 chrF). However, tagged backward translation requires less monolingual data than NMT\(_ u \) while also exploiting the original parallel data, leading to the best system by a large margin of more than 10 BLEU points for all language pairs except EN-LO.
Analysis and discussion
This section presents the analysis and discussion of the experiments and results obtained in Sect. 4, focusing on four aspects.
The position of the layer normalization component and its impact on the convergence of NMT models as well as on the final results according to automatic metrics.
The integration of monolingual data produced by baseline NMT models with tuned hyper-parameters through four different approaches.
The combination of the eight translation directions in a multilingual NMT model.
The training of unsupervised NMT by using monolingual data only.
Layer normalization position
Layer normalization (Ba et al. 2016) is a crucial component of the Transformer architecture, since it allows for stable training and fast convergence. Some recent studies have shown that the position of the layer normalization mechanism has an impact on the ability to train deep Transformer models in high-resource settings or out-of-the-box Transformer configurations in low-resource settings. For the latter, Nguyen and Salazar (2019) show that when using a Transformer base configuration, one should always put layer normalization prior to the non-linear layers within a Transformer block (i.e., pre-norm).
Our experiments on eight translation directions in extremely low-resource settings validate these findings. More precisely, none of our baseline NMT systems using the Transformer base configuration with post-non-linearity layer normalization (i.e., post-norm) converge. Successfully trained baseline models using post-norm require fewer encoder layers, while pre-norm allows for deeper architectures even with extremely low-resource language pairs.
An interesting observation made on the baseline results is that the decoder part of the Transformer does not suffer from depth-related issues. Additionally, during hyper-parameter tuning, results show that there are larger gaps in terms of chrF scores between configurations using post-norm than between the ones using pre-norm. As a recommendation, we encourage NMT practitioners to include at least the two layer normalization positions, pre- and post-norm, in their hyper-parameter search in order to reach the best performance in low-resource settings. Another recommendation, in order to save computing time during hyper-parameter search, is to limit the encoder depth to a maximum of 4 layers when using post-norm, as our experiments show that 6-layer encoders with post-norm do not converge for any of the language pairs and translation directions of the training corpora presented in this paper.
Producing synthetic parallel data from monolingual corpora has been shown to improve NMT performance. In our work, we investigate four methods to generate such parallel data, including backward and forward translations. For the former method, three variants of the commonly used backward translation are explored, with or without source-side tags indicating the origin of the data, and with or without noise introduced in the source sentences.
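To make these variants concrete, the sketch below shows one possible way to post-process a back-translated source sentence: tagging it with a reserved `<BT>` token or noising it with random word dropping and adjacent swaps. The tag string and noise operations are illustrative assumptions and only approximate the schemes of Caswell et al. (2019) and Edunov et al. (2018).

```python
import random

random.seed(1)

def tag_bt(src):
    """Tagged back-translation: prepend a reserved token to the synthetic source."""
    return "<BT> " + src

def noise_bt(src, p_drop=0.1, n_swaps=2):
    """Noised back-translation: randomly drop words and swap adjacent words."""
    words = [w for w in src.split() if random.random() > p_drop]
    for _ in range(n_swaps):
        if len(words) > 1:
            i = random.randrange(len(words) - 1)
            words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

bt_src = "this sentence was produced by the backward translation model"
print(tag_bt(bt_src))    # tagged variant
print(noise_bt(bt_src))  # randomly corrupted variant of the same sentence
```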
Fig. 1 Test set results obtained by the best tuned baseline (NMT\(_{ t }\)) and Transformer base (NMT\(_{ v }\)) with various amounts of monolingual data for back-translation (bt), tagged back-translation (t-bt), and forward translation (ft)
When using backward translations without tags and up to 10M monolingual sentences, the Transformer base architecture outperforms the best baseline architecture obtained through hyper-parameter search. However, the synthetic data itself was produced by the best baseline NMT system. Our preliminary experiments using the Transformer base to produce the backward translations unsurprisingly led to lower translation quality, due to the low quality of the synthetic data. Nevertheless, using more than 100k backward translations produced by the best baseline architecture makes it possible to switch to the Transformer base. As indicated by Fig. 1, when using backward-translated synthetic data (noted bt), Transformer base (NMT\(_{ v }\)) outperforms the tuned baseline (NMT\(_{ t }\)) following the same trend across the eight translation directions.
Based on the empirical results obtained in Table 6, tagged backward translation is the best performing of all the approaches evaluated in this work, according to automatic metrics and regardless of the quantity of monolingual data used, as summarized in Table 11. Noised backward translations and forward translations both underperform tagged backward translation. Despite our extremely low-resource setting, we could confirm the findings of Caswell et al. (2019) that prepending a tag to each backward-translated sentence is very effective in improving BLEU scores, irrespective of whether the translation direction targets original or translationese texts. However, while Caswell et al. (2019) report that tagged backward translation is as effective as introducing synthetic noise in high-resource settings, we observe a different tendency in our experiments. Using the noised backward translation approach leads to lower scores than tagged backward translation and does not improve over the use of backward translations without noise.
Since we only have very small original parallel data, we assume that degrading the quality of the backward translations is too detrimental to train an NMT system that does not have enough instances of well-formed source sentences to learn the characteristics of the source language. Edunov et al. (2018) report a similar observation in a low-resource configuration using only 80k training parallel sentences. As for the use of forward translations, this approach outperforms the backward translation approach only when using a small quantity of monolingual data. As discussed by Bogoychev and Sennrich (2019), training NMT with synthetic data on the target side can be misleading for the decoder, especially when the forward translations are of very poor quality, such as translations generated by a low-resource NMT system.
Table 11 Test set results obtained with the best NMT systems using backward translations (BT), tagged backward translations (T-BT), forward translations (FT), and noised backward translations (N-BT), regardless of the hyper-parameters and quantity of monolingual data used. Results in bold indicate systems significantly better than the others with \(p<0.05\)
Comparing the BLEU scores of bilingual and multilingual models for the EN → JA, EN → LO and EN → VI translation directions, multilingual models outperform bilingual ones. On the other hand, for the other translation directions, the performance of multilingual models is comparable to or lower than that of bilingual ones. When comparing SMT to multilingual NMT models, well-tuned multilingual models outperform SMT for the EN → JA and EN → LO translation directions, while SMT is as good or better for the remaining six translation directions. In contrast, well-tuned bilingual NMT models are unable to outperform SMT regardless of the translation direction.
Our observations are in line with previous works such as Firat et al. (2016) and Johnson et al. (2017), which showed that multilingual models outperform bilingual ones for low-resource language pairs. However, these works did not focus on extremely low-resource settings and multi-parallel corpora for training multilingual models. Although multilingualism is known to improve performance for low-resource languages, we observe drops in performance for some of the language pairs involved. The work on multi-stage fine-tuning (Dabre et al. 2019), which uses N-way parallel corpora similar to the ones used in our work, supports our observations regarding drops in performance. However, based on the characteristics of our parallel corpora, the multilingual models trained and evaluated in our work do not benefit from additional knowledge by increasing the number of translation directions, because the translation content for all language pairs is the same. Although our multilingual models do not always outperform bilingual NMT or SMT models, we observe that they are useful for difficult translation directions where BLEU scores are below 10 points.
With regards to the tuned hyper-parameters, we noticed slight differences between multilingual and bilingual models. By comparing the best configurations in Tables 9 and 4, we observe that the best multilingual models are mostly those that use pre-normalization of layers (7 translation directions out of 8). In contrast, there is no such tendency for bilingual models, where 3 configurations out of 8 reach the best performance using pre-normalization.
Another observation is related to the number of decoder layers: shallower decoders are better for translating into English whereas deeper ones are better for the opposite direction. This tendency differs from the one exhibited by bilingual models, where deeper decoders are almost always preferred (6 layers in 7 configurations out of 8). We assume that the many-to-English models require a shallower decoder architecture because of the repetitions on the target side of the training and validation data. However, a deeper analysis is required, involving other language pairs and translation directions, to validate this hypothesis. To the best of our knowledge, in the context of multilingual NMT models for extremely low-resource settings, no study of extensive hyper-parameter tuning, optimal configurations, and performance comparisons exists.
Unsupervised NMT
Our main observation from the UNMT experiments is that UNMT only performs similarly to or slightly better than our best supervised NMT systems with tuned hyper-parameters, for EN-LO and MS → EN respectively, while it is significantly worse for all the remaining translation directions. These results are particularly surprising for EN-LO, for which we have only very little data to pre-train an XLM model and UNMT, meaning that we do not necessarily need a large quantity of monolingual data for each language in a multilingual scenario for UNMT. Nonetheless, the gap between supervised MT and UNMT is even more significant if we take the SMT systems as baseline, with a difference of more than 5.0 BLEU for all translation directions. Training an MT system with only 18k parallel sentences is thus preferable to using only monolingual data for these extremely low-resource configurations. Furthermore, using only parallel data for supervised NMT is unrealistic since we also have access to at least the same monolingual data used by UNMT. The gap is then even greater when exploiting monolingual data as tagged backward translation for training supervised NMT.
Our results are in line with the few previous works on unsupervised MT for distant low-resource language pairs (Marie et al. 2019), but to the best of our knowledge this is the first time that results for such language pairs are reported using an UNMT system initialized with a pre-trained cross-lingual language model (XLM). We assume that the poor performance reached by our UNMT models is mainly due to the significant lexical and syntactic differences between English and the other languages, making the training of a single multilingual embedding space for these languages a very hard task (Søgaard et al. 2018). This is well exemplified by the EN-JA language pair, which involves two languages with completely different writing systems, for which BLEU and chrF scores are close to 0. The performance of UNMT is thus far from the performance of supervised MT. Further research is necessary in UNMT for distant low-resource language pairs in order to improve it and rival the performance of supervised NMT, as observed for more similar language pairs such as French–English and English–German (Artetxe et al. 2018; Lample et al. 2018; Artetxe et al. 2018; Lample et al. 2018).
This paper presented a study on extremely low-resource language pairs with the Transformer NMT architecture, involving Asian languages and eight translation directions. After conducting an exhaustive hyper-parameter search focusing on the specificities of the Transformer to define our baseline systems, we trained and evaluated translation models making use of various amounts of synthetic data. Four different approaches were employed to generate synthetic data, including backward translations, with or without specific tags and added noise, and forward translations, which were then contrasted with an unsupervised NMT approach built only using monolingual data. Finally, based on the characteristics of the parallel data used in our experiments, we jointly trained multilingual NMT systems from and towards English.
The main objective of the work presented in this paper is to deliver a recipe allowing MT practitioners to train NMT systems in extremely low-resource scenarios. Based on the empirical evidence gathered over eight translation directions, we make the following recommendations.
First, an exhaustive hyper-parameter search, including the position of layer normalization within the Transformer block, is crucial for both a strong baseline and producing synthetic data of sufficient quality.
Second, a clear preference for backward over forward translations as the synthetic data generation approach.
Third, generating enough backward translations to benefit from the large number of parameters available in the commonly used Transformer architecture, in terms of both the number and dimensionality of layers.
Fourth, adding a tag on the source side of backward translations to indicate their origin, which leads to higher performance than not adding a tag or introducing noise.
As future work, we plan to enlarge the search space for hyper-parameter tuning, including more general parameters which are not specific to the Transformer architecture, such as various dropout rates, vocabulary sizes and learning rates. Additionally, we want to increase the number of learnable parameters in the Transformer architecture to avoid reaching a plateau when the amount of synthetic training data increases. Finally, we will explore other multilingual NMT approaches in order to improve the results obtained in this work and to make use of tagged backward translation as we did for the bilingual models.
This is denoted "self-training" in their paper.
For mode details about the self-attention components, please refer to Vaswani et al. (2017).
http://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/.
https://commoncrawl.org/.
https://github.com/facebookresearch/fastText/, version 0.9.1.
We used the large model presented in https://fasttext.cc/blog/2017/10/02/blog-post.html.
For the sake of simplicity, we only considered tokens that contain at least one Arabic numeral or one punctuation mark referenced in the Python3 constant "string.punctuation".
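A minimal sketch of this filtering criterion, assuming whitespace-tokenized input:

```python
import string

def is_numeral_or_punct_token(token):
    """True if the token contains at least one Arabic numeral or punctuation mark."""
    return any(ch in "0123456789" or ch in string.punctuation for ch in token)

tokens = "The 2 systems scored 25.3 BLEU !".split()
print([t for t in tokens if is_numeral_or_punct_token(t)])  # ['2', '25.3', '!']
```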
For English, Malay, and Vietnamese, we counted the tokens returned by the Moses tokenizer with the option "-l en", while for Japanese and Lao we counted the tokens returned by sentencepiece (Kudo and Richardson 2018). Note that while we refer to these tokenized results for this filtering purpose, for training and evaluating our MT systems we applied dedicated pre-processing for each type of MT system.
Python version 3.7.5, PyTorch version 1.3.1.
https://github.com/facebookresearch/XLM.
https://github.com/moses-smt/mosesdecoder.
https://taku910.github.io/mecab/.
All the results obtained during hyper-parameter search are submitted as supplementary material to this paper.
The base model and its hyper-parameters as presented in Vaswani et al. (2017).
Aharoni R, Johnson M, Firat O (2019) Massively multilingual neural machine translation. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp 3874–3884. Association for Computational Linguistics, Minneapolis, USA. https://doi.org/10.18653/v1/N19-1388. https://aclweb.org/anthology/N19-1388
Artetxe M, Labaka G, Agirre E (2018) Unsupervised statistical machine translation. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp 3632–3642. Association for Computational Linguistics, Brussels, Belgium. https://doi.org/10.18653/v1/D18-1399. https://aclweb.org/anthology/D18-1399
Artetxe M, Labaka G, Agirre E, Cho K (2018) Unsupervised neural machine translation. In: Proceedings of the 6th international conference on learning representations. Vancouver, Canada. https://openreview.net/forum?id=Sy2ogebAW
Ba JL, Kiros JR, Hinton GE (2016) Layer normalization. arXiv:1607.06450
Bahdanau D, Cho K, Bengio Y (2015) Neural machine translation by jointly learning to align and translate. In: Proceedings of the 3rd international conference on learning representations. San Diego, USA. arxiv:1409.0473
Barrault L, Bojar O, Costa-jussà MR, Federmann C, Fishel M, Graham Y, Haddow B, Huck M, Koehn P, Malmasi S, Monz C, Müller M, Pal S, Post M, Zampieri M (2019) Findings of the 2019 conference on machine translation (WMT19). In: Proceedings of the fourth conference on machine translation (Volume 2: Shared Task Papers, Day 1), pp 1–61. Association for Computational Linguistics, Florence, Italy. https://doi.org/10.18653/v1/W19-5301. https://aclweb.org/anthology/W19-5301
Bogoychev N, Sennrich R (2019) Domain, translationese and noise in synthetic data for neural machine translation. arXiv:1911.03362
Bojanowski P, Grave E, Joulin A, Mikolov T (2016) Enriching word vectors with subword information. arXiv:1607.04606
Bojar O, Chatterjee R, Federmann C, Graham Y, Haddow B, Huck M, Jimeno Yepes A, Koehn P, Logacheva V, Monz C, Negri M, Neveol A, Neves M, Popel M, Post M, Rubino R, Scarton C, Specia L, Turchi M, Verspoor K, Zampieri M (2016) Findings of the 2016 conference on machine translation. In: Proceedings of the first conference on machine translation, pp 131–198. Association for Computational Linguistics, Berlin, Germany. https://doi.org/10.18653/v1/W16-2301. https://aclweb.org/anthology/W16-2301
Brown PF, Lai JC, Mercer RL (1991) Aligning sentences in parallel corpora. In: Proceedings of the 29th annual meeting on association for computational linguistics, pp 169–176. Association for Computational Linguistics, Berkeley, USA. https://doi.org/10.3115/981344.981366. https://aclweb.org/anthology/P91-1022
Burlot F, Yvon F (2018) Using monolingual data in neural machine translation: a systematic study. In: Proceedings of the third conference on machine translation: research papers, pp 144–155. Association for Computational Linguistics, Brussels, Belgium. https://doi.org/10.18653/v1/W18-6315. https://aclweb.org/anthology/W18-6315
Caswell I, Chelba C, Grangier D (2019) Tagged back-translation. In: Proceedings of the fourth conference on machine translation (volume 1: research papers), pp 53–63. association for computational linguistics, Florence, Italy. https://doi.org/10.18653/v1/W19-5206. https://aclweb.org/anthology/W19-5206.pdf
Cettolo M, Jan N, Sebastian S, Bentivogli L, Cattoni R, Federico M (2016) The IWSLT 2016 evaluation campaign. In: Proceedings of the 13th international workshop on spoken language translation. Seatle, USA. https://workshop2016.iwslt.org/downloads/IWSLT_2016_evaluation_overview.pdf
Chen PJ, Shen J, Le M, Chaudhary V, El-Kishky A, Wenzek G, Ott M, Ranzato M (2019) Facebook AI's WAT19 Myanmar-English Translation Task submission. In: Proceedings of the 6th Workshop on Asian Translation, pp 112–122. Association for Computational Linguistics, Hong Kong, China. https://doi.org/10.18653/v1/D19-5213. https://aclweb.org/anthology/D19-5213
Cherry C, Foster G (2012) Batch tuning strategies for statistical machine translation. In: Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp 427–436. Association for Computational Linguistics, Montréal, Canada. https://aclweb.org/anthology/N12-1047
Cho K, van Merriënboer B, Bahdanau D, Bengio Y (2014) On the properties of neural machine translation: Encoder–decoder approaches. In: Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pp. 103–111. Association for Computational Linguistics, Doha, Qatar. https://doi.org/10.3115/v1/W14-4012. https://aclweb.org/anthology/W14-4012/
Dabre R, Fujita A, Chu C (2019) Exploiting multilingualism through multistage fine-tuning for low-resource neural machine translation. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 1410–1416. Association for Computational Linguistics, Hong Kong, China. https://doi.org/10.18653/v1/D19-1146. https://aclweb.org/anthology/D19-1146
Edunov S, Ott M, Auli M, Grangier D (2018) Understanding back-translation at scale. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 489–500. Association for Computational Linguistics, Brussels, Belgium. https://doi.org/10.18653/v1/D18-1045. https://aclweb.org/anthology/D18-1045
Elsken T, Metzen JH, Hutter F (2018) Neural architecture search: a survey. arXiv:1808.05377
Firat O, Cho K, Bengio Y (2016) Multi-way, multilingual neural machine translation with a shared attention mechanism. In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 866–875. Association for Computational Linguistics, San Diego, USA. https://doi.org/10.18653/v1/N16-1101. https://aclweb.org/anthology/N16-1101
Firat O, Sankaran B, Al-Onaizan Y, Yarman Vural FT, Cho K (2016) Zero-resource translation with multi-lingual neural machine translation. In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 268–277. Association for Computational Linguistics, Austin, USA. https://doi.org/10.18653/v1/D16-1026. https://aclweb.org/anthology/D16-1026
Forcada ML, Ñeco RP (1997) Recursive hetero-associative memories for translation. In: Mira J, Moreno-Díaz R, Cabestany J (eds) Biological and artificial computation: from neuroscience to technology. Springer, Berlin. https://doi.org/10.1007/BFb0032504
Gellerstam M (1986) Translationese in Swedish novels translated from English. Transl Stud Scand 1:88–95
Gulcehre C, Firat O, Xu K, Cho K, Bengio Y (2017) On integrating a language model into neural machine translation. Comput Speech Lang 45:137–148. https://doi.org/10.1016/j.csl.2017.01.014
Heafield K, Pouzyrevsky I, Clark JH, Koehn P (2013) Scalable modified Kneser-Ney language model estimation. In: Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pp. 690–696. Association for Computational Linguistics, Sofia, Bulgaria. https://aclweb.org/anthology/P13-2121/
Imamura K, Sumita E (2018) NICT self-training approach to neural machine translation at NMT-2018. In: Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pp. 110–115. Association for Computational Linguistics, Melbourne, Australia. https://doi.org/10.18653/v1/W18-2713. https://aclweb.org/anthology/W18-2713
Johnson M, Schuster M, Le QV, Krikun M, Wu Y, Chen Z, Thorat N, Viégas F, Wattenberg M, Corrado G, Hughes M, Dean J (2017) Google's multilingual neural machine translation system: enabling zero-shot translation. Trans Assoc Comput Linguist 5:339–351. https://doi.org/10.1162/tacl_a_00065
Kingma DP, Ba J (2014) Adam: a method for stochastic optimization. arXiv:1412.6980
Koehn P (2004) Statistical significance tests for machine translation evaluation. In: Proceedings of the 2004 conference on empirical methods in natural language processing, pp. 388–395
Koehn P, Hoang H, Birch A, Callison-Burch C, Federico M, Bertoldi N, Cowan B, Shen W, Moran C, Zens R, Dyer C, Bojar O, Constantin A, Herbst E (2007) Moses: Open source toolkit for statistical machine translation. In: Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pp. 177–180. Association for Computational Linguistics, Prague, Czech Republic. https://aclweb.org/anthology/P07-2045
Koehn P, Knowles R (2017) Six challenges for neural machine translation. In: Proceedings of the First Workshop on Neural Machine Translation, pp. 28–39. Association for Computational Linguistics, Vancouver, Canada. https://doi.org/10.18653/v1/W17-3204. https://aclweb.org/anthology/W17-3204
Kudo T, Richardson J (2018) Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 66–71. Association for Computational Linguistics, Brussels, Belgium. https://doi.org/10.18653/v1/D18-2012. https://aclweb.org/anthology/D18-2012
Lambert P, Schwenk H, Servan C, Abdul-Rauf S (2011) Investigations on translation model adaptation using monolingual data. In: Proceedings of the Sixth Workshop on Statistical Machine Translation, pp. 284–293. Association for Computational Linguistics, Edinburgh, Scotland. https://aclweb.org/anthology/W11-2132
Lample G, Conneau A (2019) Cross-lingual language model pretraining. In: Proceedings of Advances in Neural Information Processing Systems 32, pp. 7057–7067. Curran Associates, Inc., Vancouver, Canada. https://papers.nips.cc/paper/8928-cross-lingual-language-model-pretraining
Lample G, Conneau A, Denoyer L, Ranzato M (2018) Unsupervised machine translation using monolingual corpora only. In: Proceedings of the 6th International Conference on Learning Representations. Vancouver, Canada. https://openreview.net/forum?id=rkYTTf-AZ
Lample G, Ott M, Conneau A, Denoyer L, Ranzato M (2018) Phrase-based & neural unsupervised machine translation. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 5039–5049. Association for Computational Linguistics, Brussels, Belgium. https://doi.org/10.18653/v1/D18-1549. https://aclweb.org/anthology/D18-1549
Marie B, Dabre R, Fujita A (2019) NICT's machine translation systems for the WMT19 similar language translation task. In: Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pp. 208–212. Association for Computational Linguistics, Florence, Italy. https://doi.org/10.18653/v1/W19-5428. https://aclweb.org/anthology/W19-5428
Marie B, Sun H, Wang R, Chen K, Fujita A, Utiyama M, Sumita E (2019) NICT's unsupervised neural and statistical machine translation systems for the WMT19 news translation task. In: Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pp. 294–301. Association for Computational Linguistics, Florence, Italy. https://doi.org/10.18653/v1/W19-5330. https://aclweb.org/anthology/W19-5330
Michel P, Levy O, Neubig G (2019) Are sixteen heads really better than one? In: Proceedings of Advances in Neural Information Processing Systems 32, pp. 14014–14024. Curran Associates, Inc., Vancouver, Canada. https://papers.nips.cc/paper/9551-are-sixteen-heads-really-better-than-one
Nair V, Hinton GE (2010) Rectified linear units improve restricted boltzmann machines. In: Proceedings of the 27th International Conference on International Conference on Machine Learning, pp. 807–814. Madison, USA. https://dl.acm.org/doi/10.5555/3104322.3104425
Nguyen TQ, Salazar J (2019) Transformers without tears: improving the normalization of self-attention. arXiv:1910.05895
Och FJ, Ney H (2000) Improved statistical alignment models. In: Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, pp. 440–447. Association for Computational Linguistics, Hong Kong, China. https://doi.org/10.3115/1075218.1075274. https://aclweb.org/anthology/P00-1056/
Ott M, Edunov S, Baevski A, Fan A, Gross S, Ng N, Grangier D, Auli M (2019) fairseq: A fast, extensible toolkit for sequence modeling. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pp. 48–53. Association for Computational Linguistics, Minneapolis, USA. https://doi.org/10.18653/v1/N19-4009. https://aclweb.org/anthology/N19-4009/
Papineni K, Roukos S, Ward T, Zhu WJ (2002) BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311–318. Association for Computational Linguistics, Philadelphia, USA. https://doi.org/10.3115/1073083.1073135. https://aclweb.org/anthology/P02-1040/
Popović M (2015) chrF: character n-gram F-score for automatic MT evaluation. In: Proceedings of the Tenth Workshop on Statistical Machine Translation, pp. 392–395. Association for Computational Linguistics, Lisbon, Portugal. https://doi.org/10.18653/v1/W15-3049. https://aclweb.org/anthology/W15-3049/
Post M (2018) A call for clarity in reporting BLEU scores. In: Proceedings of the Third Conference on Machine Translation: Research Papers, pp. 186–191. Association for Computational Linguistics, Brussels, Belgium. https://doi.org/10.18653/v1/W18-6319. https://aclweb.org/anthology/W18-6319/
Riza H, Purwoadi M, Uliniansyah T, Ti AA, Aljunied SM, Mai LC, Thang VT, Thai NP, Chea V, Sam S, et al (2016) Introduction of the Asian Language Treebank. In: 2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA), pp. 1–6. Institute of Electrical and Electronics Engineers, Bali, Indonesia. https://doi.org/10.1109/ICSDA.2016.7918974. https://ieeexplore.ieee.org/document/7918974
Schwenk H (2008) Investigations on large-scale lightly-supervised training for statistical machine translation. In: Proceedings of the International Workshop on Spoken Language Translation, pp. 182–189. Honolulu, USA. https://www.isca-speech.org/archive/iwslt_08/papers/slt8_182.pdf
Schwenk H, Chaudhary V, Sun S, Gong H, Guzmán F (2019) WikiMatrix: Mining 135M parallel sentences in 1620 language pairs from Wikipedia
Sennrich R, Haddow B, Birch A (2016a) Improving neural machine translation models with monolingual data. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 86–96. Association for Computational Linguistics, Berlin, Germany. https://doi.org/10.18653/v1/P16-1009. https://aclweb.org/anthology/P16-1009
Sennrich R, Haddow B, Birch A (2016b) Neural machine translation of rare words with subword units. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1715–1725. Association for Computational Linguistics, Berlin, Germany. https://doi.org/10.18653/v1/P16-1162. https://aclweb.org/anthology/P16-1162
Sennrich R, Zhang B (2019) Revisiting low-resource neural machine translation: A case study. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 211–221. Association for Computational Linguistics, Florence, Italy. https://doi.org/10.18653/v1/P19-1021. https://aclweb.org/anthology/P19-1021
Søgaard A, Ruder S, Vulić I (2018) On the limitations of unsupervised bilingual dictionary induction. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 778–788. Association for Computational Linguistics, Melbourne, Australia. https://doi.org/10.18653/v1/P18-1072. https://aclweb.org/anthology/P18-1072
Stahlberg F, Cross J, Stoyanov V (2018) Simple fusion: Return of the language model. In: Proceedings of the Third Conference on Machine Translation: Research Papers, pp. 204–211. Association for Computational Linguistics, Brussels, Belgium. https://doi.org/10.18653/v1/W18-6321. https://aclweb.org/anthology/W18-6321
Sutskever I, Vinyals O, Le QV (2014) Sequence to sequence learning with neural networks. In: Proceedings of the 27th Neural Information Processing Systems Conference (NIPS), pp. 3104–3112. Curran Associates, Inc., Montréal, Canada. http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf
Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł, Polosukhin I (2017) Attention is all you need. In: Proceedings of the 30th Neural Information Processing Systems Conference (NIPS), pp. 5998–6008. Curran Associates, Inc., Long Beach, USA. http://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf
Voita E, Talbot D, Moiseev F, Sennrich R, Titov I (2019) Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 5797–5808. Association for Computational Linguistics, Florence, Italy. https://doi.org/10.18653/v1/P19-1580. https://aclweb.org/anthology/P19-1580/
Wang Q, Li B, Xiao T, Zhu J, Li C, Wong DF, Chao LS (2019) Learning deep transformer models for machine translation. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 1810–1822. Association for Computational Linguistics, Florence, Italy. https://doi.org/10.18653/v1/P19-1176. https://aclweb.org/anthology/P19-1176
Zens R, Och FJ, Ney H (2002) Phrase-based statistical machine translation. In: Annual Conference on Artificial Intelligence, pp. 18–32. Springer
A part of this work was conducted under the program "Research and Development of Enhanced Multilingual and Multipurpose Speech Translation System" of the Ministry of Internal Affairs and Communications (MIC), Japan. Benjamin Marie was partly supported by JSPS KAKENHI Grant Number 20K19879 and the tenure-track researcher start-up fund in NICT. Atsushi Fujita and Masao Utiyama were partly supported by JSPS KAKENHI Grant Number 19H05660.
ASTREC, NICT, 3-5 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0289, Japan
Raphael Rubino, Benjamin Marie, Raj Dabre, Atsushi Fujita, Masao Utiyama & Eiichiro Sumita
Correspondence to Raphael Rubino.
Electronic supplementary material 1 (ODS 140 kb)
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Rubino, R., Marie, B., Dabre, R. et al. Extremely low-resource neural machine translation for Asian languages. Machine Translation 34, 347–382 (2020). https://doi.org/10.1007/s10590-020-09258-6
Issue Date: December 2020
Neural machine translation
Low-resource
Asian language
Hyper-parameter tuning
Assessing parameter identifiability in compartmental dynamic models using a computational approach: application to infectious disease transmission models
Kimberlyn Roosa1 and
Gerardo Chowell1, 2
Mathematical modeling is now frequently used in outbreak investigations to understand underlying mechanisms of infectious disease dynamics, assess patterns in epidemiological data, and forecast the trajectory of epidemics. However, the successful application of mathematical models to guide public health interventions lies in the ability to reliably estimate model parameters and their corresponding uncertainty. Here, we present and illustrate a simple computational method for assessing parameter identifiability in compartmental epidemic models.
We describe a parametric bootstrap approach to generate simulated data from dynamical systems to quantify parameter uncertainty and identifiability. We calculate confidence intervals and mean squared error of estimated parameter distributions to assess parameter identifiability. To demonstrate this approach, we begin with a low-complexity SEIR model and work through examples of increasingly more complex compartmental models that correspond with applications to pandemic influenza, Ebola, and Zika.
Overall, parameter identifiability issues are more likely to arise with more complex models (based on the number of equations/states and parameters). As the number of parameters being jointly estimated increases, the uncertainty surrounding estimated parameters tends to increase, on average, as well. We found that R0 is often robust to parameter identifiability issues affecting individual parameters in the model. Despite large confidence intervals and higher mean squared error of other individual model parameters, R0 can still be estimated with precision and accuracy.
Because public health policies can be influenced by results of mathematical modeling studies, it is important to conduct parameter identifiability analyses prior to fitting the models to available data and to report parameter estimates with quantified uncertainty. The method described is helpful in these regards and enhances the essential toolkit for conducting model-based inferences using compartmental dynamic models.
Compartmental models
Parameter identifiability
Uncertainty quantification
Epidemic models
Structural parameter identifiability
Practical parameter identifiability
Mathematical modeling is commonly applied in outbreak investigations for analyzing mechanisms behind infectious disease transmission and explaining patterns in epidemiological data [1, 2]. Models also provide a quantitative framework for assessing intervention and control strategies and generating epidemic forecasts in real time. However, the successful application of mathematical modeling to investigate epidemics depends upon our ability to reliably estimate key transmission and severity parameters, which are critical for guiding public health interventions. In particular, parameter estimates for a given system are subject to two major sources of uncertainty: noise in the data and assumptions built in the model [3]. Ignoring this uncertainty can result in misleading inferences and potentially incorrect public health policy decisions.
Appropriate and flexible approaches for estimating parameters from data, evaluating parameter and model uncertainty, and assessing goodness of fit are gaining increasing attention [4–8]. For instance, model parameters can be estimated by connecting models with observed data through various methods, including least-squares fitting [9], maximum likelihood estimation [10, 11], and approximate Bayesian computation [12, 13]. An important, yet often overlooked step in estimating parameters is examining parameter identifiability – whether a set of parameters can be uniquely estimated from a given model and data set [14]. Lack of identifiability, or non-identifiability, occurs when multiple sets of parameter values yield a very similar model fit to the data. Non-identifiability may be attributed to the model structure (structural identifiability) or due to the lack of information in a given data set (practical identifiability), which could be associated with the number of observations, spatial-temporal resolution (e.g., daily versus weekly data), and observation error. A parameter set is considered structurally identifiable if any set of parameter values can be uniquely mapped to a model output [15]. As such, structural identifiability is the first step in understanding which model parameters can be estimated from data of certain state(s) of the system at a specific spatial-temporal resolution. Structurally identifiable parameters may still be non-identifiable in practice due to a lack of information in available data. The so-called "practical identifiability" considers real-world data issues: amount of noise in the data and sampling frequency (e.g., data collection process) [14].
Several methods have been proposed to examine structural identifiability of a model without the need of experimental data; these include Taylor series methods [15, 16], differential algebra-based methods [17, 18], and other mathematical approaches [15, 19]. These methods tend to work better in the context of simple rather than complex models. Model complexity, in general, is a function of the number of parameters necessary to characterize the states of the system and the spectrum of dynamics that can be recovered from the model. Model complexity affects the ability to reliably parameterize the model given the available data [3], so there is a need for flexible, mathematically-sound approaches to address parameter identifiability in models of varying complexity. Here, we present a general computational method for quantifying parameter uncertainty and assessing parameter identifiability through a parametric bootstrap approach. We demonstrate this approach through examples of compartmental epidemic models with variable complexity, which have been previously employed to study the transmission dynamics and control of various infectious diseases including pandemic influenza, Ebola, and Zika.
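As a preview of the approach, the skeleton below expresses the parametric bootstrap in Python under two assumptions made here purely for illustration: a Poisson error structure around the best-fit incidence curve and least-squares refitting of each bootstrap realization. The model-specific pieces, `simulate_mean` and `fit`, are defined separately for each example model (see the SEIR sketch further below).

```python
import numpy as np

# Generic parametric bootstrap skeleton for quantifying parameter uncertainty and
# assessing practical identifiability. 'simulate_mean(theta)' returns the model-
# predicted incidence curve and 'fit(data)' returns parameter estimates; both are
# model-specific. A Poisson error structure is assumed here as one common choice
# for count data.

def parametric_bootstrap(data, simulate_mean, fit, n_boot=200, seed=0):
    rng = np.random.default_rng(seed)
    theta_hat = fit(data)                                    # 1) fit model to observed data
    mean_curve = np.maximum(simulate_mean(theta_hat), 0.0)   # 2) best-fit incidence curve
    boot = []
    for _ in range(n_boot):                                  # 3) refit to simulated datasets
        boot.append(fit(rng.poisson(mean_curve)))
    boot = np.asarray(boot)
    ci = np.percentile(boot, [2.5, 97.5], axis=0)            # 4) empirical 95% intervals
    mse = np.mean((boot - theta_hat) ** 2, axis=0)           #    and mean squared error
    return theta_hat, boot, ci, mse
```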
Compartmental models are widely used in epidemiological literature as a population-level modeling approach that subdivides the population into classes according to their epidemiological status [1, 20]. Compartmental dynamic models are specified by a set of ordinary differential equations and parameters that track the temporal progression of the number of individuals in each of the states of the system [3, 21]. Dynamic models follow the general form:
$$ {\dot{x}}_1(t)={f}_1\left({x}_1,{x}_2,\dots, {x}_h,\Theta \right) $$
$$ {\dot{x}}_h(t)={f}_h\left({x}_1,{x}_2,\dots, {x}_h,\Theta \right) $$
where \( {\dot{x}}_i \) is the rate of change of the i-th system state (i = 1, 2, …, h) and Θ = (θ1, θ2, …, θm) is the set of model parameters.
The basic reproductive number (denoted R0) is often a parameter of interest in epidemiological studies, as it is a measure of potential for a given infectious disease to spread within a population. Mathematically, it is defined as the average number of secondary infections produced by a single index case in a completely susceptible population [22]. R0 represents an epidemic threshold for which values of R0 < 1 indicate a lack of disease spread, and values of R0 > 1 are consistent with epidemic spread. In the midst of an epidemic, R0 estimates provide insight to the intensity of interventions required to achieve control [23]. R0 is a composite parameter value, as it depends on multiple model parameters (e.g., transmission rate, infectious period), and while R0 is not directly estimated from the model, it can be calculated by relying on the uncertainty of individual parameters.
A simple and commonly utilized compartmental model is the SEIR (susceptible-exposed-infectious-removed) model [1]. We apply our methodology to this low-complexity model and work through increasingly more complex models as we demonstrate the approach for assessing parameter identifiability.
Model 1: Simple SEIR (pandemic influenza)
We analyze a simple compartmental transmission model that consists of 4 parameters and 4 states (Fig. 1). We apply this model to the context of the 1918 influenza pandemic in San Francisco, California [23]. Individuals in the model are classified as susceptible (S), exposed (E), infectious (I), or recovered (R) [1]. We assume constant population size, so S + E + I + R = N, where N is the total population size. Susceptible individuals progress to the exposed class at rate βI(t)/N, where β is the transmission rate, and I(t)/N is the probability of random contact with an infectious individual. Exposed, or latent, individuals move to the infectious class at rate k, where 1/k is the average latent period. Infectious individuals recover (move to recovered class) at rate γ, where 1/γ corresponds to the average infectious period.
Model 1: Simple SEIR – Population is divided into 4 classes: susceptible (S), exposed (E), infectious (I), and recovered/removed (R). Class C represents the auxiliary variable C (t) and tracks the cumulative number of infectious individuals from the start of the outbreak. This is presented as a dashed line, as it is not a state of the system of equations, but simply a class to track the cumulative incidence cases; meaning, individuals from the population are not moving to class C. Parameter(s) above arrows denote the rate individuals move between classes. Parameter descriptions and values are found in Table 1
The transmission process can be modeled using the following system of ordinary differential equations (where the dot denotes time derivative):
$$ \left\{\begin{array}{c}\dot{S}(t)=-\beta S(t)I(t)/N\ \\ {}\dot{E}(t)=\beta S(t)I(t)/N- kE(t)\ \\ {}\dot{I}(t)= kE(t)-\gamma I(t)\ \\ {}\dot{R}(t)=\gamma I(t)\ \\ {}\dot{C}(t)= kE(t)\ \end{array}\right. $$
The auxiliary variable C(t) tracks the cumulative number of infectious individuals from the start of the outbreak. It is not a state of the system of equations, but simply a class to track the cumulative incidence cases; meaning, individuals from the population are not moving to class C. The number of new infections, or the incidence curve, is given by \( \dot{C}(t) \).
For this model, there is only one class contributing to new infections (I), so R0, or the basic reproductive number, is simply the product of the transmission rate and the average infectious period: R0 = \( \frac{\beta }{\gamma } \) .
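The following sketch implements Model 1 and runs a small bootstrap of the same form as the skeleton above; the population size, initial conditions, and parameter values are placeholders for illustration, not the values of Table 1.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

# Model 1 (simple SEIR): daily incidence from the auxiliary variable C(t),
# joint least-squares estimation of beta and gamma, and bootstrap samples of
# the composite R0 = beta / gamma. All numerical values are illustrative.

N, E0, I0, k = 100000, 5.0, 1.0, 1.0 / 1.9   # fixed for this sketch
t = np.arange(0, 61)                          # 60 daily observations

def seir_incidence(theta):
    beta, gamma = theta
    def rhs(y, _t):
        S, E, I, R, C = y
        return [-beta*S*I/N, beta*S*I/N - k*E, k*E - gamma*I, gamma*I, k*E]
    y0 = [N - E0 - I0, E0, I0, 0.0, I0]
    C = odeint(rhs, y0, t)[:, 4]
    return np.maximum(np.diff(C), 0.0)        # daily new cases

def fit_seir(data, guess=(0.5, 0.25)):
    return least_squares(lambda th: seir_incidence(th) - data,
                         guess, bounds=(1e-6, 10.0)).x

rng = np.random.default_rng(1)
data = rng.poisson(seir_incidence((0.6, 1/4.1)))   # synthetic "observed" incidence
theta_hat = fit_seir(data)
boot = np.array([fit_seir(rng.poisson(seir_incidence(theta_hat))) for _ in range(200)])
r0_samples = boot[:, 0] / boot[:, 1]               # R0 = beta / gamma
print("beta, gamma estimates:", theta_hat)
print("R0 95% CI:", np.percentile(r0_samples, [2.5, 97.5]))
```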
Model 2: SEIR with asymptomatic and hospitalized/diagnosed and reported
We use a simplified version of a complex SEIR model that consists of 8 parameters and 6 system states (Fig. 2). This model was originally developed for studying the transmission dynamics of the 1918 influenza pandemic in Geneva, Switzerland [24]. In the model, individuals are classified as susceptible (S), exposed (E), clinically ill and infectious (I), asymptomatic and partially infectious (A), hospitalized/diagnosed and reported (J), or recovered (R). Hospitalized individuals are assumed to be as infectious as individuals in the I class. Again, constant population size is assumed, so S + E + I + A + J + R = N. Susceptible individuals progress to the exposed class at rate β[I(t) + J(t) + qA(t)]/N, where β is the transmission rate, and q is a reduction factor of transmissibility in the asymptomatic class (0 < q < 1). A proportion, ρ, of exposed/latent individuals (0 < ρ < 1) become clinically infectious at rate k, while the rest (1- ρ) become partially infectious and asymptomatic at the same rate k. Asymptomatic cases progress to the recovered class at rate γ1. Clinically ill and infectious individuals are diagnosed at a rate α or recover without being diagnosed at rate γ1. Diagnosed individuals recover at rate γ2.
Model 2: SEIR with asymptomatic and hospitalized/diagnosed and reported – Population is divided into 6 classes: susceptible (S), exposed (E), clinically ill and infectious (I), asymptomatic and partially infectious (A), hospitalized/diagnosed and reported (J), and recovered (R). Class C represents the auxiliary variable C(t) and tracks the cumulative number of newly infectious individuals. Parameter(s) above (or to the left of) arrows denote the rate individuals move between classes. Parameter descriptions and values are found in Table 2
The transmission process can be modeled using the following system of ordinary differential equations:
$$ \left\{\begin{array}{c}\dot{S}(t)=-\beta S(t)\left[I(t)+J(t)+ qA(t)\right]/N\ \\ {}\dot{E}(t)=\beta S(t)\left[I(t)+J(t)+ qA(t)\right]/N- kE(t)\\ {}\dot{A}(t)=k\left(1-\rho \right)E(t)-{\gamma}_1A(t)\\ {}\dot{I}(t)= k\rho E(t)-\left(\alpha +{\gamma}_1\right)I(t)\\ {}\dot{J}(t)=\alpha I(t)-{\gamma}_2J(t)\\ {}\dot{R}(t)={\gamma}_1\left(A(t)+I(t)\right)+{\gamma}_2J(t)\\ {}\dot{C}(t)=\alpha I(t)\end{array}\right. $$
In the above system, C(t) represents the cumulative number of diagnosed/reported cases from the start of the outbreak, and \( \dot{C}(t) \) is the incidence curve of diagnosed cases.
For this model, there are three classes contributing to new infections (A, I, J), so the reproductive number is the sum of the contributions from each of these classes: R0 = R0A + R0I + R0J, where:
R0A = (fraction of asymptomatic cases) x (transmission rate) x (relative transmissibility from asymptomatic cases) x (mean time in asymptomatic class)
R0I = (fraction of symptomatic cases) x (transmission rate) x (mean time in clinically infectious class)
R0J = (fraction of symptomatic cases that are hospitalized) x (transmission rate) x (mean time in hospital) [24]
Here, \( {R}_0=\beta \left[\left(1-\rho \right)\left(\frac{q}{\gamma_1}\right)+\rho \left(\frac{1}{\gamma_1+\alpha }+\frac{\alpha }{\left({\gamma}_1+\alpha \right){\gamma}_2}\right)\right] \).
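A small helper that assembles this composite R0 from the individual parameters; the numerical values below are placeholders for illustration, not those of Table 2.

```python
def r0_model2(beta, q, rho, alpha, gamma1, gamma2):
    """Composite R0 for Model 2: contributions from the A, I, and J classes."""
    r0_a = beta * (1 - rho) * q / gamma1
    r0_i = beta * rho / (gamma1 + alpha)
    r0_j = beta * rho * alpha / ((gamma1 + alpha) * gamma2)
    return r0_a + r0_i + r0_j

# Placeholder values for illustration only:
print(r0_model2(beta=0.7, q=0.5, rho=0.6, alpha=0.3, gamma1=0.25, gamma2=0.4))
```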
Model 3: The Legrand et al. model (Ebola)
We analyze an Ebola transmission model [25] comprised of 15 parameters and 6 states (Fig. 3). This model subdivides the infectious population into three stages to account for transmission in three settings: community, hospital, and unsafe burial ceremonies. Individuals are classified as susceptible (S), exposed (E), infectious in the community (I), infectious in the hospital (H), infectious after death at funeral (F), or recovered/removed (R). Constant population size is assumed, so S + E + I + H + F + R = N. Susceptible individuals progress to the exposed class at rate (βII(t) + βHH(t) + βFF(t))/N where βI, βH, and βF represent the transmission rates in the community, hospital, and at funerals, respectively. Exposed individuals become infectious at rate α. A proportion, 0 < θ < 1, of infectious individuals are hospitalized at rate γh. Of the proportion of infectious individuals that are not hospitalized (1-θ), a proportion, 0 < δ1 < 1, move to the funeral class at rate γd, and the rest (1- δ1) move to the recovered/removed class at rate γi. A proportion, 0 < δ2 < 1, of hospitalized individuals progress to funeral class at rate \( {\upgamma}_{dh}=\frac{1}{\frac{1}{\upgamma_d}-\frac{1}{\upgamma_h}} \). The remaining proportion (1- δ2) are recovered/removed at rate \( {\upgamma}_{ih}=\frac{1}{\frac{1}{\upgamma_i}-\frac{1}{\upgamma_h}} \). δ1 and δ2 are calculated such that δ represents the case fatality ratio (Table 3). Individuals in the funeral class are removed at rate γf.
Model 3: The Legrand et al. Model – Population is divided into 6 classes: susceptible (S), exposed (E), infectious in the community (I), infectious in the hospital (H), infectious after death at funeral (F), or recovered/removed (R). Class C represents the auxiliary variable C(t) and tracks the cumulative number of newly infectious individuals. Parameter(s) above arrows denote the rate that individuals move between classes. Parameter descriptions and values are found in Table 3
The transmission process is modeled by the following set of ordinary differential equations:
$$ \left\{\begin{array}{c}\dot{S}(t)=-S(t)\left[{\upbeta}_II(t)+{\beta}_HH(t)+{\beta}_FF(t)\right]/N\ \\ {}\dot{E}(t)=S(t)\left[{\beta}_II(t)+{\beta}_HH(t)+{\beta}_FF(t)\right]/N-\alpha E(t)\\ {}\dot{I}(t)=\alpha E(t)-\left[\uptheta {\gamma}_h+{\delta}_1\left(1-\uptheta \right){\gamma}_d+\left(1-{\delta}_1\right)\left(1-\uptheta \right){\gamma}_i\right]I(t)\\ {}\dot{H}(t)=\uptheta {\gamma}_hI(t)-\left[\left(1-{\delta}_2\right){\gamma}_{ih}+{\delta}_2{\gamma}_{dh}\right]H(t)\\ {}\dot{F}(t)={\delta}_1\left(1-\uptheta \right){\gamma}_dI(t)+{\delta}_2{\gamma}_{dh}H(t)-{\gamma}_fF(t)\\ {}\dot{R}(t)=\left(1-{\delta}_1\right)\left(1-\uptheta \right){\gamma}_iI(t)+\left(1-{\delta}_2\right){\gamma}_{ih}H(t)+{\gamma}_fF(t)\\ {}\dot{C}(t)=\alpha E(t)\end{array}\right. $$
Here, C(t) represents the cumulative number of all infectious individuals, and \( \dot{C}(t) \) is the incidence curve for infectious cases.
The basic reproductive number is the sum of the contributions from each of the infectious classes (I, H, F): R0 = R0I + R0H + R0F, where:
R0I = (transmission rate in the community) x (mean time in infectious class)
R0H = (fraction of hospitalized cases) x (transmission rate in the hospital) x (mean time in hospital class)
R0F = (fraction of cases that have traditional burial ceremonies) x (transmission rate at funerals) x (mean time in funeral class)
Here, \( {R}_0=\frac{\beta_I}{\Delta }+\frac{\frac{\gamma_h\theta }{\gamma_{dh}{\delta}_2+{\gamma}_{ih}\left(1-{\delta}_2\right)}{\beta}_H}{\Delta }+\frac{\gamma_d{\delta}_1\left(1-\theta \right){\beta}_F}{\gamma_f\Delta }+\frac{\gamma_{dh}{\gamma}_h{\delta}_2\theta {\beta}_F}{\gamma_f\left({\gamma}_{ih}\left(1-{\delta}_2\right)+{\gamma}_{dh}{\delta}_2\right)\Delta }, \)
where ∆ = γhθ + γd(1 − θ)δ1 + γi(1 − θ)(1 − δ1) [25].
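The corresponding calculation for the Legrand et al. model, including the derived rates γdh and γih defined above; parameter values are placeholders for illustration, not those of Table 3.

```python
def r0_legrand(beta_i, beta_h, beta_f, theta, delta1, delta2,
               gamma_h, gamma_d, gamma_i, gamma_f):
    """Composite R0 for Model 3: community (I), hospital (H), and funeral (F) terms."""
    gamma_dh = 1.0 / (1.0 / gamma_d - 1.0 / gamma_h)   # requires gamma_d < gamma_h
    gamma_ih = 1.0 / (1.0 / gamma_i - 1.0 / gamma_h)   # requires gamma_i < gamma_h
    delta = gamma_h*theta + gamma_d*(1-theta)*delta1 + gamma_i*(1-theta)*(1-delta1)
    hosp_exit = gamma_dh*delta2 + gamma_ih*(1-delta2)
    r0_i = beta_i / delta
    r0_h = (gamma_h * theta / hosp_exit) * beta_h / delta
    r0_f = (gamma_d*delta1*(1-theta)*beta_f) / (gamma_f*delta) \
         + (gamma_dh*gamma_h*delta2*theta*beta_f) / (gamma_f*hosp_exit*delta)
    return r0_i + r0_h + r0_f

# Placeholder values for illustration only:
print(r0_legrand(beta_i=0.10, beta_h=0.05, beta_f=0.20, theta=0.3,
                 delta1=0.7, delta2=0.7, gamma_h=1/5, gamma_d=1/9.6,
                 gamma_i=1/10, gamma_f=1/2))
```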
Model 4: Zika model with human and mosquito populations
The last example is a compartmental model of Zika transmission dynamics that includes 16 parameters and 9 states and incorporates transmission between two populations – humans and vectors (Fig. 4). This model was designed to investigate the impact of both mosquito-borne and sexually transmitted (human-to-human) routes of infection for cases of Zika virus [26]. In the human population, individuals are classified as susceptible (Sh), asymptomatically infected (Ah), exposed (Eh), symptomatically infectious (Ih1), convalescent (Ih2), or recovered (Rh). The mosquito, or vector, population is broken into susceptible (Sv), exposed (Ev), and infectious (Iv) classes. Note that the subscript 'h' is used for humans and 'v' is used for vectors. Constant population size is assumed in both populations, so Sh + Ah + Eh + Ih1 + Ih2 + Rh = Nh and Sv + Ev + Iv = Nv.
Model 4: Zika Model with human and mosquito populations – The human population (subscript h) is divided into 5 classes: susceptible (Sh), asymptomatically infected (Ah), exposed (Eh), symptomatically infectious (Ih1), convalescent (Ih2), or recovered (Rh). Class C represents the auxiliary variable C(t) and tracks the cumulative number of newly infectious individuals. The mosquito, or vector, population (subscript v; outlined in dark blue) is divided into 3 classes: susceptible (Sv), exposed (Ev), and infectious (Iv) classes. Parameter(s) above arrows denote the rate individuals/vectors move between classes. Parameter descriptions and values are found in Table 4
A proportion 0 < θ < 1 of susceptible humans move to the exposed class at rate ab(Iv(t)/Nh) + β[(αEh(t) + Ih1(t) + τIh2(t))/Nh], where a is the mosquito biting rate, b is the transmission probability from an infectious mosquito to a susceptible human, β is the transmission rate between humans, α is the relative human-to-human transmissibility of exposed humans compared to symptomatically infectious humans, and τ is the relative transmissibility of convalescent humans compared to symptomatically infectious humans. Exposed individuals progress to the symptomatically infectious class at rate κh and then progress to the convalescent stage at rate γh1. Convalescent individuals recover at rate γh2. The remaining proportion of susceptible individuals (1 − θ) become asymptomatically infected at the same rate, ab(Iv(t)/Nh) + β[(αEh(t) + Ih1(t) + τIh2(t))/Nh]. Asymptomatic humans recover at rate γh and do not contribute to new infections in this model.
Susceptible mosquitos move to the exposed class at rate ac[(ρEh(t) + Ih1(t))/Nh], where c is the transmission probability from a symptomatically infectious human to a susceptible mosquito, and ρ is the relative human-to-mosquito transmission probability of exposed humans compared to symptomatically infected humans. Exposed mosquitos become infectious at rate κv. Mosquitos also leave the population at rate μv, where 1/μv is the mosquito lifespan.
The transmission process, including both populations, is represented by the set of differential equations below:
$$ \left\{\begin{array}{l}{\dot{S}}_h(t)=- ab\left({I}_v(t)/{N}_h\right){S}_h(t)-\beta \left[\left(\alpha {E}_h(t)+{I}_{h1}(t)+\tau {I}_{h2}(t)\right)/{N}_h\right]{S}_h(t)\\ {}{\dot{E}}_h(t)=\theta \left[ ab\left({I}_v(t)/{N}_h\right){S}_h(t)+\beta \left[\left(\alpha {E}_h(t)+{I}_{h1}(t)+\tau {I}_{h2}(t)\right)/{N}_h\right]{S}_h(t)\right]-{\kappa}_h{E}_h(t)\\ {}{\dot{I}}_{h1}(t)={\kappa}_h{E}_h(t)-{\gamma}_{h1}{I}_{h1}(t)\\ {}{\dot{I}}_{h2}(t)={\gamma}_{h1}{I}_{h1}(t)-{\gamma}_{h2}{I}_{h2}(t)\\ {}{\dot{A}}_h(t)=\left(1-\theta \right)\left[ ab\left({I}_v(t)/{N}_h\right){S}_h(t)+\beta \left[\left(\alpha {E}_h(t)+{I}_{h1}(t)+\tau {I}_{h2}(t)\right)/{N}_h\right]{S}_h(t)\right]-{\gamma}_h{A}_h(t)\\ {}{\dot{R}}_h(t)={\gamma}_{h2}{I}_{h2}(t)+{\gamma}_h{A}_h(t)\\ {}{\dot{S}}_v(t)={\mu}_v{N}_v- ac\left[\left(\rho {E}_h(t)+{I}_{h1}(t)\right)/{N}_h\right]{S}_v(t)-{\mu}_v{S}_v(t)\\ {}{\dot{E}}_v(t)= ac\left[\left(\rho {E}_h(t)+{I}_{h1}(t)\right)/{N}_h\right]{S}_v(t)-\left({\kappa}_v+{\mu}_v\right){E}_v(t)\\ {}{\dot{I}}_v(t)={\kappa}_v{E}_v(t)-{\mu}_v{I}_v(t)\\ {}\dot{C}(t)={\kappa}_h{E}_h(t)\end{array}\right. $$
C(t) represents the cumulative number of symptomatically infectious human cases, and \( \dot{C}(t) \) contains the incidence curve for symptomatic human cases.
For this example, we have two transmission processes to consider when calculating R0: sexual transmission (Rhh) and mosquito-borne (Rhv). The human population has three classes contributing to new infections: exposed, symptomatically infectious, and convalescent, so:
$$ {R}_{hh}=\frac{\alpha \theta \beta}{\kappa_h}+\frac{\theta \beta}{\gamma_{h1}}+\frac{\tau \theta \beta}{\gamma_{h2}} $$
The mosquito population only has one infectious class (Iv); the reproductive number is given by:
$$ {R}_{hv}=\sqrt{\left[\frac{a^2 b\rho cm\theta}{\kappa_h{\mu}_v}+\frac{a^2 b cm\theta}{\gamma_{h1}{\mu}_v}\right]\ast \frac{\kappa_v}{\kappa_v+{\mu}_v}}. $$
The overall basic reproductive number, considering both transmission routes, is given by the following equation [26]:
$$ {R}_0=\frac{R_{hh}+\sqrt{R_{hh}^2+4{R}_{hv}^2}}{2} $$
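A short Python sketch of these three expressions is given below. The quantity m in Rhv is not defined in this excerpt; here it is treated as the vector-to-human population ratio Nv/Nh, which is an assumption, and all numerical values are illustrative placeholders rather than the Table 4 values.

```python
import math

# Sketch: reproduction numbers for the Zika model, following the
# expressions for R_hh, R_hv and R_0 above. Values are placeholders.
# m is treated as the vector-to-human ratio N_v/N_h (an assumption).
def zika_R0(a, b, c, beta, alpha, tau, theta, rho,
            kappa_h, kappa_v, gamma_h1, gamma_h2, mu_v, m):
    # Sexual (human-to-human) route: exposed, symptomatic, convalescent classes
    R_hh = alpha * theta * beta / kappa_h \
         + theta * beta / gamma_h1 \
         + tau * theta * beta / gamma_h2
    # Mosquito-borne route
    R_hv = math.sqrt((a**2 * b * rho * c * m * theta / (kappa_h * mu_v)
                      + a**2 * b * c * m * theta / (gamma_h1 * mu_v))
                     * kappa_v / (kappa_v + mu_v))
    R0 = (R_hh + math.sqrt(R_hh**2 + 4 * R_hv**2)) / 2
    return R_hh, R_hv, R0

print(zika_R0(a=0.5, b=0.4, c=0.5, beta=0.05, alpha=0.6, tau=0.3,
              theta=0.18, rho=0.35, kappa_h=1/5.9, kappa_v=1/10,
              gamma_h1=1/5, gamma_h2=1/20, mu_v=1/14, m=5))
```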
For each model we simulate 200 epidemic datasets (directly from the corresponding set of ordinary differential equations) with Poisson error structure using the daily time series data of case incidence, or total number of new cases daily. Parameters for each model are set at values based on their corresponding application: the 1918 influenza pandemic in San Francisco (Model 1) [23], 1918 pandemic influenza in Geneva (Model 2) [24], 1995 Ebola in Congo (Model 3) [25], and 2016 Zika in the Americas (Model 4) [26]. As explained below, the simulated data are generated using a bootstrap approach, and we then use these data to study parameter identifiability within a realistic parameter space for each model. Parameter descriptions and their corresponding values for each model are given in Tables 1, 2, 3 and 4.
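As an illustration of this data-generating step, the sketch below solves the simple SEIR model (Model 1) and draws Poisson-distributed daily incidence around the model's incidence curve Ċ(t). The parameter values and variable names are ours and are placeholders, not the 1918 San Francisco values of Table 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: one simulated incidence dataset for the simple SEIR model (Model 1)
# with Poisson error. Parameter values are illustrative placeholders.
beta, kappa, gamma, N = 0.6, 0.5, 0.25, 5.0e5
S0, E0, I0, R0_, C0 = N - 1, 0.0, 1.0, 0.0, 1.0   # R0_ is the recovered class, not the reproduction number

def seir(t, y):
    S, E, I, R, C = y
    new_inf = beta * S * I / N                     # force of infection times S
    return [-new_inf,
            new_inf - kappa * E,
            kappa * E - gamma * I,
            gamma * I,
            kappa * E]                              # C tracks cumulative new infectious cases

days = np.arange(0, 161)
sol = solve_ivp(seir, (0, days[-1]), [S0, E0, I0, R0_, C0], t_eval=days)
daily_incidence = np.diff(sol.y[4])                 # difference of C between consecutive days
rng = np.random.default_rng(0)
noisy_incidence = rng.poisson(np.maximum(daily_incidence, 0))
```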
Parameter descriptions and values for Model 1
β: Transmission rate (per day)
1/κ: Mean latent period (days)
1/γ: Mean infectious period (days)
R0: Basic reproductive number
Parameter values are consistent with pandemic influenza in San Francisco, 1918 [23]
Parameter descriptions and values for Model 2
1/κ: Latent period (days)
γ1: Recovery rate for asymptomatic individuals (1/days)
γ2: Recovery rate for infectious individuals recovering without hospitalization (1/days)
α: Rate of diagnosis for hospitalized individuals (days)
ρ: Proportion of latent individuals progressing to infectious class (vs. asymptomatic class)
q: Reduction factor in transmissibility for asymptomatic cases
Parameter values are consistent with pandemic influenza in Geneva, 1918 [24]
Parameter descriptions and values for Model 3
βI: Transmission rate in the community (per day)
βH: Transmission rate in the hospital (per day)
βF: Transmission rate at traditional funerals (per day)
1/α: Incubation period (days)
θ: Proportion of cases hospitalized
1/γh: Time from symptom onset to hospitalization (days)
1/γd: Time from symptom onset to death (days)
1/γi: Time from symptom onset to the end of infectiousness for survivors (days)
δ: Case fatality ratio
δ1: \( {\delta}_1=\frac{\delta {\gamma}_i}{\delta {\gamma}_i+\left(1-\delta \right){\gamma}_d} \)
δ2: \( {\delta}_2=\frac{\delta {\gamma}_{ih}}{\delta {\gamma}_{ih}+\left(1-\delta \right){\gamma}_{dh}} \)
1/γih: Infectious period for survivors (days)
1/γdh: Time from hospitalization to death (days)
1/γf: Time from death to funeral (days)
Parameter values are consistent with the 1995 Ebola outbreak in the Democratic Republic of Congo [25]
Parameter descriptions and values for Model 4
Nh: Population size (humans)
Nv: Population size (mosquitos)
a: Mosquito biting rate (number of bites per mosquito per day)
b: Probability of infection from an infectious mosquito to a susceptible human (per bite)
β: Transmission rate from symptomatically infected humans to susceptible humans (per day)
α: Relative human-to-human transmissibility of exposed humans to symptomatic humans
τ: Relative human-to-human transmissibility of convalescent to symptomatic humans
θ: Proportion of symptomatic infections
1/κh: Intrinsic incubation period in humans (days)
1/γh1: Duration of acute phase (days)
1/γh2: Duration of convalescent phase (days)
1/γh: Duration of asymptomatic infection (days)
1/μv: Mosquito lifespan (days)
c: Transmission probability from a symptomatically infected human to a susceptible mosquito per bite
ρ: Relative human-to-mosquito transmission probability of exposed humans to symptomatically infected humans
1/κv: Extrinsic incubation period in mosquitos (days)
Parameter values are consistent with the 2016 Zika outbreak in Brazil, Colombia, and El Salvador [26]
Parameter estimation
To estimate parameter values, we fit the model to each simulated dataset using nonlinear least squares estimation. The lsqcurvefit function in Matlab (Mathworks, Inc.) is used to find the least squares best fit to the data. This process searches for the set of parameters \( \widehat{\varTheta}=\left({\widehat{\theta}}_1,{\widehat{\theta}}_2,\dots, {\widehat{\theta}}_m\right) \) that minimizes the sum of squared differences between the simulated data and the model solution [3]. The model solution \( f\left({t}_i,\widehat{\varTheta}\right) \) represents the best fit to the time series data.
For this method, the initial parameter guesses can affect the solution, as the optimization may converge to local minima. While we know the true parameter values (used to generate the data), this would not be the case in a real-world modeling scenario. We therefore draw the initial guesses of the parameter values from a uniform distribution in the range of +/− 0.1 around the true value. Another approach would be to repeat the least squares fitting procedure several times with different initial parameter guesses and select the best model fit.
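The sketch below shows a broadly analogous workflow in Python with scipy.optimize.least_squares, fitting (β, γ) of Model 1 to a simulated incidence series; it is not the authors' Matlab code, and it assumes the days and noisy_incidence arrays from the simulation sketch above.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Sketch: nonlinear least-squares fit of (beta, gamma) for Model 1,
# analogous to the lsqcurvefit workflow described in the text.
# 'days' and 'noisy_incidence' are assumed from the simulation sketch above.
def model_incidence(params, days, kappa, N, y0):
    beta, gamma = params
    def seir(t, y):
        S, E, I, R, C = y
        new_inf = beta * S * I / N
        return [-new_inf, new_inf - kappa * E, kappa * E - gamma * I,
                gamma * I, kappa * E]
    sol = solve_ivp(seir, (days[0], days[-1]), y0, t_eval=days)
    return np.diff(sol.y[4])                       # daily incidence from cumulative C

def residuals(params, days, data, kappa, N, y0):
    return model_incidence(params, days, kappa, N, y0) - data

true = np.array([0.6, 0.25])                       # placeholder "true" (beta, gamma)
rng = np.random.default_rng(1)
init = true + rng.uniform(-0.1, 0.1, size=2)       # initial guess near the truth, as in the text
fit = least_squares(residuals, init, bounds=(0, np.inf),
                    args=(days, noisy_incidence, 0.5, 5.0e5,
                          [5.0e5 - 1, 0.0, 1.0, 0.0, 1.0]))
beta_hat, gamma_hat = fit.x
```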
For each model, the sets of parameters are denoted by Θi, where i represents the number of parameters being jointly estimated. We begin with estimating one model parameter, while fixing the rest, and then increase the number of parameters jointly estimated by one until all parameters of interest are included. Population size, N, is always fixed to the true value. Also, while R0 is not being directly estimated from the model, it is a composite parameter that can be calculated using individual parameter estimates.
For each model described above, we explore parameter identifiability for the following sets of parameters. Here, the symbol ^ is used to indicate an estimated parameter, while the absence of this symbol indicates that the parameter is set to its true value from the simulated data.
(i) Model 1: Simple SEIR
$$ {\displaystyle \begin{array}{cc}{\Theta}_{\mathrm{i}}:& {\Theta}_1=\left\{\ \widehat{\beta},\kappa, \gamma\ \right\}\\ {}& {\Theta}_2=\left\{\ \widehat{\beta},\kappa, \widehat{\gamma}\ \right\}\\ {}& {\Theta}_3=\left\{\ \widehat{\beta},\widehat{\kappa},\widehat{\gamma}\ \right\}\end{array}} $$
(ii) Model 2: SEIR with asymptomatic and hospitalized/diagnosed and reported
$$ {\displaystyle \begin{array}{cc}{\Theta}_{\mathrm{i}}:& {\Theta}_1=\left\{\ \widehat{\beta},\kappa, {\gamma}_1,{\gamma}_2,\alpha, \rho, q\ \right\}\\ {}& {\Theta}_2=\left\{\ \widehat{\beta},\kappa, \widehat{\gamma_1},{\gamma}_2,\alpha, \rho, q\ \right\}\\ {}& {\Theta}_3=\left\{\ \widehat{\beta},\kappa, \widehat{\gamma_1},{\gamma}_2,\widehat{\alpha},\rho, q\ \right\}\\ {}& {\Theta}_4=\left\{\ \widehat{\beta},\kappa, \widehat{\gamma_1},{\gamma}_2,\widehat{\alpha},\widehat{\rho},q\ \right\}\\ {}& {\Theta}_5=\left\{\ \widehat{\beta},\kappa, \widehat{\gamma_1},{\gamma}_2,\widehat{\alpha},\widehat{\rho},\widehat{q}\ \right\}\end{array}} $$
(iii) Model 3: The Legrand Model (Ebola)
$$ {\displaystyle \begin{array}{cc}{\Theta}_{\mathrm{i}}:& {\Theta}_1=\left\{\ {\widehat{\beta}}_I,{\beta}_H,{\beta}_F,\alpha, \theta, {\gamma}_h,{\gamma}_d,{\gamma}_i,\delta, {\gamma}_{ih},{\gamma}_{dh},{\gamma}_f\ \right\}\\ {}& {\Theta}_2=\left\{\ {\widehat{\beta}}_I,{\widehat{\beta}}_H,{\beta}_F,\alpha, \theta, {\gamma}_h,{\gamma}_d,{\gamma}_i,\delta, {\gamma}_{ih},{\gamma}_{dh},{\gamma}_f\ \right\}\\ {}& {\Theta}_3=\left\{\ {\widehat{\beta}}_I,{\widehat{\beta}}_H,{\widehat{\beta}}_F,\alpha, \theta, {\gamma}_h,{\gamma}_d,{\gamma}_i,\delta, {\gamma}_{ih},{\gamma}_{dh},{\gamma}_f\ \right\}\\ {}& {\Theta}_4=\left\{\ {\widehat{\beta}}_I,{\widehat{\beta}}_H,{\widehat{\beta}}_F,\alpha, \theta, {\widehat{\gamma}}_h,{\gamma}_d,{\gamma}_i,\delta, {\gamma}_{ih},{\gamma}_{dh},{\gamma}_f\ \right\}\\ {}& {\Theta}_5=\left\{\ {\widehat{\beta}}_I,{\widehat{\beta}}_H,{\widehat{\beta}}_F,\alpha, \theta, {\widehat{\gamma}}_h,{\widehat{\gamma}}_d,{\gamma}_i,\delta, {\gamma}_{ih},{\gamma}_{dh},{\gamma}_f\ \right\}\\ {}& {\Theta}_6=\left\{\ {\widehat{\beta}}_I,{\widehat{\beta}}_H,{\widehat{\beta}}_F,\alpha, \theta, {\widehat{\gamma}}_h,{\widehat{\gamma}}_d,{\widehat{\gamma}}_i,\delta, {\gamma}_{ih},{\gamma}_{dh},{\gamma}_f\ \right\}\\ {}& {\Theta}_7=\left\{\ {\widehat{\beta}}_I,{\widehat{\beta}}_H,{\widehat{\beta}}_F,\alpha, \theta, {\widehat{\gamma}}_h,{\widehat{\gamma}}_d,{\widehat{\gamma}}_i,\delta, {\gamma}_{ih},{\gamma}_{dh},{\widehat{\gamma}}_f\ \right\}\end{array}} $$
(iv) Model 4: Zika model with human and mosquito populations
$$ {\displaystyle \begin{array}{cc}{\Theta}_{\mathrm{i}}:& {\Theta}_1=\left\{\ a,b,\widehat{\beta},\alpha, \tau, \theta, {\kappa}_h,{\gamma}_{h1},{\gamma}_{h2},{\gamma}_h,{\mu}_v,c,\rho, {\kappa}_v\ \right\}\\ {}& {\Theta}_2=\left\{\ a,b,\widehat{\beta},\alpha, \tau, \theta, {\kappa}_h,{\widehat{\gamma}}_{h1},{\gamma}_{h2},{\gamma}_h,{\mu}_v,c,\rho, {\kappa}_v\ \right\}\\ {}& {\Theta}_3=\left\{\ a,b,\widehat{\beta},\alpha, \tau, \theta, {\kappa}_h,{\widehat{\gamma}}_{h1},{\widehat{\gamma}}_{h2},{\gamma}_h,{\mu}_v,c,\rho, {\kappa}_v\ \right\}\\ {}& {\Theta}_4=\left\{\ a,b,\widehat{\beta},\alpha, \tau, \theta, {\kappa}_h,{\widehat{\gamma}}_{h1},{\widehat{\gamma}}_{h2},{\widehat{\gamma}}_h,{\mu}_v,c,\rho, {\kappa}_v\ \right\}\\ {}& {\Theta}_5=\left\{\ a,b,\widehat{\beta},\widehat{\alpha},\tau, \theta, {\kappa}_h,{\widehat{\gamma}}_{h1},{\widehat{\gamma}}_{h2},{\widehat{\gamma}}_h,{\mu}_v,c,\rho, {\kappa}_v\ \right\}\\ {}& {\Theta}_6=\left\{\ a,b,\widehat{\beta},\widehat{\alpha},\widehat{\tau},\theta, {\kappa}_h,{\widehat{\gamma}}_{h1},{\widehat{\gamma}}_{h2},{\widehat{\gamma}}_h,{\mu}_v,c,\rho, {\kappa}_v\ \right\}\end{array}} $$
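As a rough illustration of this incremental strategy for Model 1, the sketch below refits progressively larger parameter subsets while holding the remaining parameters at their true values; the helper names and values reuse the earlier sketches and are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch: jointly estimate an increasing number of Model 1 parameters
# (beta; then beta, gamma; then beta, kappa, gamma), fixing the rest at
# their true values. Reuses 'model_incidence', 'days' and 'noisy_incidence'
# from the sketches above; all values are placeholders.
true = {"beta": 0.6, "kappa": 0.5, "gamma": 0.25}
subsets = [["beta"], ["beta", "gamma"], ["beta", "kappa", "gamma"]]

def residuals_subset(free_vals, free_names, data):
    p = dict(true)                                  # fixed parameters keep their true values
    p.update(dict(zip(free_names, free_vals)))
    curve = model_incidence([p["beta"], p["gamma"]], days, p["kappa"], 5.0e5,
                            [5.0e5 - 1, 0.0, 1.0, 0.0, 1.0])
    return curve - data

for names in subsets:
    x0 = [true[n] * 1.05 for n in names]            # perturbed initial guesses
    est = least_squares(residuals_subset, x0, bounds=(0, np.inf),
                        args=(names, noisy_incidence))
    print(names, est.x)
```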
Bootstrapping method
We use the parametric bootstrap approach [3, 27, 28] for simulating the error structure around the deterministic model solution in order to evaluate parameter identifiability. This computational approach involves repeatedly sampling observations from the best-fit model solution. Here we use a Poisson error structure, which is the most popular distribution for modeling count data [3]. The step-by-step approach to quantify parameter uncertainty is as follows:
1. Obtain the deterministic model solution (total daily incidence series) using nonlinear least-squares estimation (Section 2.3).
2. Generate S replicate datasets, assuming Poisson error structure: using the deterministic model solution \( f\left({t}_i,\widehat{\varTheta}\right) \), generate S (for our examples, S = 200) replicate simulated datasets \( {f}_S^{\ast}\left({t}_i,\widehat{\varTheta}\right) \). To incorporate Poisson error structure, we use the incidence curve, \( \dot{C}(t) \), as follows. For each time point ti, we generate a new incidence value using a Poisson random variable with mean \( \dot{C}(t_i) \). This new set of data represents an incidence curve for the system, assuming the time series follows a Poisson distribution centered on the mean at time points ti.
3. Re-estimate model parameters: for each simulated dataset, derive the best-fit estimates for the parameter set using least-squares fitting (Section 2.3). This results in S estimated parameter sets \( {\widehat{\varTheta}}_i \), where i = 1, 2, …, S (see the sketch following this list).
4. Characterize empirical distributions and construct confidence intervals: using the set of S parameter estimates, we can characterize the empirical distribution and construct confidence intervals for each estimated parameter. Also, for each set of estimated parameters, R0 is calculated to obtain a distribution of R0 values as well.
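Putting these steps together, a compact sketch of the parametric bootstrap loop (reusing the fitting helpers from the earlier sketch, with illustrative settings) might look as follows.

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch: parametric bootstrap with Poisson error around the best-fit incidence.
# 'model_incidence', 'residuals', 'fit' and 'days' are assumed from the fitting sketch above.
S = 200
best_curve = model_incidence(fit.x, days, 0.5, 5.0e5,
                             [5.0e5 - 1, 0.0, 1.0, 0.0, 1.0])
rng = np.random.default_rng(2)
boot_estimates = np.empty((S, 2))
for s in range(S):
    replicate = rng.poisson(np.maximum(best_curve, 0))          # step 2: Poisson replicate
    refit = least_squares(residuals, fit.x, bounds=(0, np.inf),
                          args=(days, replicate, 0.5, 5.0e5,
                                [5.0e5 - 1, 0.0, 1.0, 0.0, 1.0]))
    boot_estimates[s] = refit.x                                  # step 3: re-estimated parameters
```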
When a model parameter is identifiable from available data, its confidence interval lies in a finite range of values [29, 30]. Using the bootstrapping method outlined in Section 2.4, we obtain 95% confidence intervals from the distributions of each estimated parameter. A small confidence interval with a finite range of values indicates that the parameter can be precisely identified, while a wider range could be indicative of lack of identifiability. To assess the level of bias of the estimates, we calculate the mean squared error (MSE) for each parameter. MSE is calculated as: \( MSE=\frac{1}{S}\sum \limits_{i=1}^S{\left(\uptheta -\widehat{\uptheta_i}\right)}^2 \) where θ represents the true parameter value (in the simulated data), and \( \widehat{\theta_i} \) represents the estimated value of the parameter for the ith bootstrap realization.
When a parameter can be estimated with low MSE and a narrow confidence interval, this suggests that the parameter is identifiable from the model and data. On the other hand, larger confidence intervals or larger MSE values may be suggestive of non-identifiability.
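From the bootstrap estimates, percentile confidence intervals and MSE can be summarized in a few lines; the sketch below assumes the boot_estimates array from the loop above and the composite R0 = β/γ of the simple SEIR model.

```python
import numpy as np

# Sketch: 95% percentile confidence intervals and MSE from the bootstrap estimates.
# 'boot_estimates' (S x 2 array of beta, gamma) is assumed from the bootstrap sketch above.
true = np.array([0.6, 0.25])                         # true values used to simulate the data (placeholders)
ci_low, ci_high = np.percentile(boot_estimates, [2.5, 97.5], axis=0)
mse = np.mean((boot_estimates - true) ** 2, axis=0)

# R0 = beta / gamma for the simple SEIR model, computed for every bootstrap realization
R0_boot = boot_estimates[:, 0] / boot_estimates[:, 1]
R0_ci = np.percentile(R0_boot, [2.5, 97.5])
```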
Model 1: Simple SEIR
Additional files 1, 2 and 3 illustrate the empirical distributions of the estimated parameters: Additional file 1 presents the results for \( {\widehat{\varTheta}}_1 \) (β only), Additional file 2 for \( {\widehat{\varTheta}}_2 \) (β and γ), and Additional file 3 for \( {\widehat{\varTheta}}_3 \) (β, γ, and κ). The figures also show the original simulated data and the 200 simulated datasets for each estimated parameter set.
Estimating only β (Θ1), results in precise (small confidence interval range) and unbiased (small MSE) estimates of β. Similarly, estimating β and γ (Θ2) provides precise and unbiased estimates for both parameters. The precision of the estimates can be seen in Fig. 5: the confidence intervals for the estimates (represented by red vertical lines) remain close to the true parameter value (blue horizontal dotted line). The MSE plot (Fig. 6) shows an MSE value of < 10− 7 for β in Θ1 and values of < 10− 4 for both β and γ in Θ2.
Model 1–95% confidence intervals (vertical red lines) for the distributions of each estimated parameter obtained from the 200 realizations of the simulated datasets. Mean estimated parameter value is denoted by a red x, and the true parameter value is represented by the blue dashed horizontal line. Θi denotes the estimated parameter set, where i indicates the number of parameters being jointly estimated
Model 1 – Mean squared error (MSE) of the distribution of parameter estimates (200 realizations) for each estimated parameter set Θi, where i indicates the number of parameters being jointly estimated. Note that the y-axis (MSE) is represented with a logarithmic scale
Simultaneously estimating all 3 parameters, β, κ, and γ (Θ3), results in wider confidence intervals and larger MSE than the two previous subsets. The confidence intervals for β (0.516, 0.636) and γ (0.223, 0.277) have a narrow range and enclose the true values of the parameters. The MSE values for these two parameters are larger than in the previous subsets, though all remain < 10− 2. The confidence interval for κ has a slightly larger range (0.440, 0.613), though this corresponds to a difference in the mean latent period of less than a day. Also, the MSE for κ is comparable to that of the other parameters. This indicates that all three parameters can be identified from daily incidence data of the epidemic curve with Poisson error structure.
Moreover, R0 can be estimated precisely with unbiased results. Despite the larger confidence intervals for the other parameters estimated in Θ3 (compared to Θ1, Θ2), the range around R0 is still very precise: (2.286, 2.317). Similarly, the MSE for R0 is < 10− 4 for all runs. This indicates that the estimates of R0 are robust to variation or bias in the other parameter estimates – we continue to explore this theme in the subsequent models.
Model 2: SEIR with asymptomatic and hospitalized/diagnosed and reported
Estimating β only (Θ1) or β and γ1 (Θ2) provides precise estimates with small MSE (Figs. 7 & 8). For each Θi (where i > 2), each additional parameter being estimated corresponds, on average, to a larger confidence interval range and higher MSE for each estimated parameter. Essentially, for each parameter, the uncertainty grows with the number of other parameters being jointly estimated. Θ3, estimating β, γ1, and α, provides estimates of β and γ1 with relatively small confidence ranges (95% CI: (0.717, 0.851), (0.192, 0.286), respectively) and MSE values (MSE = 0.0016, 7.15*10− 4, respectively); however, the estimates for α span a wider range of values (0.386, 0.748), with an MSE over 5 times higher than that of the other parameters (MSE = 0.0089), though still < 10− 2.
Model 2–95% confidence intervals (vertical red lines) for the parameter estimate distributions obtained from the 200 realizations of the simulated datasets. Mean estimated parameter value is denoted by red x, and the true parameter value is represented by the blue dashed horizontal line. Θi denotes the estimated parameter set, where i indicates the number of parameters being jointly estimated
Results for Θ4 and Θ5 indicate that none of the parameters can be well-identified from case incidence data while simultaneously estimating > 3 parameters. For each, multiple parameters have MSE values > 10− 2 (Fig. 8), and the confidence intervals are comparatively wide. Additionally, the confidence intervals for ρ (Θ4: (0.602, 0.858); Θ5: (0.608, 0.763)) do not include the true value of 0.60.
Looking at the confidence intervals and MSE (Figs. 7 & 8) for R0, we find again that R0 is identifiable across each Θi. The confidence intervals for R0 all have a range < 0.2, and the MSE values for each Θi are < 10− 2. These R0 results are consistent with those in Model 1, despite the identifiability issues of other parameters seen here in Model 2. This is an important result, indicating that even when identifiability issues exist in other model parameters, we can still provide reliable estimates of R0 without having to know the true values of the other parameters. It also shows that while noise in the data may affect the estimation of some parameters, composite parameters, like R0, can still be accurately calculated from the same data.
Model 3: The Legrand model (Ebola)
Estimated parameter sets Θ1 and Θ2 (βI only; βI and βH, respectively) result in unbiased (MSE < 10− 3), precise estimates of the parameters (Figs. 9 & 10). However, when jointly estimating all three β values (Θ3), only βI is identifiable – its confidence interval spans a finite range (0.038, 0.102) and the estimates are unbiased (MSE = 2.71*10− 4). Parameters βH (0, 0.614) and βF (0.097, 1.341) both have wide confidence intervals, indicating uncertainty suggestive of non-identifiability. When estimating four parameters (Θ4), only βH is identifiable with a small range and little bias, whereas the remaining three parameter estimates have larger confidence intervals (Fig. 9).
Model 3–95% confidence intervals (vertical red lines) for the parameter estimate distributions obtained from the 200 realizations of the simulated datasets. Mean estimated parameter value is denoted by red x, and the true parameter value is represented by the blue horizontal line. Θi denotes the estimated parameter set, where i indicates the number of parameters being jointly estimated
For Θi where i > 4, none of the parameters can be identified from the model/data. Each parameter (for runs Θ5 – Θ7) has a large confidence range and/or a comparatively large MSE. Some parameters have MSE values < 10− 2 (Fig. 10), but the wide range of uncertainty around these parameters is still indicative of non-identifiability (Fig. 9).
Remarkably, R0 can be precisely estimated with unbiased results for parameter sets Θ1 – Θ4 (Figs. 9 & 10). When simultaneously estimating five or more parameters, however, the associated uncertainty of all the parameters results in non-identifiability of R0. For Θ5, for example, R0 estimates vary widely in the range (0.683, 2.821) with an MSE of 0.467. As previously mentioned, R0 is a threshold parameter (epidemic threshold at R0 = 1), so with a confidence interval that includes the critical value 1, we would not be able to distinguish between the potential for epidemic spread and no outbreak.
Model 4: Zika model with human and mosquito populations
For this complex model, we find again that when estimating only 1 or 2 parameters (Θ1, Θ2), the parameters can be recovered precisely with unbiased results (Figs. 11 & 12). When jointly estimating more than two parameters (Θi: i > 2), non-identifiability issues arise. The confidence intervals and MSE for β and γh1 are very small, and thus these two parameters are identifiable. However, the confidence intervals and MSE values for each of the other parameters (Θi: i > 2) are representative of non-identifiability. These parameter estimates have a large amount of uncertainty, represented by the large confidence intervals, and are also biased estimates of the true values: MSE > 10− 2 for all.
In terms of R0, we can see that this composite parameter of interest is identifiable for all Θi (Figs. 11 & 12). Despite the large confidence intervals associated with some parameters when estimating more than two parameters (e.g., γh2 in Θ6: (0.047, 0.573)), R0 can still be estimated with low uncertainty (Θ6: R0 in (1.480, 1.486)). The R0 estimates have little error, as MSE < 10− 4 for all Θi. This is consistent with the previous models in that R0 estimates are robust to the uncertainty and bias of the other estimated parameters.
In this paper we have introduced a simple computational approach for assessing parameter identifiability in compartmental models comprised of systems of ordinary differential equations. We have demonstrated this approach through various examples of compartmental models of infectious disease transmission and control. Using simulated time series of the number of new infectious individuals, we analyzed the identifiability of model parameters characterizing transmission and the natural history of the disease. This type of analysis based on simulated data provides a crucial step in infectious disease modeling, as inferences based on estimates of non-identifiable parameters can lead to incorrect or ineffective public health decisions. Parameter identifiability and uncertainty analyses are essential for assessing the stability of parameter estimates. Hence, it is important for researchers to be mindful that a good fit to the data does not imply that parameter estimates can be reliably used to evaluate hypotheses regarding transmission mechanisms. Moreover, quantifying the uncertainty surrounding parameter estimates is key when making inferences that guide public health policies or interventions.
Our bootstrap-based approach is sufficiently general to assess identifiability for compartmental modeling applications. We have shown that this method works well for models of varying levels of complexity, ranging from a simple SEIR model with only a few parameters (Model 1) to a complex, dual-population compartmental model with a total of 16 parameters (Model 4). Other methods exist to conduct parameter identifiability analyses. Some methods, such as Taylor series methods [15, 16] and differential algebra-based methods [17, 18], require more mathematical analysis, which becomes increasingly complicated as model complexity increases. Other methods rely on constructing the profile likelihood for each of the estimated parameters to assess local structural identifiability [11, 14, 31, 32]. In this method, one of the parameters (θi) is fixed across a range of realistic values, and the other parameters are refit to the data using the likelihood function of θi. Identifiability of the parameters is then determined by the shape of the resulting likelihood profile. Depending on the assumptions of the error structure in the data, and as models become increasingly more complex, derivation of the likelihood profile and confidence intervals becomes increasingly more difficult.
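As a rough illustration of the profiling idea, the sketch below computes a least-squares profile for β in Model 1 by fixing β on a grid and refitting γ at each grid point; it uses the sum of squared residuals rather than a formal likelihood and reuses names from the earlier sketches, so it is only a schematic, not the cited authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch: a least-squares "profile" for beta in Model 1 (gamma refit at each grid point).
# A flat profile would suggest practical non-identifiability of beta.
# 'residuals', 'days' and 'noisy_incidence' are assumed from the earlier sketches.
beta_grid = np.linspace(0.4, 0.8, 21)
profile = []
for beta_fixed in beta_grid:
    res = least_squares(lambda g: residuals([beta_fixed, g[0]], days, noisy_incidence,
                                            0.5, 5.0e5, [5.0e5 - 1, 0.0, 1.0, 0.0, 1.0]),
                        x0=[0.25], bounds=(0, np.inf))
    profile.append(np.sum(res.fun ** 2))            # profiled sum of squared residuals
```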
Overall, our analyses indicate that parameter identifiability issues are more likely to arise with more complex models (in terms of the number of equations/states and parameters). For example, a set of 3 parameters (Θ3) can be estimated with low uncertainty and bias from a simple model, like Model 1; however, for more complex models (Model 3, Model 4), estimating only 3 parameters from a single curve of case incidence resulted in lack of identifiability for at least one of the parameters in the set (Θ3). Also, for Θi (recall that i represents the number of parameters being jointly estimated), the uncertainty surrounding the estimated parameters tends to increase, on average, as i increases (Fig. 7). One strategy to resolve parameter identifiability issues consists of restricting the number of parameters being jointly estimated while fixing other parameter values and conducting sensitivity analyses.
Importantly, we found that R0 is a robust composite parameter, even in the presence of identifiability issues affecting individual parameters in the model. In Model 4, despite large confidence intervals and larger MSE for the estimated parameters, R0 estimates were contained in a finite confidence interval with little bias (Figs. 11 & 12). For example, for parameter set Θ6, only two of the estimated parameters could be reliably identified from the data, yet R0 could be identified with little uncertainty or bias. These findings are in line with the identifiability results of R0 for a vector-borne disease model (similar to Model 4), even when other model parameters could not be properly estimated [14]. R0 is often a parameter of interest, as R0 values have been related to the size or impact of an epidemic [1]. Moreover, R0 estimates can be used to characterize initial transmission potential, assess the risk of an outbreak, and evaluate the impact of potential interventions, so it is beneficial to know we can reliably obtain R0 estimates, despite lack of identifiability in other parameters.
It is important to emphasize that our methodology is helpful to uncover identifiability issues which could arise from 1) the lack of information in the data or 2) the structure of the model. We also note that our examples assess identifiability of parameters by relying on the entire curve of incidence data of a single epidemic. Future work could include identifiability analyses in the context of limited data using different sections of the trajectory of the outbreak. We also assume that only one model variable (state) is observed, so future analyses could incorporate more than one observed variable to potentially improve the identifiability of parameters without changing the model. For example, for Model 3 (Ebola), the incidence curves of new hospitalized cases and new deaths could provide additional information that better constrain parameter estimates, thereby improving parameter identifiability results.
For modeling studies, we recommend conducting comprehensive parameter identifiability analyses based on simulated data prior to fitting the model to real data. These analyses help determine which sets of parameters can be jointly estimated, since identifiability issues may not arise until a certain number of parameters are estimated simultaneously. If the analysis indicates that certain parameters are non-identifiable, those parameters may need to be fixed and explored in sensitivity analyses (rather than estimated) to address the identifiability issue.
In summary, the ability to make sound public health decisions regarding an infectious disease outbreak is crucial for the general health and safety of a population. Knowledge of whether a parameter is identifiable from a given model and data is invaluable, as estimates of non-identifiable parameters should not be used to inform public health decisions. Further, parameter estimates should be presented with quantified uncertainty. The methodology presented in this paper adds to the essential toolkit for conducting model-based inferences.
We thank Dr. Ping Yan (Public Health Agency of Canada) for interesting discussions relating to parameter identifiability.
GC acknowledges financial support from NSF grant 1414374 as part of the joint NSF-NIH-USDA Ecology and Evolution of Infectious Diseases program, and from UK Biotechnology and Biological Sciences Research Council grant BB/M008894/1.
The datasets generated and/or analyzed in this study can be reproduced using the methods and Tables 1-4 or are available from the corresponding author on reasonable request. Matlab code is also available upon request.
KR and GC designed the study. KR analyzed the data, and KR and GC interpreted the data. GC and KR drafted and edited the manuscript. All authors read and approved the final manuscript.
All of the data employed in this study were generated through simulations. Data are deemed exempt from institutional review board assessment.
Additional file 1: Model 1 – Θ1 (estimating β only): The histograms display the empirical distributions of the parameter estimates using 200 bootstrap realizations, where the solid red horizontal line represents the 95% confidence interval for parameter estimates, and the dashed red vertical line indicates the true parameter value. Note, κ and γ are set to their true values in the data. The bottom left graph shows the data from the model (blue circles), and 200 realizations of the epidemic curve assuming a Poisson error structure (light blue lines). The solid red line corresponds to the best-fit of the model to the data, and the dashed red lines correspond to the 95% confidence bands around the best fit. (TIF 5423 kb)
Additional file 2: Model 1 – Θ2 (estimating β and γ): The histograms display the empirical distributions of the parameter estimates using 200 bootstrap realizations, where the solid red horizontal line represents the 95% confidence interval for parameter estimates, and the dashed red vertical line indicates the true parameter value. Note, κ is set to the true value from the data. The bottom left graph shows the data from the model (blue circles), and 200 realizations of the epidemic curve assuming a Poisson error structure (light blue lines). The solid red line corresponds to the best-fit of the model to the data, and the dashed red lines correspond to the 95% confidence bands around the best fit. (TIF 5423 kb)
Additional file 3: Model 1 – Θ3 (estimating β, κ, and γ): The histograms display the empirical distributions of the parameter estimates using 200 bootstrap realizations, where the solid red horizontal line represents the 95% confidence interval for parameter estimates, and the dashed red vertical line indicates the true parameter value. The bottom left graph shows the data from the model (blue circles), and 200 realizations of the epidemic curve assuming a Poisson error structure (light blue lines). The solid red line corresponds to the best-fit of the model to the data, and the dashed red lines correspond to the 95% confidence bands around the best fit. (TIF 5423 kb)
Department of Population Health Sciences, School of Public Health, Georgia State University, Atlanta, GA, USA
Division of International Epidemiology and Population Studies, Fogarty International Center, National Institute of Health, Bethesda, MD, USA
1. Anderson RM, May RM. Infectious diseases of humans: dynamics and control. Oxford: Oxford University Press; 1991.
2. Diekmann O, Heesterbeek JA, Metz JA. On the definition and the computation of the basic reproduction ratio R0 in models for infectious diseases in heterogeneous populations. J Math Biol. 1990;28(4):365–82.
3. Chowell G. Fitting dynamic models to epidemic outbreaks with quantified uncertainty: a primer for parameter uncertainty, identifiability, and forecasts. Infectious Disease Modelling. 2017;2:379–98.
4. He D, Ionides EL, King AA. Plug-and-play inference for disease dynamics: measles in large and small populations as a case study. J R Soc Interface. 2010;7(43):271–83.
5. Goeyvaerts N, Willem L, Van Kerckhove K, Vandendijck Y, Hanquet G, Beutels P, et al. Estimating dynamic transmission model parameters for seasonal influenza by fitting to age and season-specific influenza-like illness incidence. Epidemics. 2015;13:1–9.
6. Chowell G, Viboud C, Simonsen L, Merler S, Vespignani A. Perspectives on model forecasts of the 2014–2015 Ebola epidemic in West Africa: lessons and the way forward. BMC Med. 2017;15(1):42.
7. Banks HT, Holm K, Robbins D. Standard error computations for uncertainty quantification in inverse problems: asymptotic theory vs. bootstrapping. Math Comput Model. 2010;52:1610–25.
8. Gibson GJ, Streftaris G, Thong D. Comparison and assessment of epidemic models. Stat Sci. 2018;33(1):19–33.
9. Banks H, Davidian M, Samuels J Jr, Sutton K. An inverse problem statistical methodology summary. In: Chowell G, Hyman J, Bettencourt L, Castillo-Chavez C, editors. Mathematical and statistical estimation approaches in epidemiology. Dordrecht, The Netherlands: Springer; 2009. p. 249–302.
10. Wu KM, Riley S. Estimation of the basic reproductive number and mean serial interval of a novel pathogen in a small, well-observed discrete population. PLoS One. 2016;11(2):1–12.
11. Breto C. Modeling and inference for infectious disease dynamics: a likelihood-based approach. Stat Sci. 2018;33(1):57–69.
12. Scranton K, Knape J, de Valpine P. An approximate Bayesian computation approach to parameter estimation in a stochastic stage-structured population model. Ecology. 2014;5:1418.
13. Abdessalem AB, Dervilis N, Wagg D, Worden K. Model selection and parameter estimation in structural dynamics using approximate Bayesian computation. Mech Syst Signal Process. 2018;99:306–25.
14. Kao Y-H, Eisenberg M. Practical unidentifiability of a simple vector-borne model: implications for parameter estimation and intervention assessment. Epidemics. 2018;25:89–100.
15. Miao H, Xia X, Perelson AS, Wu H. On identifiability of nonlinear ODE models and applications in viral dynamics. SIAM Rev. 2011;1:3.
16. Pohjanpalo H. System identifiability based on power-series expansion of solution. Math Biosci. 1978;41:21–33.
17. Eisenberg MC, Robertson SL, Tien JH. Identifiability and estimation of multiple transmission pathways in cholera and waterborne disease. J Theor Biol. 2013;324:84–102.
18. Ljung L, Glad T. Testing global identifiability for arbitrary model parameterizations. IFAC Proceedings Volumes. 1991;24:1085–90.
19. Chis O-T, Banga JR, Balsa-Canto E. Structural identifiability of systems biology models: a critical comparison of methods. PLoS One. 2011;6(11):1–16.
20. Lloyd A. Introduction to epidemiological modeling: basic models and their properties; 2007.
21. Brauer F, van den Driessche P, Wu J, Allen LJS. Mathematical epidemiology. Berlin: Springer; 2008.
22. van den Driessche P, Watmough J. Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission. Math Biosci. 2002;180:29–48.
23. Chowell G, Nishiura H. Comparative estimation of the reproduction number for pandemic influenza from daily case notification data. J R Soc Interface. 2007;4(12):155–66.
24. Chowell G, Ammon CE, Hengartner NW, Hyman JM. Estimation of the reproductive number of the Spanish flu epidemic in Geneva, Switzerland. Vaccine. 2006;24:6747–50.
25. Legrand J, Grais RF, Boelle PY, Valleron AJ, Flahault A. Understanding the dynamics of Ebola epidemics. Epidemiol Infect. 2007;4:610.
26. Gao D, Lou Y, He D, Porco TC, Kuang Y, Chowell G, et al. Prevention and control of Zika as a mosquito-borne and sexually transmitted disease: A mathematical modeling analysis. Scientific Reports. 2016;6:28070.
27. Efron B, Tibshirani R. An introduction to the bootstrap. New York: Chapman & Hall; 1993.
28. Chowell G, Hengartner NW, Castillo-Chavez C, Fenimore PW, Hyman JM. The basic reproductive number of Ebola and the effects of public health measures: the cases of Congo and Uganda; 2005.
29. Cobelli C, Romanin-Jacur G. Controllability, observability and structural identifiability of multi input and multi output biological compartmental systems. IEEE Trans Biomed Eng. 1976;BME-23(2):93.
30. Jacquez JA. Compartmental analysis in biology and medicine. 2nd ed. Ann Arbor: University of Michigan Press; 1985.
31. Raue A, Kreutz C, Maiwald T, Bachmann J, Schilling M, Klingmuller U, et al. Structural and practical identifiability analysis of partially observed dynamical models by exploiting the profile likelihood. Bioinformatics. 2009;25(15):1923–9.
32. Nguyen VK, Binder SC, Boianelli A, Meyer-Hermann M, Hernandez-Vargas EA. Ebola virus infection modeling and identifiability problems. Front Microbiol. 2015;6:257.
Mats Gyllenberg 1, Jifa Jiang 2, Lei Niu 1,* and Ping Yan 1,3
Department of Mathematics and Statistics, University of Helsinki, Helsinki FI-00014, Finland
Mathematics and Science College, Shanghai Normal University, Shanghai 200234, China
School of Sciences, Zhejiang A & F University, Hangzhou 311300, China
* Corresponding author: Lei Niu
Received April 2019 Revised September 2019 Published December 2019
Fund Project: This work is supported by the National Natural Science Foundation of China (NSFC) under Grant No. 11371252 and Grant No. 11771295, Shanghai Gaofeng Project for University Academic Program Development, and the Academy of Finland
We study the permanence and impermanence for discrete-time Kolmogorov systems admitting a carrying simplex. Sufficient conditions to guarantee permanence and impermanence are provided based on the existence of a carrying simplex. Particularly, for low-dimensional systems, permanence and impermanence can be determined by boundary fixed points. For a class of competitive systems whose fixed points are determined by linear equations, there always exists a carrying simplex. We provide a universal classification via the equivalence relation relative to local dynamics of boundary fixed points for the three-dimensional systems by the index formula on the carrying simplex. There are a total of $ 33 $ stable equivalence classes which are described in terms of inequalities on parameters, and we present the phase portraits on their carrying simplices. Moreover, every orbit converges to some fixed point in classes $ 1-25 $ and $ 33 $; there is always a heteroclinic cycle in class $ 27 $; Neimark-Sacker bifurcations may occur in classes $ 26-31 $ but cannot occur in class $ 32 $. Based on our permanence criteria and the equivalence classification, we obtain the specific conditions on parameters for permanence and impermanence. Only systems in classes $ 29, 31, 33 $ and those in class $ 27 $ with a repelling heteroclinic cycle are permanent. Applications to discrete population models including the Leslie-Gower models, Atkinson-Allen models and Ricker models are given.
Keywords: Permanence, carrying simplex, competitive system, classification, fixed point index, phase portrait, heteroclinic cycle, Neimark-Sacker bifurcation, population model.
Mathematics Subject Classification: Primary: 37B25, 37Cxx, 37N25; Secondary: 92D25.
Citation: Mats Gyllenberg, Jifa Jiang, Lei Niu, Ping Yan. Permanence and universal classification of discrete-time competitive systems via the carrying simplex. Discrete & Continuous Dynamical Systems - A, 2020, 40 (3) : 1621-1663. doi: 10.3934/dcds.2020088
Figure 1. A carrying simplex $ \Sigma $ with a repelling heteroclinic cycle $ \partial\Sigma $
Figure 2. The phase portrait on $ \Sigma $ replaced by $ \Delta^1 $. A closed dot $ \bullet $ denotes a fixed point which attracts on $ \Sigma $, and an open dot $ \circ $ denotes the one which repels on $ \Sigma $. Each $ \Sigma $ stands for an equivalence class. Class $ 1 $ corresponds to Proposition 4.7 (a) and (b); class $ 2 $ corresponds to Proposition 4.7 (c); class $ 3 $ corresponds to Proposition 4.7 (d)
Figure 3. The phase portrait on $ \Sigma $ for class $ 33 $. Every orbit in the interior of $ \Sigma $ converges to $ p $. The fixed point notation is as in Table 1
Figure 4. The phase portrait on $ \Sigma $ for class $ 29 $. The fixed point notation is as in Table 1
Figure 5. The orbit emanating from $ x_0 = (1, 0.0667, 0.0667) $ for the map $ T\in\mathrm{CLG}(3) $ with the parameter matrix $ U $ given in Example 5.1 and $ r_1 = 1, r_2 = 0.2, r_3 = 1 $ leads away from $ \partial \Sigma $ and tends to an attracting invariant closed curve, and the orbit emanating from $ x_0 = (0.2151, 0.746, 0.0173) $ also tends to an attracting invariant closed curve
Figure 6. The orbit emanating from $ x_0 = (1, 0.0667, 0.0667) $ for the map $ T\in\mathrm{CGAA}(3) $ with the parameter matrix $ U $ given in Example 5.1 and $ r_1 = r_2 = r_3 = 1 $, $ c_1 = \frac{1}{10}, c_2 = \frac{1}{5}, c_3 = \frac{1}{5} $ leads away from $ \partial \Sigma $ and tends to an attracting invariant closed curve, and the orbit emanating from $ x_0 = (0.7, 0.1642, 0.1685) $ also tends to an attracting invariant closed curve
Figure 7. The orbit emanating from $ x_0 = (0.04, 0.12, 0.36) $ for the map $ T\in\mathrm{CGAA}(3) $ with the parameter matrix $ U $ given in Example 5.3 and $ r_1 = r_2 = r_3 = 1 $, $ c_1 = 0.1, c_2 = 0.79, c_3 = 0.1 $ tends to an attracting invariant closed curve, while the orbit emanating from $ x_0 = (0.0002, 0.023, 0.486) $ approaches the heteroclinic cycle $ \partial \Sigma $
Figure 8. The orbit emanating from $ x_0 = (0.427, 0.8574, 0.014) $ for the map $ T\in\mathrm{MFC}(3) $ with the parameter matrix $ U $ given in Example 5.4, $ c = \frac{4}{5} $ and $ r_1 = r_3 = 1, r_2 = 0.03 $ tends to an attracting invariant closed curve
Figure 9. The orbit emanating from $ x_0 = (0.5962, 0.4857, 0.193) $ for the map $ T\in\mathrm{MFC}(3) $ with the parameter matrix $ U $ given in Example 5.5, $ c = \frac{4}{5} $ and $ r_1 = r_3 = 1, r_2 = 0.02 $ tends to an attracting invariant closed curve
Figure 10. The orbit emanating from $ x_0 = (0.3128, 0.8347, 0.0199) $ for the map $ T\in\mathrm{CRC}(3) $ with the parameter matrix $ U $ given in Example 5.4 and $ r_1 = \frac{1}{11}, r_2 = 0.01, r_3 = \frac{2}{7} $ tends to an attracting invariant closed curve
Table 1. The $33$ equivalence classes in $\mathrm{DCS}(3, f)$, where $\gamma_{ij} = \mu_{ii}-\mu_{ji}$, $\beta_{ij} = \frac{\mu_{jj}-\mu_{ij}}{\mu_{ii}\mu_{jj}-\mu_{ij}\mu_{ji}}$ ($\beta_{ij}$ is well defined; see Remark 4.6), $i, j = 1, 2, 3$ and $i\neq j$, and each $\Sigma$ is given by a representative map of that class. A fixed point is represented by a closed dot $\bullet$ if it attracts on $\Sigma$, by an open dot $\circ$ if it repels on $\Sigma$, and by the intersection of its stable and unstable manifolds if it is a saddle on $\Sigma$. For classes $1-25$ and $33$, every orbit converges to some fixed point; for classes $26-31$, Neimark-Sacker bifurcations might occur; for class $27$, $\partial \Sigma$ is a heteroclinic cycle; for class $32$, the unique positive fixed point is a repeller and Neimark-Sacker bifurcation cannot occur in this class
Mats Gyllenberg Jifa Jiang Lei Niu Ping Yan
Hierarchical combinatorial deep learning architecture for pancreas segmentation of medical computed tomography cancer images
Min Fu1,2,
Wenming Wu3,
Xiafei Hong3,
Qiuhua Liu1,2,
Jialin Jiang3,
Yaobin Ou1,
Yupei Zhao3 &
Xinqi Gong2
BMC Systems Biology volume 12, Article number: 56 (2018) Cite this article
Efficient computational recognition and segmentation of the target organ from medical images are foundational for diagnosis and treatment, especially for pancreatic cancer. In practice, the diverse appearance of the pancreas and the surrounding abdominal organs makes detailed texture information important for a segmentation algorithm. According to our observations, however, the structures of previous networks, such as the Richer Convolutional Features (RCF) network, are too coarse to segment the object (pancreas) accurately, especially its edge.
In this paper, we extend the RCF network, originally proposed for edge detection, to the challenging task of pancreas segmentation and put forward a novel pancreas segmentation network. By employing a multi-layer up-sampling structure that replaces the simple up-sampling operation in all stages, the proposed network fully exploits the multi-scale detailed texture information of the object (pancreas) to perform per-pixel segmentation. Additionally, we train our network on CT scans and thus obtain an effective pipeline.
With our pipeline based on the multi-layer up-sampling model, we achieve better performance than RCF in the task of single-object (pancreas) segmentation. Moreover, when combined with multi-scale input, we achieve a Dice Similarity Coefficient (DSC) of 76.36% on the testing data.
The results of our experiments show that our improved model outperforms previous networks on our dataset. In other words, it captures detailed texture information more effectively. Therefore, our new single-object segmentation model is of practical value for automatic computer-aided diagnosis.
Recently, due to the great development of deep neural networks and increasing medical needs, Computer-Aided Diagnosis (CAD) systems have become a new trend. The high morbidity of pancreatic cancer leads to great interest in developing useful CAD methods for diagnosis and treatment, in which accurate pancreas segmentation is fundamentally important. Therefore, developing an advanced pancreas segmentation method is necessary.
Nowadays, pancreas segmentation from Computed Tomography (CT) images is still an open challenge. The accuracy of pancreas segmentation in CT scans is still limited to about a 73% Dice Similarity Coefficient (DSC) even for patients without pancreatic cancer lesions [1,2,3,4,5,6], and a pancreas with a cancer lesion is more challenging to segment. Previous efforts in pancreas segmentation are mostly based on MALF (Multi-Atlas Registration & Label Fusion), a top-down model-fitting method [1,2,3,4]. To optimize the per-pixel organ labeling process, they all rely on volumetric multi-atlas registration [7,8,9] and robust label fusion approaches [10,11,12].
Recently, a new bottom-up pancreas segmentation method [5] has been reported, based on probability maps that are aggregated to classify image regions, or super-pixels [13,14,15], into pancreas or non-pancreas labels. By leveraging mid-level visual representations of the image, this method aims to enhance segmentation accuracy for highly deformable organs such as the pancreas. This work was further improved [6] by using a set of multi-scale and multi-level deep Convolutional Neural Networks (CNNs) to confront the high complexity of pancreas appearance in CT images.
In the past few years, deep CNNs have become popular in the computer vision community owing to their ability to accomplish various state-of-the-art tasks, such as image classification [16,17,18], semantic segmentation [19, 20] and object detection [21,22,23,24]. There is also a recent trend of applying them to edge detection, object segmentation and object detection [25] in medical imaging, and a series of deep-learning-based approaches have been developed. The Fully Convolutional Network (FCN) [20] adopts a skip architecture that combines information from a deep layer and a shallow layer, which can produce accurate and detailed segmentations; moreover, the network can take input of arbitrary size and produce a correspondingly-sized output. Holistically-nested Edge Detection (HED) [26] was developed to perform image-to-image training and prediction. This deep learning model leverages fully convolutional neural networks and deeply-supervised nets, and accomplishes the task of object boundary detection by automatically learning rich hierarchical representations [17]. Based on the observation that adopting only the features from the last convolutional stage loses some useful hierarchical features when classifying pixels into edge or non-edge classes, the Richer Convolutional Features (RCF) network was developed. By combining the multi-stage outputs, it accomplishes the task of edge detection better.
However, when it comes to single-object segmentation (pancreas segmentation), RCF does not perform as well as it does in edge detection, because the detailed texture information of the object captured by the network is not accurate enough. To overcome this difficulty, we introduce a novel multi-layer up-sampling structure into the network to accomplish the task of single-object segmentation (pancreas segmentation) more accurately. In the following Methods section, we explain our dataset, the details of the multi-layer up-sampling structure, the loss function we use, the whole workflow, and the evaluation criteria. The experimental results are shown in the Results section.
Our dataset consists of real pancreas cancer CT images from the General Surgery Department of Peking Union Medical College Hospital. There are 59 patients in total, including 15 patients with non-pancreas diseases and 44 with pancreas-related diseases, for a total of 236 image slices. With informed consent, patients' information, including name, gender and age, is kept confidential. At the slice level, each patient has 4 abdominal CT images in different phases: non-enhanced, arterial, portal and delayed. Additionally, the five sorts of pancreas-related diseases included in the dataset are: PDAC (Pancreatic Ductal Adenocarcinoma), PNET (Pancreatic Neuroendocrine Tumors), IPMN (Intraductal Papillary Mucinous Neoplasia), SCA (Serous CystAdenoma of the pancreas), and SPT (Solid Pseudopapillary Tumour of the pancreas) (Fig. 1).
Examples of the six types (including non-diseased) of abdominal CT image for (a) Healthy, (b) PNET, (c) PDAC, (d) IPMN, (e) SCA, (f) SPT. Rows 1 to 4 show the non-enhanced, arterial, portal and delayed phases
Multi-layer up-sampling structure
Inspired by previous work on deep convolutional neural networks [17, 26], we design our network by modifying the RCF network [27]. Built upon the Holistically-nested Edge Detection (HED) network, RCF is an edge detection architecture aiming to extract visually salient edges and object boundaries from natural images [27].
The whole network contains a feature extraction network and 5 feature fusing layers with up-sampling layers. The feature extraction network contains 13 conv layers and 4 pooling layers [27], which are divided into 5 stages (shown in Fig. 2). Unlike a traditional classification network, there is no fully connected layer in the network. Besides, to obtain richer interior information and improve the overall performance, the RCF network combines the hierarchical features extracted from the 5 stages of convolutional layers.
Architecture of our network. Part (a) shows the main structure of our network. In the feature extraction network, each colored box stands for a conv layer, and the conv layers are divided into 5 stages in different colors. Each stage is further connected to a feature fusing layer. After that, an up-sampling structure is used to de-convolute the extracted features back to the initial size. Parts (b) and (c) respectively show the up-sampling structure of the RCF network and of ours
Each stage is combined with a feature fusing layer, i.e., each convolutional layer in the stage is connected to a convolutional layer with kernel size 1*1 and channel depth 21, the results are accumulated by an element-wise sum layer to attain hybrid features [26], and a 1*1–1 convolutional layer follows them. After the feature fusing layer, an up-sampling structure (also called de-convolution) is used to up-sample the feature map to the input image size. Benefiting from the absence of fully connected layers and from the up-sampling structures, the network can deal with input images of arbitrary size and output a correspondingly-sized probability map.
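To illustrate this step, the following is a minimal PyTorch-style sketch of one stage's feature fusing layer (the class name, example channel counts and sizes are illustrative assumptions; the 21-channel 1*1 convolutions, element-wise summation and 1*1–1 convolution follow the description above):

import torch
import torch.nn as nn

class StageFusion(nn.Module):
    # Fuse the conv outputs of one stage: 1x1 conv to 21 channels each,
    # element-wise sum, then a 1x1-1 conv that squeezes to a single map.
    def __init__(self, in_channels_list):
        super().__init__()
        self.reduce = nn.ModuleList(
            [nn.Conv2d(c, 21, kernel_size=1) for c in in_channels_list])
        self.squeeze = nn.Conv2d(21, 1, kernel_size=1)

    def forward(self, feats):
        hybrid = sum(r(f) for r, f in zip(self.reduce, feats))
        return self.squeeze(hybrid)

# Example: a stage of a VGG-style backbone with three 256-channel conv layers.
fusion = StageFusion([256, 256, 256])
maps = [torch.randn(1, 256, 64, 64) for _ in range(3)]
print(fusion(maps).shape)  # torch.Size([1, 1, 64, 64])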
In the up-sampling process, the image output by the last layer has to be resized to the size of the input image, and in doing so more detailed texture information is added into the image. The starting point of our network design lies in how this detailed texture information is constructed.
Our proposed network is shown in part (a) of Fig. 2. Compared with RCF, our modifications can be described as follows: we adopt multi-layer up-sampling structures to replace the four de-convolutional layers. In stages 2 to 5, the 1*1–1 conv layer is then followed by the multi-layer up-sampling structure, and their output images are combined in the fusion stage.
Our novel structure consists of several up-sampling layers with diverse convolutional kernels. We initialize them with bilinear interpolation. During training, the convolutional kernels in these layers then continuously learn and adjust their parameters through iterative optimization.
Compared with the task of edge detection, single object segmentation requires the model to contain far more accurate detailed texture information. In the previous RCF network, the de-convolutional layer can recover the missing pixels and resize the images, but, resulting from simple bilinear interpolation, the information added is too coarse to segment the object. As is well known, there are strong relationships between neighboring pixels in an image, and producing a missing pixel from its nearest neighbors is an ideal approach. However, adopting only one step of up-sampling may lead to a pixel being produced from comparably distant ones, since too many pixels are missing from the images. In contrast, a multi-layer up-sampling structure ensures that a missing pixel is produced from its neighbors through multi-step up-sampling, and further guarantees a higher quality output at each stage. Additionally, different from simple bilinear interpolation, the way the convolutional kernels adjust their parameters during training ensures that the up-sampling operation and the whole model fit the local dataset better by producing a set of optimized parameters. The comparison of the up-sampling structure in the RCF network and in ours is shown in parts (b) and (c) of Fig. 2.
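As a concrete illustration, the following is a minimal PyTorch-style sketch (not the authors' Caffe implementation) of such a multi-layer up-sampling branch: a chain of stride-2 transposed convolutions, each initialized with bilinear-interpolation weights and then learned during training, replaces a single large-stride de-convolution. The module name, kernel size and number of steps are illustrative assumptions.

import torch
import torch.nn as nn

def bilinear_kernel(channels, kernel_size):
    # Build a (channels, 1, k, k) bilinear-interpolation kernel for
    # initializing a depth-wise transposed convolution (FCN-style init).
    factor = (kernel_size + 1) // 2
    center = factor - 1 if kernel_size % 2 == 1 else factor - 0.5
    og = torch.arange(kernel_size, dtype=torch.float32)
    filt = 1 - torch.abs(og - center) / factor
    kernel = filt[:, None] * filt[None, :]
    return kernel.expand(channels, 1, kernel_size, kernel_size).clone()

class MultiLayerUpsample(nn.Module):
    # Up-sample a stage output by num_steps learnable x2 steps instead of a
    # single large-stride deconvolution (illustrative sketch).
    def __init__(self, channels=1, num_steps=3):
        super().__init__()
        layers = []
        for _ in range(num_steps):
            deconv = nn.ConvTranspose2d(channels, channels, kernel_size=4,
                                        stride=2, padding=1, groups=channels,
                                        bias=False)
            with torch.no_grad():
                deconv.weight.copy_(bilinear_kernel(channels, 4))
            layers.append(deconv)  # weights are further learned in training
        self.up = nn.Sequential(*layers)

    def forward(self, x):
        return self.up(x)

# Example: a 1x1-1 stage map at 1/8 resolution restored to a 256x256 input size.
stage_map = torch.randn(1, 1, 32, 32)
restored = MultiLayerUpsample(channels=1, num_steps=3)(stage_map)
print(restored.shape)  # torch.Size([1, 1, 256, 256])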
Hence, we acquire multi-stage outputs with more accurate detailed texture information, which is helpful for single object segmentation. We show the intermediate results from each stage in Fig. 3. Compared with the five outputs of RCF, they are clearly of higher quality. The quantitative advantages are shown in the Results section.
Example of multi-stage output. The first column is the original input from our dataset. Rows 1 to 6 are the six classes (healthy, PNET, PDAC, IPMN, SCA, SPT). Columns 2 to 6 are the outputs of stages 1 to 5 of our model
To train and optimize our segmentation model, we adopt a per-pixel loss function [26], which requires ground-truth maps. Each CT scan has been labeled by an annotator with medical knowledge. The ground-truth maps give the label probability of each pixel: 0 means that the annotator did not label this pixel, and 1 means that the annotator did. The negative sample consists of the pixels with value equal to 0, and the positive sample consists of the other pixels.
$$ L(W)=\sum_{i=1}^{|I|}\left(\sum_{k=1}^{K} l\left(X_i^{(k)};W\right)+l\left(X_i^{fuse};W\right)\right), $$
Here \( K \) is the number of stages producing an output. As shown in Equation 1, the loss value of an image is the sum of the loss values of all its pixels, each of which consists of the loss at each stage output and at the fusion stage. \( l\left({X}_i^{(k)};W\right) \) denotes the loss value of a pixel in the k-th stage, and \( l\left({X}_i^{fuse};W\right) \) denotes the loss value of a pixel in the fusion stage. \( X_i \) is the activation value (feature vector) at pixel i, \( W \) denotes all the parameters of our network, and |I| is the number of pixels in an image.
$$ l\left(X_i;W\right)=\begin{cases}\alpha \cdot \log \left(1-P\left(X_i;W\right)\right) & \text{if } y_i=0\\ \beta \cdot \log P\left(X_i;W\right) & \text{otherwise}\end{cases} $$
\( P\left({X}_i;W\right) \) is the edge probability value at pixel i, where P denotes the standard sigmoid function.
$$ \begin{cases}\alpha =\lambda \cdot \dfrac{|Y^{+}|}{|Y^{+}|+|Y^{-}|}\\[6pt] \beta =\dfrac{|Y^{+}|}{|Y^{+}|+|Y^{-}|}\end{cases} $$
To balance the negative and positive samples, we adopt the hyper-parameter λ (set to 1.1 during training). Y+ denotes the positive sample of an image, and Y− denotes the negative sample.
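The following is a minimal PyTorch-style sketch of this class-balanced per-pixel loss; it follows the equations above as written (the α/β weights and λ = 1.1 come from the text), and the leading minus sign is only added so the loss is non-negative, as in standard cross-entropy. Function names are illustrative assumptions.

import torch

def balanced_pixel_loss(prob_map, gt, lam=1.1, eps=1e-6):
    # prob_map: sigmoid outputs P(X_i; W) for one stage, shape (N, 1, H, W)
    # gt: ground-truth map, 1 on labeled (positive) pixels, 0 elsewhere
    pos = (gt > 0.5).float()
    neg = 1.0 - pos
    n_pos, n_neg = pos.sum(), neg.sum()
    alpha = lam * n_pos / (n_pos + n_neg)   # weight of negative pixels
    beta = n_pos / (n_pos + n_neg)          # weight of positive pixels
    loss = -(alpha * neg * torch.log(1.0 - prob_map + eps)
             + beta * pos * torch.log(prob_map + eps))
    return loss.sum()

def total_loss(stage_probs, fuse_prob, gt):
    # Equation 1: sum of the per-stage losses plus the fusion-stage loss.
    return (sum(balanced_pixel_loss(p, gt) for p in stage_probs)
            + balanced_pixel_loss(fuse_prob, gt))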
Workflow of our segmentation
We implement a deep learning framework based on our new multi-layer up-sampling neural network for pancreas segmentation (Fig. 4). The segmentation pipeline consists of two modules, model training and optimization (Fig. 4).
Workflow of the segmentation process. The manually labeled data are used for training and optimization. Once the whole architecture is trained, it receives the input CT images and directly outputs the pancreas segmentation result
In the model training module, we first preprocess both the original CT images and the ground-truth images. The original images are of different sizes, around 400 pixels * 500 pixels. We resize each image's height to 256 pixels and keep the ratio of the image's height and width. Reducing the size of the images not only speeds up model training, but also retains most of the information of the original data.
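A minimal sketch of this preprocessing step, assuming OpenCV is used for the resizing (the paper does not name the library):

import cv2

def resize_keep_ratio(image, target_height=256):
    # Resize so the height becomes target_height while keeping the
    # original height-to-width ratio.
    h, w = image.shape[:2]
    new_w = int(round(w * target_height / h))
    return cv2.resize(image, (new_w, target_height), interpolation=cv2.INTER_LINEAR)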
After resizing, to enlarge the training dataset and prevent the deep learning model from over-fitting, we perform data augmentation following [28], such as translation and scale transforms. We then train our multi-layer up-sampling neural network, which is based on a Convolutional Neural Network (CNN). Since the dataset is still small, we adopt transfer learning, i.e., fine-tuning CNN models pre-trained on the BSDS500 dataset [26] (a natural-image dataset for edge detection) for our medical CT image task; [29] has examined why transfer learning from natural-image pre-training is useful in medical imaging tasks. After pre-training, the model obtains an initial set of parameters and is then fine-tuned on our dataset, so that the network converges on our dataset more easily and quickly.
Our model outputs a probability map for each input image. The probability map has the same size as the input image, and each of its pixels is the probability that the corresponding pixel belongs to the pancreas. Besides, to highlight the pancreas, we rescale the probability map from the gray range [0, 1] to [0, 255] and invert the gray values, so that in the probability map a darker region has a higher probability of being pancreas.
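A one-function sketch of this rescaling and inversion (NumPy assumed; the function name is an illustrative choice):

import numpy as np

def to_display_map(prob):
    # Map [0, 1] probabilities to [0, 255] and invert the gray values,
    # so darker pixels indicate a higher pancreas probability.
    return (255 * (1.0 - prob)).astype(np.uint8)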
The optimization module is divided into 3 steps: fusing, maximum connected area, and threshold filtering. In the fusing step, the set of probability maps belonging to the same input image is fused into a new image: for a specific pixel, we count the probability maps in which its probability is larger than 0, and the corresponding pixel of the fused image is the mean of those positive values. In the maximum connected area step, after transforming the fused image into a binary image, we search the image's pixels for the non-zero neighbors of the current pixel and obtain one or several connected areas; we then select the region with the maximum area. In the filter step, we simply obtain a mask showing the maximum connected area and use it to segment the pancreas from the original input image.
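The following is a minimal NumPy/SciPy sketch of this post-processing module; the binarization threshold and function names are illustrative assumptions, and scipy.ndimage.label is used for the connected-component search.

import numpy as np
from scipy import ndimage

def fuse_probability_maps(prob_maps):
    # Average the positive responses of the probability maps of one image.
    stack = np.stack(prob_maps, axis=0)
    positive = stack > 0
    counts = np.maximum(positive.sum(axis=0), 1)
    return (stack * positive).sum(axis=0) / counts

def largest_connected_component(binary):
    # Keep only the connected region with the maximum area.
    labels, num = ndimage.label(binary)
    if num == 0:
        return np.zeros_like(binary)
    sizes = ndimage.sum(binary, labels, index=range(1, num + 1))
    return labels == (np.argmax(sizes) + 1)

def segment_pancreas(prob_maps, image, threshold=0.5):
    fused = fuse_probability_maps(prob_maps)
    mask = largest_connected_component(fused > threshold)
    return image * mask  # apply the mask to the original CT image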
Here, P is the prediction image, G is the ground-truth image, and S(·) denotes the foreground area of an image. We then have the following criteria:
Precision (also called positive predictive value) is the fraction of the predicted foreground area that is correctly predicted:
$$ Precision=\frac{S\left(P\bigcap G\right)}{S(P)} $$
where S(P ⋂ G) is the intersection of the foreground areas of P and G.
Recall (also known as sensitivity) is the fraction of the ground-truth foreground area that is correctly predicted:
$$ Recall=\frac{S\left(P\bigcap G\right)}{S(G)} $$
Dice Similarity Coefficient (DSC) measures the similarity between the prediction image and the ground-truth image. The definition of DSC is the same as that of the F1 score. Here we also give its relationship with precision and recall:
$$ DSC\left(P,G\right)=\frac{2\,S\left(P\cap G\right)}{S(P)+S(G)}=\frac{2}{\frac{S(P)}{S\left(P\cap G\right)}+\frac{S(G)}{S\left(P\cap G\right)}}=\frac{2}{\frac{1}{precision}+\frac{1}{recall}}=\frac{2\cdot precision\cdot recall}{precision+ recall} $$
Jaccard similarity coefficient, also known as Intersection over Union (originally coined coefficient de communauté by Paul Jaccard), is a statistic used for comparing the similarity and diversity of prediction image and ground-truth image. It is defined as the size of the intersection area divided by the size of the union area:
$$ \mathrm{Jaccard}\left(\mathrm{P},\mathrm{G}\right)=\frac{S\left(P\bigcap G\right)}{S\left(P\bigcup G\right)}=\frac{S\left(P\bigcap G\right)}{S(P)+S(G)-S\left(P\bigcap G\right)} $$
All of these criteria range from 0 to 1, with the best value at 1 and the worst at 0.
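As a compact reference, here is a minimal NumPy sketch that computes the four criteria from binary masks (the function name is an illustrative assumption):

import numpy as np

def segmentation_metrics(pred, gt):
    # pred, gt: binary foreground masks of the prediction and the ground truth.
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    precision = inter / pred.sum() if pred.sum() else 0.0
    recall = inter / gt.sum() if gt.sum() else 0.0
    dsc = 2 * inter / (pred.sum() + gt.sum()) if (pred.sum() + gt.sum()) else 0.0
    jaccard = inter / union if union else 0.0
    return precision, recall, dsc, jaccard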
In our experiments, we randomly split the dataset of 59 patients into 5 folds for training and testing, with 10, 10, 10, 10 and 9 patients respectively. We then apply data augmentation, such as zooming, flipping and rotation, to each training image, enlarging the data by a factor of 128 to a total of up to 30,208 images.
Our CNN model is pre-trained on the BSDS500 dataset and fine-tuned on our dataset with the stochastic gradient descent (SGD) algorithm and a step-wise learning-rate schedule. The model is implemented in the deep learning framework CAFFE [30] and runs on one NVIDIA QUADRO M4000 GPU.
Using 5-fold cross-validation, we achieve a mean precision of 76.83%, a mean recall of 78.74%, a mean DSC of 75.92%, and a mean Jaccard index of 63.29%. Apart from recall, all of these are higher than those of the RCF network. Meanwhile, our method with multi-scale input (OURS-MS) reaches 77.36%, 79.12%, 76.36% and 63.72% in mean precision, recall, DSC and Jaccard index, respectively. Table 1 shows the detailed performance of the three models.
Table 1 Comparison of the three segmentation models' performance on four measurements: precision, recall, DSC and Jaccard index
In the pancreas segmentation task, the number of positive samples is much smaller than that of negative samples, which means that the Precision-Recall (PR) curve can better reflect the performance of the prediction [31]. Figure 5 shows that the recall value can reach more than 90% while the precision value is still more than 60%, which means that we can preserve the pancreas organ area almost completely at a decent precision.
The Precision-Recall curve. The blue, orange and green curves stand for the performance of RCF, our model and OURS-MS.
Our model's performance on different types of pancreas cancer is shown in Table 2. We can see that the values of the four measurements are consistently high and the standard deviations are not large, which means that our model is robust across different types of pancreas cancer.
Table 2 Model's performance on different types of pancreas cancer (including the healthy type)
Our model's performance on the different phases is shown in Table 3. We can see that the values of the four measurements are consistently high and the standard deviations are not large, which means that our model is robust across the different phases.
Table 3 Model's performance on the different phases. Phases 1 to 4 are the non-enhanced, arterial, portal and delayed phases
Figure 6 shows some examples of the pancreas segmentation results, comparing the ground truth with the output of our model. The red curve is the ground-truth annotation, and the green curve highlights the output. We can easily see that the two curves share high similarity in the shown images and that high accuracy has been attained by our model. The images in row 1 obtain the best performance, with DSC values around 94%; the images in row 2 obtain DSC values at the second quartile, around 79%; and those in row 3 reach DSC values around 70%, which is at the first quartile.
Some examples of the pancreas segmentation results. The red curve shows the ground truth and the green curve the prediction. Row 1 shows the best performance, row 2 is at the second quartile and row 3 at the first quartile
We summarize our contributions as follows. In this paper, we design an automatic pancreas segmentation architecture based on a deep learning model and achieve a 76.36% DSC value.
We extend the Richer Convolutional Features network to pancreas segmentation, improve the RCF network with a multi-layer up-sampling structure, and obtain over 1% better performance in pancreas segmentation. Besides, we find experimentally that testing with multi-scale input and training with data augmentation, especially rotation, can improve the performance of the network.
Significantly, our model is robust across different types of pancreas cancer and different phases of CT images.
CAD:
Computer-Aided Diagnosis
DSC:
Dice Similarity Coefficient
FCN:
Fully Convolution Network
HED:
Holistically-nested edge detection
IPMN:
Intraductal Papillary Mucinous Neoplasia
MALF:
Multi-Atlas Registration & Label Fusion
PNET:
Pancreatic Neuroendocrine Tumors
PR:
Precision-Recall
RCF:
Richer Convolutional Feature network
SCA:
Serous CystAdenoma of the pancreas
SGD:
stochastic gradient descent
SPT:
Solid Pseudopapillary Tumour of the pancreas
Std:
Standard deviation
Chu C, Oda M, Kitasaka T, et al. Multi-organ segmentation based on spatially-divided probabilistic atlas from 3D abdominal CT images[J]. Med Image Comput Comput Assist Interv. 2013;16(2):165–72.
Wolz R, Chu C, Misawa K, et al. Automated abdominal multi-organ segmentation with subject-specific atlas generation.[J]. IEEE Trans Med Imaging. 2013;32(9):1723.
Tong T, Wolz R, Wang Z, et al. Discriminative dictionary learning for abdominal multi-organ segmentation[J]. Med Image Anal. 2015;23(1):92–104.
Okada T, Linguraru MG, Hori M, et al. Abdominal multi-organ segmentation from CT images using conditional shape–location and unsupervised intensity priors[J]. Med Image Anal. 2015;26(1):1.
Farag A, Lu L, Turkbey E, et al. A bottom-up approach for automatic pancreas segmentation in abdominal CT scans[J]. Lect Notes Comput Sci. 2014;8676:103–13.
Roth H R, Lu L, Farag A, et al. DeepOrgan: Multi-level Deep Convolutional Networks for Automated Pancreas Segmentation[J] 2015, 9349:556–564.
Modat M, Mcclelland J, Ourselin S. Lung registration using the NiftyReg package[J]. Medical image analysis for the clinic-a grand Challenge. 2010;
Avants BB, Tustison N, Song G. Advanced normalization tools (ANTS)[J]. Or Insight. 2009:1–35.
Avants BB, Tustison NJ, Song G, et al. A reproducible evaluation of ANTs similarity metric performance in brain image registration.[J]. NeuroImage. 2011;54(3):2033–44.
Wang H, Suh JW, Das SR, et al. Multi-atlas segmentation with joint label fusion.[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence. 2013;35(3):611–23.
Bai W, Shi W, O'Regan DP, et al. A probabilistic patch-based label fusion model for multi-atlas segmentation with registration refinement: application to cardiac MR images[J]. IEEE Trans Med Imaging. 2013;32(7):1302–15.
Wang L, Shi F, Li G, et al. Segmentation of neonatal brain MR images using patch-driven level sets[J]. NeuroImage. 2014;84(1):141–58.
Felzenszwalb PF, Huttenlocher DP. Efficient graph-based image segmentation[J]. Int J Comput Vis. 2004;59(2):167–81.
Pont-Tuset J, Arbeláez P, Barron JT, et al. Multiscale combinatorial grouping for image segmentation and object proposal generation[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence. 2016;39(1):128–40.
Girshick R, Donahue J, Darrell T, et al. Region-based convolutional networks for accurate object detection and segmentation[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence. 2015;38(1):142–58.
Konishi S, Yuille AL, Coughlan JM, et al. Statistical edge detection: learning and evaluating edge cues[J]. Pattern Analysis & Machine Intelligence IEEE Transactions on. 2003;25(1):57–74.
Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014.
Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions[C]// computer vision and pattern recognition. IEEE. 2015:1–9.
Chen LC, Papandreou G, Kokkinos I, et al. Semantic image segmentation with deep convolutional nets and fully connected CRFs[J]. Computer Science. 2014;4:357–61.
Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation[C]// computer vision and pattern recognition. IEEE. 2015:3431–40.
Girshick R. Fast R-CNN[C]// IEEE international conference on computer vision. IEEE. 2015:1440–8.
Girshick R, Donahue J, Darrell T, et al. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation[J] 2013:580–587.
Dai J, Li Y, He K, et al. R-FCN: Object Detection via Region-based Fully Convolutional Networks[C]. NIPS, 2016: 379–387.
Ren S, He K, Girshick R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence. 2015;39(6):1137.
Yan Z, Zhan Y, Peng Z, et al. Bodypart recognition using multi-stage deep learning[C]// information processing in medical imaging: conference. Inf Process Med Imaging. 2015;449
Xie S, Tu Z. Holistically-Nested Edge Detection[J]. Int J Comput Vis. 2015:1–16.
Liu Y, Cheng M M, Hu X, et al. Richer Convolutional Features for Edge Detection[J]. 2016. arXiv:1612.02103v2 [cs.CV].
Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks[C]// international conference on neural information processing systems. Curran Associates Inc. 2012:1097–105.
Shin HC, Roth HR, Gao M, et al. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning[J]. IEEE Trans Med Imaging. 2016;35(5):1285.
Jia Y, Shelhamer E, Donahue J, et al. Caffe: Convolutional architecture for fast feature embedding[C]// Proceedings of the 22nd ACM International Conference on Multimedia. ACM. 2014:675–8.
Davis J, Goadrich M. The relationship between precision-recall and ROC curves[C]// international conference on machine learning. ACM. 2006:233–40.
This research was supported by the National Natural Science Foundation of China (31670725, 91730301) to Xinqi Gong.
The publication cost of this article was funded by the National Natural Science Foundation of China (91730301).
All the data were provided by the General Surgery Department of Peking Union Medical College Hospital. All the patients have signed informed consent.
About this supplement
This article has been published as part of BMC Systems Biology Volume 12 Supplement 4, 2018: Selected papers from the 11th International Conference on Systems Biology (ISB 2017). The full contents of the supplement are available online at https://bmcsystbiol.biomedcentral.com/articles/supplements/volume-12-supplement-4.
Mathematics Department, School of Information, Renmin University of China, Beijing, China
Min Fu, Qiuhua Liu & Yaobin Ou
Mathematical Intelligence Application Lab, Institute for Mathematical Sciences, Renmin University of China, Beijing, China
Min Fu, Qiuhua Liu & Xinqi Gong
Department of General Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
Wenming Wu, Xiafei Hong, Jialin Jiang & Yupei Zhao
Min Fu
Wenming Wu
Xiafei Hong
Qiuhua Liu
Jialin Jiang
Yaobin Ou
Yupei Zhao
Xinqi Gong
X.Q.G supervised the project and designed the ideas. M. F, W.M.W, X.F.H, Q.H. L and J.L.J. did the experiments and drafted the initial manuscript. Y.P.Z and Y.B.O participated in supervision. All authors discussed the results and commented on the manuscript. All authors read and approved the final manuscript.
Correspondence to Yaobin Ou, Yupei Zhao or Xinqi Gong.
Fu, M., Wu, W., Hong, X. et al. Hierarchical combinatorial deep learning architecture for pancreas segmentation of medical computed tomography cancer images. BMC Syst Biol 12, 56 (2018). https://doi.org/10.1186/s12918-018-0572-z
Pancreas segmentation
Single object segmentation
Interest Coverage Ratio- Meaning, Uses, Formula & Limitation
Interest Coverage Ratio: Hi, friends. Today we are going to share more useful information on the topic of the interest coverage ratio.
Please move on to the article, and enjoy reading it.
What is the Interest Coverage Ratio?
The Formula for the Interest Coverage Ratio
Understanding the Interest Coverage Ratio
How to Use the Interest Coverage Ratio
Example of the Interest Coverage Ratio
Special Considerations
Limitations of the Interest Coverage Ratio
Variations on the Interest Coverage Ratio
The Primary Uses of Interest Coverage Ratio
Additional Resources on the Interest Coverage Ratio
What does the Interest Coverage Ratio tell you?
How is the Interest Coverage Ratio calculated?
What is a good Interest Coverage Ratio?
What does a lousy interest coverage ratio indicate?
The interest coverage ratio is a debt and profitability ratio used to establish how easily a company can pay interest on its outstanding debt. The interest coverage ratio is calculated by dividing a company's earnings before interest and taxes (EBIT) by its interest expense during a given period.
The interest coverage ratio is sometimes called the times interest earned ratio. Lenders, investors, and creditors frequently use this formula to establish a company's riskiness relative to its current debt or future borrowing.
The interest coverage ratio measures how well a company can pay the interest due on its outstanding debt.
It is also called the times interest earned ratio. Creditors and prospective lenders use this ratio to evaluate the risk of lending capital to a company. A higher coverage ratio is better, though the appropriate level may vary by industry.
$$ \text{Interest Coverage Ratio}=\frac{\text{EBIT}}{\text{Interest Expense}} $$
where EBIT = earnings before interest and taxes.
The ratio measures how many times an organization can cover its current interest payments with its available earnings. In other words, it measures the margin of safety a company has for paying interest on its debt during a given period.
The ratio is calculated by dividing a company's EBIT by the company's interest expense for the same period. The lower the ratio, the more the company is burdened by debt expense; when a company's interest coverage ratio is only 1.5 or lower, its ability to meet its interest expenses may be questionable.
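A minimal Python sketch of this calculation (the figures in the usage line are purely illustrative):

def interest_coverage_ratio(ebit, interest_expense):
    # Interest coverage ratio = EBIT / interest expense for the same period.
    if interest_expense == 0:
        raise ValueError("interest expense must be non-zero")
    return ebit / interest_expense

# A company with EBIT of $500,000 and interest expense of $200,000 in a period:
print(interest_coverage_ratio(500_000, 200_000))  # 2.5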
An organization needs to have more than enough earnings to cover its interest payments in order to survive future, perhaps unforeseeable, financial hardships. A company's ability to meet its interest obligations is an aspect of its solvency and is thus an essential factor in the return for shareholders.
Interpretation is key when it comes to using ratios in company analysis. While looking at a single interest coverage ratio may reveal a good deal about a company's current financial position, analyzing interest coverage ratios over time will often give a much clearer picture of a company's position and trajectory.
By examining interest coverage ratios quarterly for the past five years, for example, trends may emerge, giving investors a much better idea of whether a currently low interest coverage ratio is improving or worsening, or whether a currently high ratio is stable. The ratio may also be used to compare the ability of different companies to pay their interest, which can help when making an investment decision.
Usually, stability in interest coverage ratios is one of the most important things to look for when examining the interest coverage ratio in this way. A decreasing interest coverage ratio is frequently something for investors to be aware of as it shows that a company may be unable to pay its debts in the future.
Important: Looking at the interest coverage ratio at one point in time tells analysts only a little about a company's ability to service its debt, but examining the interest coverage ratio over time provides a clearer picture of whether or not the debt is becoming a burden on the company's financial position.
On the whole, the interest coverage ratio is a good assessment of a company's short-term financial health, and making future estimates from a company's interest coverage ratio history can be a good way of assessing an investment opportunity. However, it is difficult to accurately predict a company's long-term financial health with any single ratio or metric.
The desirability of any particular level of this ratio is, to an extent, in the eye of the beholder. Some banks or potential bond buyers may be comfortable with a less desirable ratio in exchange for charging the company a higher interest rate on its debt.
Suppose a company's earnings during a given quarter are $625,000 and it has debts on which it is liable for interest payments of $30,000 every month. To calculate the interest coverage ratio here, one would need to convert the monthly interest payments into quarterly payments by multiplying them by three. The interest coverage ratio for the company is then $625,000 / $90,000 ($30,000 x 3) = 6.94.
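The arithmetic of this example can be checked with a few lines of Python:

quarterly_earnings = 625_000                 # EBIT for the quarter
monthly_interest = 30_000
quarterly_interest = monthly_interest * 3    # convert to the same period

ratio = quarterly_earnings / quarterly_interest
print(round(ratio, 2))  # 6.94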
Staying above water with interest payments is a vital and ongoing concern for any company. As soon as a firm struggles with this, it may have to borrow more or dip into its cash reserve, which is much better used to invest in capital assets or to keep for emergencies.
An interest coverage ratio of 1.5 is usually considered the minimum acceptable ratio for a company, and the tipping point below which lenders will likely refuse to lend the company more money, as they may judge the company's risk of default to be too high.
Moreover, an interest coverage ratio below one shows that the company is not generating sufficient revenue to satisfy its interest expenses. If a company's ratio is below one, it will likely need to spend some of its cash reserves to make up the difference or borrow more.
That will be difficult for the reasons stated above. Even if earnings are low for just a single month, the company risks falling into bankruptcy.
Although borrowing creates debt and interest, it can positively affect a company's profitability by funding the development of capital assets, according to a cost-benefit analysis. However, a company must also be judicious in its borrowing.
In addition, because interest affects a company's profitability, a company should only take out a loan if it knows it will have a good handle on its interest payments for years to come.
A good interest coverage ratio would serve as a good indicator of this circumstance, and potentially as an indicator of the company's ability to pay off the debt itself. Large corporations, moreover, may frequently have both high interest coverage ratios and large borrowings; with the ability to regularly pay off large interest payments, large companies may continue to borrow without much worry.
Businesses may frequently survive for a long time while paying only their interest payments and not the debt itself, yet this is commonly considered a dangerous practice.
If the company is relatively small and therefore has lower revenue than larger companies, paying down the debt will help reduce the interest owed down the road. By reducing its debt, the company frees up cash flow, and the debt's interest rate may also be adjusted.
Like any metric attempting to gauge the efficiency of a business, the interest coverage ratio comes with a set of limitations that are important for any investor to consider before using it.
It is important to note that interest coverage varies widely when measuring companies in different industries, and even when measuring companies within the same industry. For established companies in certain sectors, such as a utility company, an interest coverage ratio of two is frequently an acceptable standard.
A well-established utility will likely have consistent production and revenue, mainly due to government regulations, so even with a relatively low interest coverage ratio, it may be able to cover its interest payments reliably.
However, other industries, such as manufacturing, are much more volatile and may often have a higher minimum acceptable interest coverage ratio of three, for example.
These kinds of companies usually see greater fluctuation in business. For example, during the recession of 2008, car sales dropped substantially, hurting the auto manufacturing industry. A workers' strike is another example of an unexpected event that may hurt interest coverage ratios.
Because these industries are more prone to such fluctuations, companies in them must rely on a greater ability to cover their interest in order to account for periods of low earnings.
Because of wide variations like these, when comparing companies' interest coverage ratios, one should compare a company to others in the same industry; ideally, to those with similar business models and revenue numbers as well.
In addition, it is essential to consider all debt when calculating the interest coverage ratio. Companies may choose to isolate or exclude certain types of debt in their interest coverage ratio calculations.
As such, when considering a company's self-published interest coverage ratio, it is essential to determine whether all debts were included, or otherwise to calculate the interest coverage ratio independently.
A couple of relatively common variations of the interest coverage ratio are important to consider before studying the ratios of companies. These variations come from alterations to EBIT in the numerator of the interest coverage ratio calculation.
One such variation uses earnings before interest, taxes, depreciation, and amortization (EBITDA) instead of EBIT in calculating the interest coverage ratio. Because this variation excludes depreciation and amortization, the numerator in calculations using EBITDA will frequently be higher than in those using EBIT.
However, the interest expense will be the same in both cases. Therefore, the EBITDA calculations will produce a higher interest coverage ratio than the EBIT calculations.
Another variation uses earnings before interest after taxes (EBIAT) instead of EBIT in interest coverage ratio calculations. This deducts tax expenses from the numerator to provide a more accurate picture of a company's ability to pay its interest expenses, because taxes are an essential financial component to consider. For a clearer picture of a company's ability to cover its interest expenses, EBIAT can be used instead of EBIT to calculate the interest coverage ratio.
All of these variations use interest expense in the denominator. Generally speaking, the three variations increase in conservatism: those using EBITDA are the most liberal, those using EBIT are more conservative, and those using EBIAT are the most stringent.
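A small Python sketch comparing the three variants on the same (purely illustrative) figures; the simple EBIAT = EBIT minus taxes line is a rough assumption for illustration only:

def coverage_ratios(ebit, depreciation_amortization, taxes, interest_expense):
    # Compute the EBIT-, EBITDA- and EBIAT-based interest coverage ratios.
    ebitda = ebit + depreciation_amortization
    ebiat = ebit - taxes   # rough: earnings before interest, after taxes
    return {
        "EBIT/interest": ebit / interest_expense,
        "EBITDA/interest": ebitda / interest_expense,   # most liberal
        "EBIAT/interest": ebiat / interest_expense,     # most stringent
    }

print(coverage_ratios(ebit=400_000, depreciation_amortization=100_000,
                      taxes=80_000, interest_expense=100_000))
# {'EBIT/interest': 4.0, 'EBITDA/interest': 5.0, 'EBIAT/interest': 3.2}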
1. The interest coverage ratio is used to determine a company's ability to pay the interest expense on its outstanding debt.
2. The interest coverage ratio is used by lenders, creditors, and investors to determine the riskiness of lending money to the company. It is also used to assess the company's stability: a declining interest coverage ratio shows that a company may be unable to meet its debt obligations in the future.
3. The interest coverage ratio is used to determine the short-term financial health of a company.
4. Trend analysis of the interest coverage ratio gives a clear picture of the stability of a company with respect to interest payments.
CFI is a global provider of financial analyst training and career advancement for finance professionals, including the Financial Modeling and Valuation Analyst certification program. To learn more and expand your career, check out the additional relevant CFI resources below.
Effective Annual Interest Expense
Cost of Debt
What does the Interest Coverage Ratio tell you?
It measures a company's ability to handle its outstanding debt and is one of several debt ratios used to assess a company's financial condition. The term "coverage" refers to the length of time, ordinarily the number of fiscal years, for which interest payments can be made with the company's currently available earnings. In simple terms, it represents how many times the company can pay its obligations using its earnings.
The ratio is calculated by dividing EBIT (or some variation thereof) by the interest on debt expenses, i.e., the cost of borrowed funding, during a given period, usually annually.
A ratio above one shows that a company can service the interest on its debts using its earnings, and that it has demonstrated the ability to maintain revenues at a reasonably consistent level. An interest coverage ratio of two or better may be minimally acceptable to analysts or investors; for companies with historically more volatile revenues, the interest coverage ratio may not be considered adequate unless it is well above three.
A bad interest coverage ratio is any number below one, as this means that the company's current earnings are insufficient to service its outstanding debt. The chances of a company being able to continue meeting its interest expenses on an ongoing basis are doubtful even with an interest coverage ratio below 1.5, especially if the company is vulnerable to seasonal or cyclical dips in revenue.
So, this is the essential information on the topic of the interest coverage ratio.
A. C. Nascimento
Universidade Federal do Piauí, Campus Universitário Ministro Petrônio Portella, Ininga, 64049-550, Teresina-PI, Brazil
Received September 2019 Revised March 2020 Published June 2020
Fund Project: The author was supported by CNPq 140383/2013-1
In this paper we study special properties of solutions of the initial value problem (IVP) associated to the Benjamin-Ono-Zakharov-Kuznetsov (BO-ZK) equation. We prove that if the initial data has some prescribed regularity on the right-hand side of the real line, then this regularity is propagated with infinite speed by the solution flow. In other words, the extra regularity of the data propagates in the solutions in the direction of the dispersion. The method of proof uses weighted energy estimates combined with the smoothing properties of the solutions. Hence we need local well-posedness for the associated IVP via a compactness method. In particular, we establish local well-posedness in the usual $ L^{2}( \mathbb R^2) $-based Sobolev spaces $ H^s( \mathbb R^2) $ for $ s>\frac{5}{4} $, which coincides with the best available result in the literature, proved employing more complicated tools.
Keywords: Nonlinear dispersive equation, propagation of regularity, local well-posedness, BO-ZK equation, weighted energy estimates.
Mathematics Subject Classification: Primary: 35Q53, 35G31.
Citation: A. C. Nascimento. On special regularity properties of solutions of the Benjamin-Ono-Zakharov-Kuznetsov (BO-ZK) equation. Communications on Pure & Applied Analysis, 2020, 19 (9) : 4285-4325. doi: 10.3934/cpaa.2020194
|
CommonCrawl
|
PIÑAS: Supporting a Community of Co-authors on the Web
Distributed Communities on the Web (2002-01-01) 2468: 113-124 , January 01, 2002
By Morán, Alberto L.; Decouchant, Dominique; Favela, Jesus; Martínez-Enríquez, Ana María; González Beltrán, Beatriz; Mendoza, Sonia
To provide efficient support for collaborative writing to a community of authors is a complex and demanding task: members need to communicate, coordinate, and produce in a concerted fashion in order to obtain a final version of the documents that meets overall expectations. In this paper, we present the PIÑAS middleware, a platform that provides potential and actual collaboration spaces, as well as specific services customized to support collaborative writing on the Web. We start by introducing PIÑAS Collaborative Spaces and an extended version of Doc2U, the current tool that implements them, which integrate and structure a suite of specialized project and session services. Later, a set of services for the naming, identification, and shared management of authors, documents and resources in a replicated Web architecture is presented. Finally, a three-tier distributed architecture that organizes these services and a final discussion on how they support a community of authors on the Web are presented.
Ontology-Based Resource Discovery in Pervasive Collaborative Environments
Collaboration and Technology (2013-01-01) 8224: 233-240 , January 01, 2013
By García, Kimberly; Kirsch-Pinheiro, Manuele; Mendoza, Sonia; Decouchant, Dominique
Most working environments offer multiple hardware and software resources that could be shared among the members of staff. However, it can be particularly difficult to take advantage of all these resources without proper software support capable of discovering the ones that fulfill both a user's requirements and each resource owner's sharing preferences. To try to overcome this problem, several service discovery protocols have been developed, aiming to promote the use of network resources and to reduce configuration tasks. Unfortunately, these protocols are mainly focused on finding resources based just on their type or some minimal features, lacking information about user preferences, restrictions and contextual variables. To address this deficiency, we propose to exploit the power of semantic description by creating a knowledge base consisting of a set of ontologies generically designed to be adopted by any type of organization. To validate this proposal, we have customized the ontologies for our case study, which is a research center.
Access Control-Based Distribution of Shared Documents
On the Move to Meaningful Internet Systems 2004: OTM 2004 Workshops (2004-01-01) 3292: 12-13 , January 01, 2004
By Mendoza, Sonia; Morán, Alberto L.; Decouchant, Dominique; Enríquez, Ana María Martínez; Favela, Jesus
The PIÑAS platform provides an authoring group with support to collaboratively and consistently produce shared Web documents. Such documents may include costly multimedia resources, whose management raises important issues due to the constraints imposed by Web technology. This poster presents an approach for distributing shared Web documents to the authoring group's sites, taking into consideration the current organization of the sites concerned, the access rights granted to the co-authors, and the storage device capabilities.
Adaptive Distribution Support for Co-authored Documents on the Web
Groupware: Design, Implementation, and Use (2005-01-01) 3706: 33-48 , January 01, 2005
By Mendoza, Sonia; Decouchant, Dominique; Morán, Alberto L.; Enríquez, Ana María Martínez; Favela, Jesus
In order to facilitate and improve collaboration among co-authors working in the Web environment, documents must be made seamlessly available to them. Web documents may contain multimedia resources, whose management raises important issues due to the constraints and limits imposed by Web technology. This paper proposes adaptive support for distributing shared Web documents and multimedia resources across authoring group sites. Our goal is to provide an efficient use of costly Web resources. Distribution is based on the current arrangement of the participating sites, the roles granted to the co-authors and the site capabilities. We formalize key concepts to ensure that the system's properties are fulfilled under the specified conditions and to characterize distribution at a given moment. The proposed support has been integrated into the PIÑAS platform, which allows an authoring group to collaboratively and consistently produce shared Web documents.
A Distributed Event Service for Adaptive Group Awareness
MICAI 2002: Advances in Artificial Intelligence (2002-01-01) 2313: 506-515 , January 01, 2002
By Decouchant, Dominique; Martínez-Enríquez, Ana María; Favela, Jesús; Morán, Alberto L.; Mendoza, Sonia; Jafar, Samir
This paper is focused on the design of middleware functions to support a distributed cooperative authoring environment on the World Wide Web. Using the advanced storage and access functions of the PIÑAS middleware, co-authors can produce fragmented and replicated documents in a structured, consistent and efficient way. However, although it provides elaborate, concerted, secure and parameterizable cooperative editing support and mechanisms, this kind of application requires a suitable and efficient inter-application communication service to design and implement flexible, efficient, and adapted group awareness functionalities.
Thus, we developed a proof-of-concept implementation of a centralized version of a Distributed Event Management Service that allows communication to be established between cooperative applications, either in distributed or centralized mode. As an essential component for the development of cooperative environments, this Distributed Event Management Service allowed us to design an Adaptive Group Awareness Engine whose aim is to automatically deduce and adapt co-authors' cooperative environments to let them collaborate more closely. This user-associated inference engine captures the application events corresponding to authors' actions, and uses its knowledge and rule bases to detect co-authors' complementary or related work, specialists, beginners, etc. Its final goal is to propose modifications to the authors' working environments, application interfaces, and communication or interaction modes.
An Adaptive Cooperative Web Authoring Environment
Adaptive Hypermedia and Adaptive Web-Based Systems (2002-01-01) 2347: 535-538 , January 01, 2002
By Martínez-Enríquez, Ana María; Decouchant, Dominique; Morán, Alberto L.; Favela, Jesus
Using AllianceWeb, authors distributed around the world can cooperate to produce large documents in a consistent and concerted way. In this paper, we highlight the main aspects of the group awareness function that allows each author to disseminate his contribution to other co-authors, and to control the way in which other contributions are integrated into his environment. In order to support this function, essential to every groupware application, we have designed a self-adaptive cooperative interaction environment, parametrized by user preferences. Thus, the characteristics of an adaptive group awareness agent are defined.
Renaming Is Weaker Than Set Agreement But for Perfect Renaming: A Map of Sub-consensus Tasks
LATIN 2012: Theoretical Informatics (2012-01-01) 7256: 145-156 , January 01, 2012
By Castañeda, Armando; Imbs, Damien; Rajsbaum, Sergio; Raynal, Michel
In the wait-free shared memory model substantial attention has been devoted to understanding the relative power of sub-consensus tasks. Two important sub-consensus families of tasks have been identified: k-set agreement and M-renaming. When 2 ≤ k ≤ n − 1 and n ≤ M ≤ 2n − 2, these tasks are more powerful than read/write registers, but not strong enough to solve consensus for two processes.
This paper studies the power of renaming with respect to set agreement. It shows that, in a system of n processes, n-renaming is strictly stronger than (n − 1)-set agreement, but not stronger than (n − 2)-set agreement. Furthermore, (n + 1)-renaming cannot solve even (n − 1)-set agreement. As a consequence, there are cases where set agreement and renaming are incomparable when looking at their power to implement each other.
A Survey on Some Recent Advances in Shared Memory Models
Structural Information and Communication Complexity (2011-01-01) 6796: 17-28 , January 01, 2011
By Rajsbaum, Sergio; Raynal, Michel
Due to the advent of multicore machines, shared memory distributed computing models taking into account asynchrony and process crashes are becoming more and more important. This paper visits models for these systems and analyses their properties from a computability point of view. Among them, the base snapshot model and the iterated model are particularly investigated. The paper also visits several approaches that have been proposed to model failures (mainly the wait-free model and the adversary model) and takes a look at the BG simulation. The aim of this survey is to help the reader to better understand the power and limits of distributed computing shared memory models.
The Opinion Number of Set-Agreement
Principles of Distributed Systems (2014-01-01) 8878: 155-170 , January 01, 2014
By Fraigniaud, Pierre; Rajsbaum, Sergio; Roy, Matthieu; Travers, Corentin
This paper carries on the effort of bridging runtime verification with distributed computability, studying necessary conditions for monitoring failure-prone asynchronous distributed systems. It has been recently proved that there are correctness properties that require a large number of opinions to be monitored, an opinion being of the form true, false, perhaps, probably true, probably not, etc. The main outcome of this paper is to show that this large number of opinions is not an artifact induced by the existence of artificial constructions. Instead, monitoring an important class of properties, requiring processes to produce at most k different values, does require such a large number of opinions. Specifically, our main result is a proof that it is impossible to monitor k-set-agreement in an n-process system with fewer than min {2k,n} + 1 opinions. We also provide an algorithm to monitor k-set-agreement with min {2k,n} + 1 opinions, showing that the lower bound is tight.
Plasticity of Interaction Interfaces: The Study Case of a Collaborative Whiteboard
By Sánchez, Gabriela; Mendoza, Sonia; Decouchant, Dominique; Gallardo-López, Lizbeth; Rodríguez, José
The development of plastic user interfaces constitutes a promising research topic. They are intentionally designed to automatically adapt themselves to changes of their context of use defined in terms of the user (e.g., identity and role), the environment (e.g., location and available information/tools) and the platform. Some single-user systems already integrate some plasticity capabilities, but this topic remains quasi-unexplored in CSCW. This work is centered on prototyping a plastic collaborative whiteboard that adapts itself: 1) to the platform, as it can be launched from heterogeneous computer devices and 2) to each collaborator, when he is working from several devices. This application can split its interface between the users' devices in order to facilitate the interaction. Thus, the distributed interface components work in the same way as if they were co-located within a unique device. At any time, group awareness is maintained among collaborators.
GMTE: A Tool for Graph Transformation and Exact/Inexact Graph Matching
Graph-Based Representations in Pattern Recognition (2013-01-01) 7877: 71-80 , January 01, 2013
By Hannachi, Mohamed Amine; Bouassida Rodriguez, Ismael; Drira, Khalil; Pomares Hernandez, Saul Eduardo
Multi-labelled graphs are a powerful and versatile tool for modelling real applications in diverse domains such as communication networks, social networks, and autonomic systems, among others. Due to the dynamic nature of such systems, the structure of entities changes continuously over time, because new entities may join the system, some may leave it, or simply because the entities' relations change. This is where graph transformation plays an important role in modelling systems with dynamic and/or evolving configurations. Graph transformation consists of two main tasks: graph matching and graph rewriting. At present, few graph transformation tools support multi-labelled graphs. To our knowledge, there is no tool that supports inexact graph matching for the purpose of graph transformation. Moreover, the main problem with these tools lies in the limited expressiveness of the rewriting rules used, which reduces the range of application scenarios that can be modelled and/or increases the number of rewriting rules needed. In this paper, we present the tool GMTE - Graph Matching and Transformation Engine. GMTE handles directed and multi-labelled graphs. In addition to exact graph matching, GMTE handles inexact graph matching. The approach to rewriting rules used by GMTE combines Single PushOut rewriting rules with edNCE grammar. This combination enriches and extends the expressiveness of the graph rewriting rules. In addition, for graph matching, GMTE uses conditional rule schemata that support complex comparison functions over labels. To our knowledge, GMTE is the first graph transformation tool that offers such capabilities.
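To make the matching step concrete, the sketch below enumerates label-preserving embeddings of a small directed, labelled pattern graph into a host graph by brute force. It is only an illustration of exact graph matching under assumed dictionary/set representations, not GMTE's algorithm or rule language.

```python
from itertools import permutations

def find_embeddings(pattern_nodes, pattern_edges, host_nodes, host_edges):
    """Enumerate injective, label-preserving embeddings of a pattern into a host graph.

    pattern_nodes / host_nodes: dict mapping node id -> node label
    pattern_edges / host_edges: set of (source, target, edge label) triples
    Brute force only; fine for tiny patterns, exponential in general.
    """
    p = list(pattern_nodes)
    embeddings = []
    for image in permutations(host_nodes, len(p)):
        mapping = dict(zip(p, image))
        if any(pattern_nodes[v] != host_nodes[mapping[v]] for v in p):
            continue  # node labels must agree
        if all((mapping[s], mapping[t], lab) in host_edges for s, t, lab in pattern_edges):
            embeddings.append(mapping)  # every pattern edge has a matching host edge
    return embeddings

# Tiny hypothetical example: find an occurrence of client --uses--> server.
pattern_nodes = {"x": "client", "y": "server"}
pattern_edges = {("x", "y", "uses")}
host_nodes = {1: "client", 2: "server", 3: "client"}
host_edges = {(1, 2, "uses"), (3, 2, "pings")}
print(find_embeddings(pattern_nodes, pattern_edges, host_nodes, host_edges))
# -> [{'x': 1, 'y': 2}]
```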
The Universe of Symmetry Breaking Tasks
By Imbs, Damien; Rajsbaum, Sergio; Raynal, Michel
Processes in a concurrent system need to coordinate using a shared memory or a message-passing subsystem in order to solve agreement tasks such as, for example, consensus or set agreement. However, coordination is often needed to "break the symmetry" of processes that are initially in the same state, for example, to get exclusive access to a shared resource, to get distinct names or to elect a leader.
This paper introduces and studies the family of generalized symmetry breaking (GSB) tasks, that includes election, renaming and many other symmetry breaking tasks. Differently from agreement tasks, a GSB task is "inputless", in the sense that processes do not propose values; the task only specifies the symmetry breaking requirement, independently of the system's initial state (where processes differ only on their identifiers). Among various results characterizing the family of GSB tasks, it is shown that (non adaptive) perfect renaming is universal for all GSB tasks.
Erratum to: Ultra high energy photons and neutrinos with JEM-EUSO
Experimental Astronomy (2015-11-01) 40: 235-237 , November 01, 2015
By Adams, J. H., Jr.; Ahmad, S.; Albert, J. -N.; Allard, D.; Anchordoqui, L.; Andreev, V.; Anzalone, A.; Arai, Y.; Asano, K.; Ave Pernas, M.; Baragatti, P.; Barrillon, P.; Batsch, T.; Bayer, J.; Bechini, R.; Belenguer, T.; Bellotti, R.; Belov, K.; Berlind, A. A.; Bertaina, M.; Biermann, P. L.; Biktemerova, S.; Blaksley, C.; Blanc, N.; Błȩcki, J.; Blin-Bondil, S.; Blümer, J.; Bobik, P.; Bogomilov, M.; Bonamente, M.; Briggs, M. S.; Briz, S.; Bruno, A.; Cafagna, F.; Campana, D.; Capdevielle, J. -N.; Caruso, R.; Casolino, M.; Cassardo, C.; Castellinic, G.; Catalano, C.; Catalano, G.; Cellino, A.; Chikawa, M.; Christl, M. J.; Cline, D.; Connaughton, V.; Conti, L.; Cordero, G.; Crawford, H. J.; Cremonini, R.; Csorna, S.; Dagoret-Campagne, S.; de Castro, A. J.; De Donato, C.; de la Taille, C.; De Santis, C.; del Peral, L.; Dell'Oro, A.; De Simone, N.; Di Martino, M.; Distratis, G.; Dulucq, F.; Dupieux, M.; Ebersoldt, A.; Ebisuzaki, T.; Engel, R.; Falk, S.; Fang, K.; Fenu, F.; Fernández-Gómez, I.; Ferrarese, S.; Finco, D.; Flamini, M.; Fornaro, C.; Franceschi, A.; Fujimoto, J.; Fukushima, M.; Galeotti, P.; Garipov, G.; Geary, J.; Gelmini, G.; Giraudo, G.; Gonchar, M.; González Alvarado, C.; Gorodetzky, P.; Guarino, F.; Guzmán, A.; Hachisu, Y.; Harlov, B.; Haungs, A.; Hernández Carretero, J.; Higashide, K.; Ikeda, D.; Ikeda, H.; Inoue, N.; Inoue, S.; Insolia, A.; Isgrò, F.; Itow, Y.; Joven, E.; Judd, E. G.; Jung, A.; Kajino, F.; Kajino, T.; Kaneko, I.; Karadzhov, Y.; Karczmarczyk, J.; Karus, M.; Katahira, K.; Kawai, K.; Kawasaki, Y.; Keilhauer, B.; Khrenov, B. A.; Kim, J. -S.; Kim, S. -W.; Kim, S. -W.; Kleifges, M.; Klimov, P. A.; Kolev, D.; Kreykenbohm, I.; Kudela, K.; Kurihara, Y.; Kusenko, A.; Kuznetsov, E.; Lacombe, M.; Lachaud, C.; Lee, J.; Licandro, J.; Lim, H.; López, F.; Maccarone, M. C.; Mannheim, K.; Maravilla, D.; Marcelli, L.; Marini, A.; Martinez, O.; Masciantonio, G.; Mase, K.; Matev, R.; Medina-Tanco, G.; Mernik, T.; Miyamoto, H.; Miyazaki, Y.; Mizumoto, Y.; Modestino, G.; Monaco, A.; Monnier-Ragaigne, D.; Morales de los Ríos, J. A.; Moretto, C.; Morozenko, V. S.; Mot, B.; Murakami, T.; Murakami, M. Nagano; Nagata, M.; Nagataki, S.; Nakamura, T.; Napolitano, T.; Naumov, D.; Nava, R.; Neronov, A.; Nomoto, K.; Nonaka, T.; Ogawa, T.; Ogio, S.; Ohmori, H.; Olinto, A. V.; Orleański, P.; Osteria, G.; Panasyuk, M. I.; Parizot, E.; Park, I. H.; Park, H. W.; Pastircak, B.; Patzak, T.; Paul, T.; Pennypacker, C.; Perez Cano, S.; Peter, T.; Picozza, P.; Pierog, T.; Piotrowski, L. W.; Piraino, S.; Plebaniak, Z.; Pollini, A.; Prat, P.; Prévôt, G.; Prieto, H.; Putis, M.; Reardon, P.; Reyes, M.; Ricci, M.; Rodríguez, I.; Rodríguez Frías, M. D.; Ronga, F.; Roth, M.; Rothkaehl, H.; Roudil, G.; Rusinov, I.; Rybczyński, M.; Sabau, M. D.; Sáez-Cano, G.; Sagawa, H.; Saito, A.; Sakaki, N.; Sakata, M.; Salazar, H.; Sánchez, S.; Santangelo, A.; Santiago Crúz, L.; Sanz Palomino, M.; Saprykin, O.; Sarazin, F.; Sato, H.; Sato, M.; Schanz, T.; Schieler, H.; Scotti, V.; Segreto, A.; Selmane, S.; Semikoz, D.; Serra, M.; Sharakin, S.; Shibata, T.; Shimizu, H. M.; Shinozaki, K.; Shirahama, T.; Siemieniec-Oziȩbło, G.; Silva López, H. 
H.; Sledd, J.; Słomińska, K.; Sobey, A.; Sugiyama, T.; Supanitsky, D.; Suzuki, M.; Szabelska, B.; Szabelski, J.; Tajima, F.; Tajima, N.; Tajima, T.; Takahashi, Y.; Takami, H.; Takeda, M.; Takizawa, Y.; Tenzer, C.; Tibolla, O.; Tkachev, L.; Tokuno, H.; Tomida, T.; Tone, N.; Toscano, S.; Trillaud, F.; Tsenov, R.; Tsunesada, Y.; Tsuno, K.; Tymieniecka, T.; Uchihori, Y.; Unger, M.; Vaduvescu, O.; Valdés-Galicia, J. F.; Vallania, P.; Valore, L.; Vankova, G.; Vigorito, C.; Villaseñor, L.; von Ballmoos, P.; Wada, S.; Watanabe, J.; Watanabe, S.; Watts, J., Jr; Weber, M.; Weiler, T. J.; Wibig, T.; Wiencke, L.; Wille, M.; Wilms, J.; Włodarczyk, Z.; Yamamoto, T.; Yamamoto, Y.; Yang, J.; Yano, H.; Yashin, I. V.; Yonetoku, D.; Yoshida, K.; Yoshida, S.; Young, R.; Zotov, M. Yu.; Zuccaro Marchi, A.; The JEM-EUSO Collaboration
Ultra high energy photons and neutrinos with JEM-EUSO
Ultra high energy photons and neutrinos are carriers of very important astrophysical information. They may be produced at the sites of cosmic ray acceleration or during the propagation of the cosmic rays in the intergalactic medium. In contrast to charged cosmic rays, photon and neutrino arrival directions point to the production site because they are not deflected by the magnetic fields of the Galaxy or the intergalactic medium. In this work we study the characteristics of the longitudinal development of showers initiated by photons and neutrinos at the highest energies. These studies are relevant for the development of techniques for neutrino and photon identification by the JEM-EUSO telescope. In particular, we study the possibility of observing the multi-peak structure of very deep horizontal neutrino showers with JEM-EUSO. We also discuss the possibility to determine the flavor content of the incident neutrino flux by taking advantage of the different characteristics of the longitudinal profiles generated by different types of neutrinos. This is of great importance for the study of the fundamental properties of neutrinos at the highest energies. Regarding photons, we discuss the detectability of the cosmogenic component by JEM-EUSO and also estimate the expected upper limits on the photon fraction which can be obtained from future JEM-EUSO data for the case in which there are no photons in the samples.
Adaptive Resource Management in the PIÑAS Web Cooperative Environment
Advances in Web Intelligence (2004-01-01) 3034: 33-43 , January 01, 2004
By Mendoza, Sonia; Decouchant, Dominique; Martínez Enríquez, Ana María; Morán, Alberto L.
The PIÑAS Web cooperative environment allows distributed authors working together to produce shared documents in a consistent way. The management of shared resources in such an environment raises important technical issues due to the constraints imposed by Web technology. An elaborate group awareness function is provided that allows each author to notify other authors of his contributions, and to control the way in which other contributions are integrated into his/her environment. In order to support this function, essential to every groupware application, we designed a self-adaptive cooperative environment. We propose a new way of structuring Web documents so that they are considered as independent resource containers with their corresponding management context. This representation of information simplifies the design of mechanisms to share, modify and update documents and their resources in a consistent and controlled way. Scenarios are used to motivate the need for robust mechanisms for the management of shared Web documents and to illustrate how the extensions presented address these issues.
Environment and Financial Markets
Computational Science - ICCS 2004 (2004-01-01) 3039: 787-794 , January 01, 2004
By Szatzschneider, Wojciech; Jeanblanc, Monique; Kwiatkowska, Teresa
We propose to put the environment into financial markets. We explain how to do it, and why the financial approach is practically the only one suited to stopping and reversing environmental degradation. We concentrate our attention on deforestation, which is the largest environmental problem in the third world, and explain how to start the project and what kind of optimization problems should be solved to ensure the optimal use of environmental funds. In the final part we analyze the dynamical control for bounded processes and awards partially based on the mean of the underlying value.
An Inference Engine for Web Adaptive Cooperative Work
By Martínez-Enríquez, Ana María; Muhammad, Aslam; Decouchant, Dominique; Favela, Jesus
This paper describes the principle of an inference engine that analyzes useful information about actions performed by cooperating users, in order to propose modifications of the states and/or the presentation of the shared objects. Using cooperative groupware applications, a group of people may work on the same task while other users may pursue their individual goals using various other applications (cooperative or non-cooperative) with different roles. In such an environment, consistency, group awareness and security are of essential significance. The work of each user can be observed by capturing their actions and then analyzing them in relation to the history of previous actions. The proposed Adaptive Inference Engine (AIE) behaves as a consumer of application events which analyzes this information on the basis of some predefined rules and then proposes some actions that may be applied within the cooperative environment. In all cases, the user controls the execution of the proposed group awareness actions in his working environment. A prototype of the AIE has been developed using the Amaya Web Authoring Toolkit and the PIÑAS collaborative authoring middleware.
Before Getting There: Potential and Actual Collaboration
Groupware: Design, Implementation, and Use (2002-01-01) 2440: 147-167 , January 01, 2002
By Morán, Alberto L.; Favela, Jesus; Martínez-Enríquez, M.; Decouchant, Dominique
In this paper we introduce the concepts of Actual and Potential Collaboration Spaces. The former applies to the space where collaborative activities are performed, while the second relates to the initial space where opportunities for collaboration are identified and an initial interaction is established. We present a characterization of Potential Collaboration Spaces featuring awareness elements for the potential of collaboration and mechanisms to gather and present them, as well as mechanisms to establish an initial interaction and associated GUI elements. We argue that by making this distinction explicit, and by characterizing Potential Collaboration Spaces, designers of groupware can better identify the technical requirements of their systems and thus provide solutions that more appropriately address their users' concerns. We illustrate this concept with the design of an application that supports Potential Collaboration Spaces for the PIÑAS web-based coauthoring middleware.
Extrinsic Evaluation on Automatic Summarization Tasks: Testing Affixality Measurements for Statistical Word Stemming
Advances in Computational Intelligence (2013-01-01): 7630 , January 01, 2013
By Méndez-Cruz, Carlos-Francisco; Torres-Moreno, Juan-Manuel; Medina-Urrea, Alfonso; Sierra, Gerardo
This paper presents some evaluation experiments on a statistical stemming algorithm based on morphological segmentation. The method estimates the affixality of word fragments. It combines three indexes associated with possible cuts. This unsupervised and language-independent method has been easily adapted to generate an effective morphological stemmer. The stemmer has been coupled with Cortex, an automatic summarization system, in order to generate summaries in English, Spanish and French. Summaries have been evaluated using ROUGE. The results of this extrinsic evaluation show that our stemming algorithm outperforms several classical systems.
Computing in the Presence of Concurrent Solo Executions
By Herlihy, Maurice; Rajsbaum, Sergio; Raynal, Michel; Stainer, Julien
In a wait-free model any number of processes may crash. A process runs solo when it computes its local output without receiving any information from other processes, either because they crashed or they are too slow. While in wait-free shared-memory models at most one process may run solo in an execution, any number of processes may have to run solo in an asynchronous wait-free message-passing model.
This paper is on the computability power of models in which several processes may concurrently run solo. It first introduces a family of round-based wait-free models, called the d-solo models, 1 ≤ d ≤ n, where up to d processes may run solo. The paper gives then a characterization of the colorless tasks that can be solved in each d-solo model. It also introduces the (d,ε)-solo approximate agreement task, which generalizes ε-approximate agreement, and proves that (d,ε)-solo approximate agreement can be solved in the d-solo model, but cannot be solved in the (d + 1)-solo model. The paper studies also the relation linking d-set agreement and (d,ε)-solo approximate agreement in asynchronous wait-free message-passing systems.
These results establish for the first time a hierarchy of wait-free models that, while weaker than the basic read/write model, are nevertheless strong enough to solve non-trivial tasks.
Automatically Adjusting Concurrency to the Level of Synchrony
Distributed Computing (2014-01-01) 8784: 1-15 , January 01, 2014
By Fraigniaud, Pierre; Gafni, Eli; Rajsbaum, Sergio; Roy, Matthieu
The state machine approach is a well-known technique for building distributed services requiring high performance and high availability, by replicating servers, and by coordinating client interactions with server replicas using consensus. Indulgent consensus algorithms exist for realistic eventually partially synchronous models, that never violate safety and guarantee liveness once the system becomes synchronous. Unavoidably, these algorithms may never terminate, even when no processor crashes, if the system never becomes synchronous.
This paper proposes a mechanism similar to state machine replication, called RC-simulation, that can always make progress, even if the system is never synchronous. Using RC-simulation, the quality of the service will adjust to the current level of asynchrony of the network — degrading when the system is very asynchronous, and improving when the system becomes more synchronous. RC-simulation generalizes the state machine approach in the following sense: when the system is asynchronous, the system behaves as if k + 1 threads were running concurrently, where k is a function of the asynchrony.
In order to illustrate how RC-simulation can be used, we describe a long-lived renaming implementation. By reducing the concurrency down to the asynchrony of the system, RC-simulation makes it possible to obtain a renaming quality that adapts linearly to the asynchrony.
RANSAC-GP: Dealing with Outliers in Symbolic Regression with Genetic Programming
Genetic Programming (2017-01-01): 10196 , January 01, 2017
By López, Uriel; Trujillo, Leonardo; Martinez, Yuliana; Legrand, Pierrick; Naredo, Enrique; Silva, Sara
Genetic programming (GP) has been shown to be a powerful tool for automatic modeling and program induction. It is often used to solve difficult symbolic regression tasks, with many examples in real-world domains. However, the robustness of GP-based approaches has not been substantially studied. In particular, the present work deals with the issue of outliers, data in the training set that represent severe errors in the measuring process. In general, a datum is considered an outlier when it sharply deviates from the true behavior of the system of interest. GP practitioners know that such data points usually bias the search and produce inaccurate models. Therefore, this work presents a hybrid methodology based on the RAndom SAmpling Consensus (RANSAC) algorithm and GP, which we call RANSAC-GP. RANSAC is an approach to deal with outliers in parameter estimation problems, widely used in computer vision and related fields. On the other hand, this work presents the first application of RANSAC to symbolic regression with GP, with impressive results. The proposed algorithm is able to deal with extreme amounts of contamination in the training set, evolving highly accurate models even when the amount of outliers reaches 90%.
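The abstract does not spell out the hybrid algorithm, but the RANSAC idea it builds on is easy to sketch: repeatedly fit a model to a small random subsample, count how many of the remaining points the model explains within a tolerance, and keep the model with the largest consensus set. The toy Python sketch below uses an ordinary least-squares line in place of an evolved GP expression; the data, tolerance and iteration count are illustrative assumptions.

```python
import random

def fit_line(pts):
    """Ordinary least squares for y = a*x + b on a list of (x, y) pairs."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    den = n * sxx - sx * sx
    if den == 0:
        return None  # degenerate sample (all x equal)
    a = (n * sxy - sx * sy) / den
    return a, (sy - a * sx) / n

def ransac(points, n_samples=2, n_iters=300, tol=0.5):
    """Keep the model, fitted on random subsamples, that gathers the largest inlier set."""
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        model = fit_line(random.sample(points, n_samples))
        if model is None:
            continue
        a, b = model
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = fit_line(inliers), inliers
    return best_model, best_inliers

# Hypothetical data: y = 2x + 1 with a block of gross outliers mixed in.
pts = [(x, 2 * x + 1) for x in range(20)] + [(x, 50 - 3 * x) for x in range(6)]
model, inliers = ransac(pts)
print(model, len(inliers))  # close to (2.0, 1.0), with about 20 inliers
```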
On the Number of Opinions Needed for Fault-Tolerant Run-Time Monitoring in Distributed Systems
Runtime Verification (2014-01-01) 8734: 92-107 , January 01, 2014
By Fraigniaud, Pierre; Rajsbaum, Sergio; Travers, Corentin
Decentralized runtime monitoring involves a set of monitors observing the behavior of system executions with respect to some correctness property. It is generally assumed that, as soon as a violation of the property is revealed by any of the monitors at runtime, some recovery code can be executed for bringing the system back to a legal state. This implicitly assumes that each monitor produces a binary opinion, true or false, and that the recovery code is launched as soon as one of these opinions is equal to false. In this paper, we formally prove that, in a failure-prone asynchronous computing model, there are correctness properties for which there is no such decentralized monitoring. We show that there exist some properties which, in order to be monitored in a wait-free decentralized manner, inherently require that the monitors produce a number of opinions larger than two. More specifically, our main result is that, for every k, 1 ≤ k ≤ n, there exists a property that requires at least k opinions to be monitored by n monitors. We also present a corresponding distributed monitor using at most k + 1 opinions, showing that our lower bound is nearly tight.
Local Search is Underused in Genetic Programming
Genetic Programming Theory and Practice XIV (2018-01-01): 119-137 , January 01, 2018
By Trujillo, Leonardo; Z-Flores, Emigdio; Juárez-Smith, Perla S.; Legrand, Pierrick; Silva, Sara; Castelli, Mauro; Vanneschi, Leonardo; Schütze, Oliver; Muñoz, Luis
There are two important limitations of standard tree-based genetic programming (GP). First, GP tends to evolve unnecessarily large programs, what is referred to as bloat. Second, GP uses inefficient search operators that focus on modifying program syntax. The first problem has been studied extensively, with many works proposing bloat control methods. Regarding the second problem, one approach is to use alternative search operators, for instance geometric semantic operators, to improve convergence. In this work, our goal is to experimentally show that both problems can be effectively addressed by incorporating a local search optimizer as an additional search operator. Using real-world problems, we show that this rather simple strategy can improve the convergence and performance of tree-based GP, while also reducing program size. Given these results, a question arises: Why are local search strategies so uncommon in GP? A small survey of popular GP libraries suggests to us that local search is underused in GP systems. We conclude by outlining plausible answers for this question and highlighting future work.
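One minimal way to picture the hybridization the chapter argues for (a sketch under assumptions, not the authors' operator) is to treat the numeric constants of a candidate expression as free parameters and refine them with a cheap local optimizer between the usual GP variation steps. The Python sketch below does this for a fixed expression shape using greedy hill climbing.

```python
import random

def rmse(params, data, expr):
    """Root-mean-square error of the candidate expression on (x, y) pairs."""
    return (sum((expr(x, params) - y) ** 2 for x, y in data) / len(data)) ** 0.5

def local_search(params, data, expr, step=0.5, iters=300):
    """Greedy hill climbing over the numeric constants of a candidate GP tree."""
    best, best_err = list(params), rmse(params, data, expr)
    for _ in range(iters):
        trial = [p + random.uniform(-step, step) for p in best]
        err = rmse(trial, data, expr)
        if err < best_err:
            best, best_err = trial, err  # keep the perturbation only if it helps
    return best, best_err

# Hypothetical candidate "tree": c0*x^2 + c1*x, target is 3x^2 - 2x plus a little noise.
expr = lambda x, c: c[0] * x ** 2 + c[1] * x
data = [(x / 10, 3 * (x / 10) ** 2 - 2 * (x / 10) + random.gauss(0, 0.01))
        for x in range(-20, 21)]
params, err = local_search([1.0, 1.0], data, expr)
print(params, err)  # constants drift toward (3, -2); the error drops accordingly
```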
Tree Species Classification Based on 3D Bark Texture Analysis
Image and Video Technology (2014-01-01) 8333: 279-289 , January 01, 2014
By Othmani, Ahlem; Piboule, Alexandre; Dalmau, Oscar; Lomenie, Nicolas; Mokrani, Said; Voon, Lew Fock Chong Lew Yan
The Terrestrial Laser Scanning (TLS) technique is today widely used in ground plots to acquire 3D point clouds from which forest inventory attributes are calculated. In the case of mixed plantings, where the 3D point clouds contain data from several different tree species, it is important to be able to automatically recognize the tree species in order to analyze the data of each of the species separately. Although automatic tree species recognition from TLS data is an important problem, it has received very little attention from the scientific community. In this paper we propose a method for classifying five different tree species using TLS data. Our method is based on the analysis of the 3D geometric texture of the bark in order to compute roughness measures and shape characteristics that are fed as input to a Random Forest classifier to classify the tree species. The method has been evaluated on a test set composed of 265 samples (53 samples of each of the 5 species) and the results obtained are very encouraging.
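Only the last stage of that pipeline, the Random Forest over precomputed descriptors, is generic enough to sketch here; the bark roughness and shape features themselves are the paper's contribution and are replaced below by a placeholder matrix. The feature dimensions and hyperparameters are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder feature matrix: one row per bark sample, columns standing in for
# roughness/shape descriptors extracted beforehand from the TLS point cloud.
rng = np.random.default_rng(0)
X = rng.normal(size=(265, 12))      # 265 samples, 12 hypothetical descriptors
y = np.repeat(np.arange(5), 53)     # 5 species with 53 samples each, as in the paper

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
# Meaningless on random features, but this is the call structure a real
# descriptor matrix would be plugged into.
```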
Potentialities of Chorems as Visual Summaries of Geographic Databases Contents
Advances in Visual Information Systems (2007-01-01) 4781: 537-548 , January 01, 2007
By Fatto, Vincenzo; Laurini, Robert; Lopez, Karla; Loreto, Rosalva; Milleret-Raffort, Françoise; Sebillo, Monica; Sol-Martinez, David; Vitiello, Giuliana
Chorems are schematized representations of territories, and so they can represent a good visual summary of spatial databases. Indeed, for spatial decision-makers it is more important to identify and map problems than facts. Until now, chorems were made manually by geographers based on their own knowledge of the territory. So, an international project was launched in order to automatically discover spatial patterns and lay out chorems starting from spatial databases. After examining some manually made chorems, some guidelines were identified. Then the architecture of a prototype system is presented, based on a canonical database structure, a subsystem for spatial pattern discovery based on spatial data mining, a subsystem for chorem layout, and a specialized language to represent chorems.
A Comparison between Hardware Accelerators for the Modified Tate Pairing over $\mathbb{F}_{2^m}$ and $\mathbb{F}_{3^m}$
Pairing-Based Cryptography – Pairing 2008 (2008-01-01) 5209: 297-315 , January 01, 2008
By Beuchat, Jean-Luc; Brisebarre, Nicolas; Detrey, Jérémie; Okamoto, Eiji; Rodríguez-Henríquez, Francisco
In this article we propose a study of the modified Tate pairing in characteristics two and three. Starting from the ηT pairing introduced by Barreto et al. [1], we detail various algorithmic improvements in the case of characteristic two. As far as characteristic three is concerned, we refer to the survey by Beuchat et al. [5]. We then show how to get back to the modified Tate pairing at almost no extra cost. Finally, we explore the trade-offs involved in the hardware implementation of this pairing for both characteristics two and three. From our experiments, characteristic three appears to have a slight advantage over characteristic two.
Biologically-Inspired Digital Architecture for a Cortical Model of Orientation Selectivity
Artificial Neural Networks - ICANN 2008 (2008-01-01) 5164: 188-197 , January 01, 2008
By Torres-Huitzil, Cesar; Girau, Bernard; Arias-Estrada, Miguel
This paper presents a biologically inspired modular hardware implementation of a cortical model of orientation selectivity to visual stimuli in the primary visual cortex, targeted to a Field Programmable Gate Array (FPGA) device. The architecture mimics the functionality and organization of neurons through spatial Gabor-like filtering and the so-called cortical hypercolumnar organization. A systolic array and a suitable image addressing scheme are used to partially overcome the von Neumann bottleneck of monolithic memory organization in conventional microprocessor-based systems by processing small and local amounts of sensory information (image tiles) in an incremental way. A real-time FPGA implementation is presented for 8 different orientations, and aspects such as flexibility, scalability, performance and precision are discussed to show the plausibility of implementing biologically-inspired processing for early visual perception in digital devices.
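A software reference of the oriented filter bank that such a model is built around (not the FPGA architecture itself) can be sketched as follows, with 8 orientations as in the paper; the kernel size and parameters are illustrative.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(size=15, sigma=3.0, wavelength=6.0, theta=0.0, gamma=0.5):
    """Real part of a 2D Gabor filter oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def orientation_responses(image, bank):
    """Naive 'valid' correlation of the image with every kernel in the bank."""
    k = bank[0].shape[0]
    patches = sliding_window_view(image, (k, k))
    return [np.einsum('ijkl,kl->ij', patches, ker) for ker in bank]

# Bank of 8 orientations, mirroring the hypercolumnar organization described above.
bank = [gabor_kernel(theta=k * np.pi / 8) for k in range(8)]
img = np.random.rand(64, 64)                 # placeholder input image
responses = orientation_responses(img, bank)
print(len(responses), responses[0].shape)    # 8 response maps of shape (50, 50)
```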
Software Implementation of Binary Elliptic Curves: Impact of the Carry-Less Multiplier on Scalar Multiplication
Cryptographic Hardware and Embedded Systems – CHES 2011 (2011-01-01) 6917: 108-123 , January 01, 2011
By Taverne, Jonathan; Faz-Hernández, Armando; Aranha, Diego F.; Rodríguez-Henríquez, Francisco; Hankerson, Darrel; López, Julio
The availability of a new carry-less multiplication instruction in the latest Intel desktop processors significantly accelerates multiplication in binary fields and hence presents the opportunity for reevaluating algorithms for binary field arithmetic and scalar multiplication over elliptic curves. We describe how to best employ this instruction in field multiplication and the effect on performance of doubling and halving operations. Alternate strategies for implementing inversion and half-trace are examined to restore most of their competitiveness relative to the new multiplier. These improvements in field arithmetic are complemented by a study on serial and parallel approaches for Koblitz and random curves, where parallelization strategies are implemented and compared. The contributions are illustrated with experimental results improving the state-of-the-art performance of halving and doubling-based scalar multiplication on NIST curves at the 112- and 192-bit security levels, and a new speed record for side-channel resistant scalar multiplication in a random curve at the 128-bit security level.
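The instruction the paper exploits produces the carry-less (GF(2)[x]) product of its operands; a bit-level software model of that primitive, together with the reduction step that turns it into GF(2^m) multiplication, is sketched below. This is only a reference model for checking results, not the optimized field arithmetic or curve formulas studied in the paper.

```python
def clmul(a: int, b: int) -> int:
    """Carry-less multiplication: product of a and b viewed as polynomials over GF(2)."""
    result = 0
    while b:
        if b & 1:
            result ^= a      # addition in GF(2)[x] is XOR; no carries propagate
        a <<= 1
        b >>= 1
    return result

def gf2m_mul(a: int, b: int, modulus: int, m: int) -> int:
    """Multiplication in GF(2^m) = GF(2)[x]/(modulus): carry-less multiply, then reduce."""
    p = clmul(a, b)
    for bit in range(2 * m - 2, m - 1, -1):   # clear bits 2m-2 .. m, high to low
        if p & (1 << bit):
            p ^= modulus << (bit - m)
    return p

# Small checks in GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1 (0x11B):
print(hex(clmul(0b1011, 0b110)))            # 0x3a: (x^3 + x + 1)(x^2 + x)
print(hex(gf2m_mul(0x53, 0xCA, 0x11B, 8)))  # 0x1: 0x53 and 0xCA are inverses in this field
```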
A Fixpoint Semantics of Event Systems With and Without Fairness Assumptions
Integrated Formal Methods (2005-01-01) 3771: 327-346 , January 01, 2005
By Barradas, Héctor Ruíz; Bert, Didier
We present a fixpoint semantics of event systems. The semantics is presented in a general framework without concerns of fairness. Soundness and completeness of rules for deriving leads-to properties are proved in this general framework. The general framework is then instantiated to minimal progress and weak fairness assumptions, and similar results are obtained. We show the power of these results by deriving sufficient conditions for leads-to under minimal progress, proving soundness of proof obligations without reasoning over state traces.
Empirical Evaluation of Collaborative Support for Distributed Pair Programming
By Favela, Jesus; Natsu, Hiroshi; Pérez, Cynthia; Robles, Omar; Morán, Alberto L.; Romero, Raul; Martínez-Enríquez, Ana M.; Decouchant, Dominique
Pair programming is an Extreme Programming (XP) practice where two programmers work on a single computer to produce an artifact. Empirical evaluations have provided evidence that this technique results in higher quality code in half the time it would take an individual programmer. Distributed pair programming could facilitate opportunistic pair programming sessions with colleagues working in remote sites. In this paper we present the preliminary results of the empirical evaluation of the COPPER collaborative editor, developed explicitly to support pair programming. The evaluation was performed on three different conditions: pairs working collocated on a single computer; distributed pairs working in application sharing mode; and distributed pairs using collaboration aware facilities. In all three cases the subjects used the COPPER collaborative editor. The results support our hypothesis that distributed pairs could find the same amount of errors as their collocated counterparts. However, no evidence was found that the pairs that used collaborative awareness services had better code comprehension, as we had also hypothesized.
An artificial life approach to dense stereo disparity
Artificial Life and Robotics (2009-03-01) 13: 585-596 , March 01, 2009
By Olague, Gustavo; Pérez, Cynthia B.; Fernández, Francisco; Lutton, Evelyne
This article presents an adaptive approach to improving the infection algorithm that we have used to solve the dense stereo matching problem. The algorithm presented here incorporates two different epidemic automata along a single execution of the infection algorithm. The new algorithm attempts to provide a general behavior of guessing the best correspondence between a pair of images. Our aim is to provide a new strategy inspired by evolutionary computation, which combines the behaviors of both automata into a single correspondence problem. The new algorithm will decide which automata will be used based on the transmission of information and mutation, as well as the attributes, texture, and geometry, of the input images. This article gives details about how the rules used in the infection algorithm are coded. Finally, we show experiments with a real stereo pair, as well as with a standard test bed, to show how the infection algorithm works.
Hardware Accelerator for the Tate Pairing in Characteristic Three Based on Karatsuba-Ofman Multipliers
Cryptographic Hardware and Embedded Systems - CHES 2009 (2009-01-01) 5747: 225-239 , January 01, 2009
By Beuchat, Jean-Luc; Detrey, Jérémie; Estibals, Nicolas; Okamoto, Eiji; Rodríguez-Henríquez, Francisco Show all (5)
This paper is devoted to the design of fast parallel accelerators for the cryptographic Tate pairing in characteristic three over supersingular elliptic curves. We propose here a novel hardware implementation of Miller's loop based on a pipelined Karatsuba-Ofman multiplier. Thanks to a careful selection of algorithms for computing the tower field arithmetic associated to the Tate pairing, we manage to keep the pipeline busy. We also describe the strategies we considered to design our parallel multiplier. They are included in a VHDL code generator allowing for the exploration of a wide range of operators. Then, we outline the architecture of a coprocessor for the Tate pairing over $\mathbb{F}_{3^m}$ . However, a final exponentiation is still needed to obtain a unique value, which is desirable in most of the cryptographic protocols. We supplement our pairing accelerator with a coprocessor responsible for this task. An improved exponentiation algorithm allows us to save hardware resources.
According to our place-and-route results on Xilinx FPGAs, our design improves both the computation time and the area-time trade-off compared to previously published coprocessors.
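As an aside, the Karatsuba split that underlies the multipliers discussed above can be illustrated in a few lines of software. The sketch below is a minimal integer version of the idea (three recursive products instead of four); it does not reproduce the paper's pipelined hardware design or its characteristic-three tower-field arithmetic, and the recursion threshold and names are illustrative.

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply two non-negative integers with the Karatsuba split.

    Three recursive multiplications replace the four of the schoolbook method:
    x*y = z2*2^(2m) + z1*2^m + z0, with z1 = (hi1+lo1)(hi2+lo2) - z2 - z0.
    """
    if x < 16 or y < 16:          # small operands: fall back to schoolbook
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    hi1, lo1 = x >> m, x & ((1 << m) - 1)
    hi2, lo2 = y >> m, y & ((1 << m) - 1)
    z2 = karatsuba(hi1, hi2)
    z0 = karatsuba(lo1, lo2)
    z1 = karatsuba(hi1 + lo1, hi2 + lo2) - z2 - z0
    return (z2 << (2 * m)) + (z1 << m) + z0

assert karatsuba(123456789, 987654321) == 123456789 * 987654321
```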
Belief Merging in Dynamic Logic of Propositional Assignments
Foundations of Information and Knowledge Systems (2014-01-01) 8367: 381-398 , January 01, 2014
By Herzig, Andreas; Pozos-Parra, Pilar; Schwarzentruber, François
We study syntactical merging operations that are defined semantically by means of the Hamming distance between valuations; more precisely, we investigate the Σ-semantics, Gmax-semantics and max-semantics. We work with a logical language containing merging operators as connectives, as opposed to the metalanguage operations of the literature. We capture these merging operators as programs of Dynamic Logic of Propositional Assignments DL-PA. This provides a syntactical characterisation of the three semantically defined merging operators, and a proof system for DL-PA therefore also provides a proof system for these merging operators. We explain how PSPACE membership of the model checking and satisfiability problem of star-free DL-PA can be extended to the variant of DL-PA where symbolic disjunctions that are parametrised by sets (that are not defined as abbreviations, but are proper connectives) are built into the language. As our merging operators can be polynomially embedded into this variant of DL-PA, we obtain that both the model checking and the satisfiability problem of a formula containing possibly nested merging operators is in PSPACE.
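For readers unfamiliar with distance-based merging, the following sketch shows the underlying semantic definitions on a toy example: Σ-merging aggregates Hamming distances with a sum, and max-merging with a maximum (the Gmax refinement is omitted). This is the metalanguage-level operation, not the DL-PA encoding studied in the paper, and all function and variable names here are my own.

```python
from itertools import product

def hamming(v1, v2):
    """Number of propositional variables on which two valuations differ."""
    return sum(v1[p] != v2[p] for p in v1)

def merge(bases, constraint_models, aggregate=sum):
    """Distance-based merging of a profile of belief bases.

    bases: list of belief bases, each a collection of models (dicts var -> bool).
    constraint_models: models of the integrity constraint (candidate outcomes).
    aggregate: sum gives the Sigma-semantics, max gives the max-semantics.
    Returns the constraint models with minimal aggregated distance to the profile.
    """
    def score(w):
        return aggregate(min(hamming(w, v) for v in base) for base in bases)
    best = min(score(w) for w in constraint_models)
    return [w for w in constraint_models if score(w) == best]

# Toy example over variables p, q: two agents disagree on p.
vars_ = ["p", "q"]
all_models = [dict(zip(vars_, bits)) for bits in product([False, True], repeat=2)]
base1 = [{"p": True,  "q": True}]
base2 = [{"p": False, "q": True}]
print(merge([base1, base2], all_models))                  # Sigma-merging
print(merge([base1, base2], all_models, aggregate=max))   # max-merging
```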
Semantic Network Adaptation Based on QoS Pattern Recognition for Multimedia Streams
Signal Processing, Image Processing and Pattern Recognition (2009-01-01) 61: 267-274 , January 01, 2009
By Exposito, Ernesto; Gineste, Mathieu; Lamolle, Myriam; Gomez, Jorge Show all (4)
This article proposes an ontology based pattern recognition methodology to compute and represent common QoS properties of the Application Data Units (ADU) of multimedia streams. The use of this ontology by mechanisms located at different layers of the communication architecture will allow implementing fine per-packet self-optimization of communication services regarding the actual application requirements. A case study showing how this methodology is used by error control mechanisms in the context of wireless networks is presented in order to demonstrate the feasibility and advantages of this approach.
Ground-based tests of JEM-EUSO components at the Telescope Array site, "EUSO-TA"
We are conducting tests of optical and electronic components of JEM-EUSO at the Telescope Array site in Utah with a ground-based "EUSO-TA" detector. The tests will include an engineering validation of the detector, cross-calibration of EUSO-TA with the TA fluorescence detector, and observations of air shower events. In addition, the proximity of the TA's Electron Light Source will allow for convenient use of this calibration device. In this paper, we report initial results obtained with the EUSO-TA telescope.
Erratum to: Performances of JEM-EUSO: angular reconstruction
Performances of JEM-EUSO: angular reconstruction
Mounted on the International Space Station (ISS), the Extreme Universe Space Observatory, on-board the Japanese Experimental Module (JEM-EUSO), relies on the well-established fluorescence technique to observe Extensive Air Showers (EAS) developing in the earth's atmosphere. Focusing on the detection of Ultra High Energy Cosmic Rays (UHECR) in the decade of 10^20 eV, JEM-EUSO will face new challenges by applying this technique from space. The EUSO Simulation and Analysis Framework (ESAF) has been developed in this context to provide a full end-to-end simulation framework and assess the overall performance of the detector. Within ESAF, angular reconstruction can be separated into two conceptually different steps. The first step is pattern recognition, or filtering, of the signal to separate it from the background. The second step is to perform different types of fitting in order to search for the relevant geometrical parameters that best describe the previously selected signal. In this paper, we discuss some of the techniques we have implemented in ESAF to perform the geometrical reconstruction of EAS seen by JEM-EUSO. We also conduct thorough tests to assess the performances of these techniques in conditions which are relevant to the scope of the JEM-EUSO mission. We conclude by showing the expected angular resolution in the energy range that JEM-EUSO is expected to observe.
Performances of JEM-EUSO: energy and Xmax reconstruction
The Extreme Universe Space Observatory (EUSO) on-board the Japanese Experimental Module (JEM) of the International Space Station aims at the detection of ultra high energy cosmic rays from space. The mission consists of a UV telescope which will detect the fluorescence light emitted by cosmic ray showers in the atmosphere. The mission, currently developed by a large international collaboration, is designed to be launched within this decade. In this article, we present the reconstruction of the energy of the observed events and we also address the Xmax reconstruction. After discussing the algorithms developed for the energy and Xmax reconstruction, we present several estimates of the energy resolution as a function of the incident angle and energy of the event. Similarly, estimates of the Xmax resolution for various conditions are presented.
Calibration aspects of the JEM-EUSO mission
Experimental Astronomy (2015-11-01) 40: 91-116 , November 01, 2015
The JEM-EUSO telescope will be, after calibration, a very accurate instrument which yields the number of received photons from the number of measured photo-electrons. The project is in phase A (demonstration of the concept) and already includes operating prototype instruments, i.e. many parts of the instrument have been constructed and tested. Calibration is a crucial part of the instrument and its use. The focal surface (FS) of the JEM-EUSO telescope will consist of about 5000 photo-multiplier tubes (PMTs), which have to be well calibrated to reach the required accuracy in reconstructing the air-shower parameters. The optics system consists of 3 plastic Fresnel (double-sided) lenses of 2.5 m diameter. The aim of the calibration system is to measure the efficiencies (transmittances) of the optics and the absolute efficiencies of the entire focal surface detector. The system consists of 3 main components: (i) Pre-flight calibration devices on the ground, where the efficiency and gain of the PMTs will be measured absolutely and the transmittance of the optics will be measured as well. (ii) An on-board relative calibration system applying two methods: a) operating during the day, when the JEM-EUSO lid will be closed, with small light sources on board; b) operating during the night, together with data taking, by monitoring the background rate over identical sites. (iii) Absolute in-flight calibration, again applying two methods: a) measurement of the moonlight reflected on high-altitude, high-albedo clouds; b) measurements of calibrated flashes and tracks produced by the Global Light System (GLS). Some details of each calibration method will be described in this paper.
The JEM-EUSO mission: An introduction
Experimental Astronomy (2015-11-01) 40: 3-17 , November 01, 2015
By Adams, J. H., Jr.; Ahmad, S.; Albert, J.-N.; Allard, D.; Anchordoqui, L.; Andreev, V.; Anzalone, A.; Arai, Y.; Asano, K.; Ave Pernas, M.; Baragatti, P.; Barrillon, P.; Batsch, T.; Bayer, J.; Bechini, R.; Belenguer, T.; Bellotti, R.; Belov, K.; Berlind, A. A.; Bertaina, M.; Biermann, P. L.; Biktemerova, S.; Blaksley, C.; Blanc, N.; Błȩcki, J.; Blin-Bondil, S.; Blümer, J.; Bobik, P.; Bogomilov, M.; Bonamente, M.; Briggs, M. S.; Briz, S.; Bruno, A.; Cafagna, F.; Campana, D.; Capdevielle, J-N.; Caruso, R.; Casolino, M.; Cassardo, C.; Castellini, G.; Catalano, C.; Catalano, O.; Cellino, A.; Chikawa, M.; Christl, M. J.; Cline, D.; Connaughton, V.; Conti, L.; Cordero, G.; Crawford, H. J.; Cremonini, R.; Csorna, S.; Dagoret-Campagne, S.; Castro, A. J.; Donato, C.; Taille, C.; Santis, C.; Peral, L.; Dell'Oro, A.; Simone, N.; Martino, M.; Distratis, G.; Dulucq, F.; Dupieux, M.; Ebersoldt, A.; Ebisuzaki, T.; Engel, R.; Falk, S.; Fang, K.; Fenu, F.; Fernández-Gómez, I.; Ferrarese, S.; Finco, D.; Flamini, M.; Fornaro, C.; Franceschi, A.; Fujimoto, J.; Fukushima, M.; Galeotti, P.; Garipov, G.; Geary, J.; Gelmini, G.; Giraudo, G.; Gonchar, M.; González Alvarado, C.; Gorodetzky, P.; Guarino, F.; Guzmán, A.; Hachisu, Y.; Harlov, B.; Haungs, A.; Hernández Carretero, J.; Higashide, K.; Ikeda, D.; Ikeda, H.; Inoue, N.; Inoue, S.; Insolia, A.; Isgrò, F.; Itow, Y.; Joven, E.; Judd, E. G.; Jung, A.; Kajino, F.; Kajino, T.; Kaneko, I.; Karadzhov, Y.; Karczmarczyk, J.; Karus, M.; Katahira, K.; Kawai, K.; Kawasaki, Y.; Keilhauer, B.; Khrenov, B. A.; Kim, Jeong-Sook; Kim, Soon-Wook; Kim, Sug-Whan; Kleifges, M.; Klimov, P. A.; Kolev, D.; Kreykenbohm, I.; Kudela, K.; Kurihara, Y.; Kusenko, A.; Kuznetsov, E.; Lacombe, M.; Lachaud, C.; Lee, J.; Licandro, J.; Lim, H.; López, F.; Maccarone, M. C.; Mannheim, K.; Maravilla, D.; Marcelli, L.; Marini, A.; Martinez, O.; Masciantonio, G.; Mase, K.; Matev, R.; Medina-Tanco, G.; Mernik, T.; Miyamoto, H.; Miyazaki, Y.; Mizumoto, Y.; Modestino, G.; Monaco, A.; Monnier-Ragaigne, D.; Morales de los Ríos, J. A.; Moretto, C.; Morozenko, V. S.; Mot, B.; Murakami, T.; Nagano, M.; Nagata, M.; Nagataki, S.; Nakamura, T.; Napolitano, T.; Naumov, D.; Nava, R.; Neronov, A.; Nomoto, K.; Nonaka, T.; Ogawa, T.; Ogio, S.; Ohmori, H.; Olinto, A. V.; Orleański, P.; Osteria, G.; Panasyuk, M. I.; Parizot, E.; Park, I. H.; Park, H. W.; Pastircak, B.; Patzak, T.; Paul, T.; Pennypacker, C.; Perez Cano, S.; Peter, T.; Picozza, P.; Pierog, T.; Piotrowski, L. W.; Piraino, S.; Plebaniak, Z.; Pollini, A.; Prat, P.; Prévôt, G.; Prieto, H.; Putis, M.; Reardon, P.; Reyes, M.; Ricci, M.; Rodríguez, I.; Frías, M. D. Rodríguez; Ronga, F.; Roth, M.; Rothkaehl, H.; Roudil, G.; Rusinov, I.; Rybczyński, M.; Sabau, M. D.; Sáez Cano, G.; Sagawa, H.; Saito, A.; Sakaki, N.; Sakata, M.; Salazar, H.; Sánchez, S.; Santangelo, A.; Santiago Crúz, L.; Sanz Palomino, M.; Saprykin, O.; Sarazin, F.; Sato, H.; Sato, M.; Schanz, T.; Schieler, H.; Scotti, V.; Segreto, A.; Selmane, S.; Semikoz, D.; Serra, M.; Sharakin, S.; Shibata, T.; Shimizu, H. M.; Shinozaki, K.; Shirahama, T.; Siemieniec-Oziȩbło, G.; Silva López, H. H.; Sledd, J.; Słomińska, K.; Sobey, A.; Sugiyama, T.; Supanitsky, D.; Suzuki, M.; Szabelska, B.; Szabelski, J.; Tajima, F.; Tajima, N.; Tajima, T.; Takahashi, Y.; Takami, H.; Takeda, M.; Takizawa, Y.; Tenzer, C.; Tibolla, O.; Tkachev, L.; Tokuno, H.; Tomida, T.; Tone, N.; Toscano, S.; Trillaud, F.; Tsenov, R.; Tsunesada, Y.; Tsuno, K.; Tymieniecka, T.; Uchihori, Y.; Unger, M.; Vaduvescu, O.; Valdés-Galicia, J. 
F.; Vallania, P.; Valore, L.; Vankova, G.; Vigorito, C.; Villaseñor, L.; Ballmoos, P.; Wada, S.; Watanabe, J.; Watanabe, S.; Watts, J., Jr.; Weber, M.; Weiler, T. J.; Wibig, T.; Wiencke, L.; Wille, M.; Wilms, J.; Włodarczyk, Z.; Yamamoto, T.; Yamamoto, Y.; Yang, J.; Yano, H.; Yashin, I. V.; Yonetoku, D.; Yoshida, K.; Yoshida, S.; Young, R.; Zotov, M. Yu.; Zuccaro Marchi, A.; The JEM-EUSO Collaboration Show all (289)
The Extreme Universe Space Observatory on board the Japanese Experiment Module of the International Space Station, JEM-EUSO, is being designed to search from space for ultra-high energy cosmic rays. These are charged particles with energies from a few 10^19 eV to beyond 10^20 eV, at the very end of the known cosmic ray energy spectrum. JEM-EUSO will also search for extreme energy neutrinos, photons, and exotic particles, providing a unique opportunity to explore largely unknown phenomena in our Universe. The mission, principally based on a wide field of view (60 degrees) near-UV telescope with a diameter of ∼ 2.5 m, will monitor the earth's atmosphere at night, pioneering the observation from space of the ultraviolet tracks (290-430 nm) associated with giant extensive air showers produced by ultra-high energy primaries propagating in the earth's atmosphere. Observing from an orbital altitude of ∼ 400 km, the mission is expected to reach an instantaneous geometrical aperture of A_geo ≥ 2 × 10^5 km^2 sr with an estimated duty cycle of ∼ 20 %. Such a geometrical aperture allows unprecedented exposures, significantly larger than can be obtained with ground-based experiments. In this paper we briefly review the history of the space-based search for ultra-high energy cosmic rays. We then introduce the special issue of Experimental Astronomy devoted to the various aspects of such a challenging enterprise. We also summarise the activities of the on-going JEM-EUSO program.
JEM-EUSO: Meteor and nuclearite observations
Meteor and fireball observations are key to the derivation of both the inventory and physical characterization of small solar system bodies orbiting in the vicinity of the Earth. For several decades, observation of these phenomena has only been possible via ground-based instruments. The proposed JEM-EUSO mission has the potential to become the first operational space-based platform to share this capability. In comparison to the observation of extremely energetic cosmic ray events, which is the primary objective of JEM-EUSO, meteor phenomena are very slow, since their typical speeds are of the order of a few tens of km/sec (whereas cosmic rays travel at light speed). The observing strategy developed to detect meteors may also be applied to the detection of nuclearites, which have higher velocities, a wider range of possible trajectories, but move well below the speed of light and can therefore be considered as slow events for JEM-EUSO. The possible detection of nuclearites greatly enhances the scientific rationale behind the JEM-EUSO mission.
The infrared camera onboard JEM-EUSO
Experimental Astronomy (2015-11-01) 40: 61-89 , November 01, 2015
The Extreme Universe Space Observatory on the Japanese Experiment Module (JEM-EUSO) on board the International Space Station (ISS) is the first space-based mission worldwide in the field of Ultra High-Energy Cosmic Rays (UHECR). For UHECR experiments, the atmosphere is not only the showering calorimeter for the primary cosmic rays, it is an essential part of the readout system, as well. Moreover, the atmosphere must be calibrated and has to be considered as input for the analysis of the fluorescence signals. Therefore, the JEM-EUSO Space Observatory is implementing an Atmospheric Monitoring System (AMS) that will include an IR-Camera and a LIDAR. The AMS Infrared Camera is an infrared, wide FoV, imaging system designed to provide the cloud coverage along the JEM-EUSO track and the cloud top height to properly achieve the UHECR reconstruction in cloudy conditions. In this paper, an updated preliminary design status, the results from the calibration tests of the first prototype, the simulation of the instrument, and preliminary cloud top height retrieval algorithms are presented.
Science of atmospheric phenomena with JEM-EUSO
By Adams, J. H., Jr.; Ahmad, S.; Albert, J. -N.; Allard, D.; Anchordoqui, L.; Andreev, V.; Anzalone, A.; Arai, Y.; Asano, K.; Ave Pernas, M.; Baragatti, P.; Barrillon, P.; Batsch, T.; Bayer, J.; Bechini, R.; Belenguer, T.; Bellotti, R.; Belov, K.; Berlind, A. A.; Bertaina, M.; Biermann, P. L.; Biktemerova, S.; Blaksley, C.; Blanc, N.; Błȩcki, J.; Blin-Bondil, S.; Blümer, J.; Bobik, P.; Bogomilov, M.; Bonamente, M.; Briggs, M. S.; Briz, S.; Bruno, A.; Cafagna, F.; Campana, D.; Capdevielle, J. -N.; Caruso, R.; Casolino, M.; Cassardo, C.; Castellinic, G.; Catalano, C.; Catalano, G.; Cellino, A.; Chikawa, M.; Christl, M. J.; Cline, D.; Connaughton, V.; Conti, L.; Cordero, G.; Crawford, H. J.; Cremonini, R.; Csorna, S.; Dagoret-Campagne, S.; de Castro, A. J.; De Donato, C.; de la Taille, C.; De Santis, C.; del Peral, L.; Dell'Oro, A.; De Simone, N.; Di Martino, M.; Distratis, G.; Dulucq, F.; Dupieux, M.; Ebersoldt, A.; Ebisuzaki, T.; Engel, R.; Falk, S.; Fang, K.; Fenu, F.; Fernández-Gómez, I.; Ferrarese, S.; Finco, D.; Flamini, M.; Fornaro, C.; Franceschi, A.; Fujimoto, J.; Fukushima, M.; Galeotti, P.; Garipov, G.; Geary, J.; Gelmini, G.; Giraudo, G.; Gonchar, M.; González Alvarado, C.; Gorodetzky, P.; Guarino, F.; Guzmán, A.; Hachisu, Y.; Harlov, B.; Haungs, A.; Hernández Carretero, J.; Higashide, K.; Ikeda, D.; Ikeda, H.; Inoue, N.; Inoue, S.; Insolia, A.; Isgrò, F.; Itow, Y.; Joven, E.; Judd, E. G.; Jung, A.; Kajino, F.; Kajino, T.; Kaneko, I.; Karadzhov, Y.; Karczmarczyk, J.; Karus, M.; Katahira, K.; Kawai, K.; Kawasaki, Y.; Keilhauer, B.; Khrenov, B. A.; Kim, J. -S.; Kim, S. -W.; Kim, S. -W.; Kleifges, M.; Klimov, P. A.; Kolev, D.; Kreykenbohm, I.; Kudela, K.; Kurihara, Y.; Kusenko, A.; Kuznetsov, E.; Lacombe, M.; Lachaud, C.; Lee, J.; Licandro, J.; Lim, H.; López, F.; Maccarone, M. C.; Mannheim, K.; Maravilla, D.; Marcelli, L.; Marini, A.; Martinez, O.; Masciantonio, G.; Mase, K.; Matev, R.; Medina-Tanco, G.; Mernik, T.; Miyamoto, H.; Miyazaki, Y.; Mizumoto, Y.; Modestino, G.; Monaco, A.; Monnier-Ragaigne, D.; Morales de los Ríos, J. A.; Moretto, C.; Morozenko, V. S.; Mot, B.; Murakami, T.; Murakami, M. Nagano; Nagata, M.; Nagataki, S.; Nakamura, T.; Napolitano, T.; Naumov, D.; Nava, R.; Neronov, A.; Nomoto, K.; Nonaka, T.; Ogawa, T.; Ogio, S.; Ohmori, H.; Olinto, A. V.; Orleański, P.; Osteria, G.; Panasyuk, M. I.; Parizot, E.; Park, I. H.; Park, H. W.; Pastircak, B.; Patzak, T.; Paul, T.; Pennypacker, C.; Perez Cano, S.; Peter, T.; Picozza, P.; Pierog, T.; Piotrowski, L. W.; Piraino, S.; Plebaniak, Z.; Pollini, A.; Prat, P.; Prévôt, G.; Prieto, H.; Putis, M.; Reardon, P.; Reyes, M.; Ricci, M.; Rodríguez, I.; Rodríguez Frías, M. D.; Ronga, F.; Roth, M.; Rothkaehl, H.; Roudil, G.; Rusinov, I.; Rybczyński, M.; Sabau, M. D.; Sáez-Cano, G.; Sagawa, H.; Saito, A.; Sakaki, N.; Sakata, M.; Salazar, H.; Sánchez, S.; Santangelo, A.; Santiago Crúz, L.; Sanz Palomino, M.; Saprykin, O.; Sarazin, F.; Sato, H.; Sato, M.; Schanz, T.; Schieler, H.; Scotti, V.; Segreto, A.; Selmane, S.; Semikoz, D.; Serra, M.; Sharakin, S.; Shibata, T.; Shimizu, H. M.; Shinozaki, K.; Shirahama, T.; Siemieniec-Oziȩbło, G.; Silva López, H. 
H.; Sledd, J.; Słomińska, K.; Sobey, A.; Sugiyama, T.; Supanitsky, D.; Suzuki, M.; Szabelska, B.; Szabelski, J.; Tajima, F.; Tajima, N.; Tajima, T.; Takahashi, Y.; Takami, H.; Takeda, M.; Takizawa, Y.; Tenzer, C.; Tibolla, O.; Tkachev, L.; Tokuno, H.; Tomida, T.; Tone, N.; Toscano, S.; Trillaud, F.; Tsenov, R.; Tsunesada, Y.; Tsuno, K.; Tymieniecka, T.; Uchihori, Y.; Unger, M.; Vaduvescu, O.; Valdés-Galicia, J. F.; Vallania, P.; Valore, L.; Vankova, G.; Vigorito, C.; Villaseñor, L.; von Ballmoos, P.; Wada, S.; Watanabe, J.; Watanabe, S.; Watts, J., Jr; Weber, M.; Weiler, T. J.; Wibig, T.; Wiencke, L.; Wille, M.; Wilms, J.; Włodarczyk, Z.; Yamamoto, T.; Yamamoto, Y.; Yang, J.; Yano, H.; Yashin, I. V.; Yonetoku, D.; Yoshida, K.; Yoshida, S.; Young, R.; Zotov, M. Yu.; Zuccaro Marchi, A.; Słomiński, J.; The JEM-EUSO Collaboration Show all (290)
The main goal of the JEM-EUSO experiment is the study of Ultra High Energy Cosmic Rays (UHECR, 10^19–10^21 eV), but the method which will be used (detection of the secondary light emissions induced by cosmic rays in the atmosphere) also allows other luminous phenomena to be studied. The UHECRs will be detected through the measurement of the emission in the range between 290 and 430 nm, where part of the Transient Luminous Event (TLE) emission also appears. This work discusses the possibility of using the JEM-EUSO Telescope to obtain new scientific results on TLEs. The high time resolution of this instrument allows the evolution of TLEs to be observed with great precision right at the moment of their origin. The paper consists of four parts: a review of the present knowledge on TLEs, a presentation of the results of the simulations of TLE images in the JEM-EUSO telescope, results of the Russian experiment Tatiana-2, and a discussion of the possible progress achievable in this field with JEM-EUSO as well as possible cooperation with other space projects devoted to the study of TLEs, TARANIS and ASIM. In atmospheric physics, the study of TLEs became one of the main subjects of interest after their discovery in 1989. In the years 1992-1994, detection was performed from satellites, aircraft and the space shuttle, and more recently from the International Space Station. These events have short duration (milliseconds) and small scales (km to tens of km) and appear at altitudes of 50-100 km. Their nature is still not clear, and any new experimental data can be useful for a better understanding of these mysterious phenomena.
JEM-EUSO observational technique and exposure
Designed as the first mission to explore the ultra-high energy universe from space, JEM-EUSO observes the Earth's atmosphere at night to record the ultraviolet tracks generated by the extensive air showers. We present the expected geometrical aperture and annual exposure in the nadir and tilt modes for ultra-high energy cosmic rays observation as a function of the altitude of the International Space Station.
Multivariate approximation: An overview
Numerical Algorithms (2005-07-01) 39: 1-6 , July 01, 2005
By Apprato, Dominique; Gout, Christian; Rabut, Christophe; Traversoni, Leonardo Show all (4)
The JEM-EUSO observation in cloudy conditions
The JEM-EUSO (Extreme Universe Space Observatory on-board the Japanese Experiment Module) mission will conduct extensive air shower (EAS) observations on the International Space Station (ISS). Following the ISS orbit, JEM-EUSO will experience continuous changes in the atmospheric conditions, including cloud presence. The influence of clouds on space-based observation is, therefore, an important topic to investigate from both EAS property and cloud climatology points of view. In the present work, the impact of clouds on the apparent profile of EAS is demonstrated through simulation studies, taking into account the JEM-EUSO instrument and the properties of the clouds. These results show a dependence on the cloud-top altitude and the optical depth of the cloud. The analyses of satellite measurements of the cloud distribution indicate that more than 60 % of the cases allow for conventional EAS observation, with an additional ∼20 % observable with reduced quality. The combination of the relevant factors results in an effective trigger aperture for EAS observation of ∼72 % of that in the clear-atmosphere condition.
The atmospheric monitoring system of the JEM-EUSO instrument
The JEM-EUSO telescope will detect Ultra-High Energy Cosmic Rays (UHECRs) from space by detecting the UV fluorescence light produced by Extensive Air Showers (EAS) induced by the interaction of the cosmic rays with the earth's atmosphere. The capability to reconstruct the properties of the primary cosmic ray depends on the accurate measurement of the atmospheric conditions in the region of EAS development. The Atmospheric Monitoring (AM) system of JEM-EUSO will host a LIDAR, operating in the UV band, and an infrared camera to monitor the cloud cover in the JEM-EUSO field of view, in order to be sensitive to clouds with an optical depth τ ≥ 0.15 and to measure the cloud-top altitude with an accuracy of 500 m and an altitude resolution of 500 m.
Space experiment TUS on board the Lomonosov satellite as pathfinder of JEM-EUSO
Space-based detectors for the study of extreme energy cosmic rays (EECR) are being prepared as a promising new method for detecting highest energy cosmic rays. A pioneering space device – the "tracking ultraviolet set-up" (TUS) – is in the last stage of its construction and testing. The TUS detector will collect preliminary data on EECR in the conditions of a space environment, which will be extremely useful for planning the major JEM-EUSO detector operation.
The EUSO-Balloon pathfinder
EUSO-Balloon is a pathfinder for JEM-EUSO, the Extreme Universe Space Observatory which is to be hosted on-board the International Space Station. As JEM-EUSO is designed to observe Ultra-High Energy Cosmic Ray (UHECR)-induced Extensive Air Showers (EAS) by detecting their ultraviolet light tracks "from above", EUSO-Balloon is a nadir-pointing UV telescope too. With its Fresnel optics and Photo-Detector Module, the instrument monitors a 50 km^2 ground surface area in a wavelength band of 290–430 nm, collecting series of images at a rate of 400,000 frames/sec. The objectives of the balloon demonstrator are threefold: a) perform a full end-to-end test of a JEM-EUSO prototype consisting of all the main subsystems of the space experiment; b) measure the effective terrestrial UV background, with a spatial and temporal resolution relevant for JEM-EUSO; and c) detect tracks of ultraviolet light from near space for the first time. The latter is a milestone in the development of UHECR science, paving the way for any future space-based UHECR observatory. On August 25, 2014, EUSO-Balloon was launched from Timmins Stratospheric Balloon Base (Ontario, Canada) by the balloon division of the French Space Agency CNES. From a float altitude of 38 km, the instrument operated during the entire astronomical night, observing UV-light from a variety of ground-covers and from hundreds of simulated EASs, produced by flashers and a laser during a two-hour helicopter under-flight.
The JEM-EUSO instrument
In this paper we describe the main characteristics of the JEM-EUSO instrument. The Extreme Universe Space Observatory on the Japanese Experiment Module (JEM-EUSO) of the International Space Station (ISS) will observe Ultra High-Energy Cosmic Rays (UHECR) from space. It will detect UV-light of Extensive Air Showers (EAS) produced by UHECRs traversing the Earth's atmosphere. For each event, the detector will determine the energy, arrival direction and the type of the primary particle. The advantage of a space-borne detector resides in the large field of view, using a target volume of about 10^12 tons of atmosphere, far greater than what is achievable from ground. Another advantage is a nearly uniform sampling of the whole celestial sphere. The corresponding increase in statistics will help to clarify the origin and sources of UHECRs and characterize the environment traversed during their production and propagation. JEM-EUSO is a 1.1 ton refractor telescope using an optics of 2.5 m diameter Fresnel lenses to focus the UV-light from EAS on a focal surface composed of about 5,000 multi-anode photomultipliers, for a total of ≃3⋅10^5 channels. A multi-layer parallel architecture handles front-end acquisition, selecting and storing valid triggers. Each processing level filters the events with increasingly complex algorithms using FPGAs and DSPs to reject spurious events and reduce the data rate to a value compatible with downlink constraints.
User Interface Plasticity for Groupware
Digital Information and Communication Technology and Its Applications (2011-01-01) 166: 380-394 , January 01, 2011
By Mendoza, Sonia; Decouchant, Dominique; Sánchez, Gabriela; Rodríguez, José; Mateos Papis, Alfredo Piero Show all (5)
Plastic user interfaces are intentionally developed to automatically adapt themselves to changes in the user's working context. Although some Web single-user interactive systems already integrate some plastic capabilities, this research topic remains quasi-unexplored in the domain of Computer Supported Cooperative Work. This paper is centered on prototyping a plastic collaborative whiteboard, which adapts itself: 1) to the platform, as being able to be launched from heterogeneous computer devices and 2) to each collaborator, when he is detected working from several devices. In this last case, if the collaborator agrees, the whiteboard can split its user interface among his devices in order to facilitate user-system interaction without affecting the other collaborators present in the working session. The distributed interface components work as if they were co-located within a unique device. At any time, the whiteboard maintains group awareness among the involved collaborators.
Parallel Approaches for Multiobjective Optimization
Multiobjective Optimization (2008-01-01) 5252: 349-372 , January 01, 2008
By Talbi, El-Ghazali; Mostaghim, Sanaz; Okabe, Tatsuya; Ishibuchi, Hisao; Rudolph, Günter; Coello Coello, Carlos A. Show all (6)
This chapter presents a general overview of parallel approaches for multiobjective optimization. For this purpose, we propose a taxonomy for parallel metaheuristics and exact methods. This chapter covers the design aspect of the algorithms as well as the implementation aspects on different parallel and distributed architectures.
Dealing with explicit preferences and uncertainty in answer set programming
Annals of Mathematics and Artificial Intelligence (2012-07-01) 65: 159-198 , July 01, 2012
By Confalonieri, Roberto; Nieves, Juan Carlos; Osorio, Mauricio; Vázquez-Salceda, Javier Show all (4)
In this paper, we show how the formalism of Logic Programs with Ordered Disjunction (LPODs) and Possibilistic Answer Set Programming (PASP) can be merged into the single framework of Logic Programs with Possibilistic Ordered Disjunction (LPPODs). The LPPODs framework embeds in a unified way several aspects of common-sense reasoning, nonmonotonicity, preferences, and uncertainty, where each part is underpinned by a well established formalism. On one hand, from LPODs it inherits the distinctive feature of expressing context-dependent qualitative preferences among different alternatives (modeled as the atoms of a logic program). On the other hand, PASP allows qualitative certainty statements about the rules themselves (modeled as necessity values according to possibilistic logic) to be captured. In this way, the LPPODs framework supports a reasoning which is nonmonotonic, preference- and uncertainty-aware. The LPPODs syntax allows for the specification of (1) preferences among the exceptions to default rules, and (2) necessity values about the certainty of program rules. As a result, preferences and uncertainty can be used to select the preferred uncertain default rules of an LPPOD and, consequently, to order its possibilistic answer sets. Furthermore, we describe the implementation of an ASP-based solver able to compute the LPPODs semantics.
Visual Planning for Autonomous Mobile Robot Navigation
MICAI 2005: Advances in Artificial Intelligence (2005-01-01) 3789: 1001-1011 , January 01, 2005
By Marin-Hernandez, Antonio; Devy, Michel; Ayala-Ramirez, Victor
For autonomous mobile robots following a planned path, self-localization is a very important task. Cumulative errors derived from the different noisy sensors make it absolutely necessary. Absolute robot localization is commonly performed by measuring the relative distance from the robot to previously learnt landmarks in the environment. Landmarks could be interest points, colored objects, or rectangular regions such as posters or emergency signs, which are very useful and non-intrusive beacons in human environments. This paper presents an active localization method: a visual planning function selects, from a collision-free path and a set of planar landmarks, a subset of visible landmarks and the best combination of camera parameters (pan, tilt and zoom) for positions sampled along the path. A visibility measurement and some utility measurements were defined in order to select, for each position, the camera modality and the subset of landmarks that maximize these local criteria. Finally, a dynamic programming method is proposed in order to minimize saccadic movements over the whole trajectory.
Online Scheduling of Multiprocessor Jobs with Idle Regulation
Parallel Processing and Applied Mathematics (2004-01-01) 3019: 131-144 , January 01, 2004
By Tchernykh, Andrei; Trystram, Denis
In this paper, we focus on on-line scheduling of multiprocessor jobs with emphasis on the regulation of idle periods in the frame of general list policies. We consider a new family of scheduling strategies based on two phases which successively combine sequential and parallel executions of the jobs. These strategies are part of a more generic scheme introduced in [6]. The main result is to demonstrate that it is possible to estimate the amount of resources that should remain idle for a better regulation of the load and to obtain approximation bounds.
The Reduced Automata Technique for Graph Exploration Space Lower Bounds
Theoretical Computer Science (2006-01-01) 3895: 1-26 , January 01, 2006
By Fraigniaud, Pierre; Ilcinkas, David; Rajsbaum, Sergio; Tixeuil, Sébastien Show all (4)
We consider the task of exploring graphs with anonymous nodes by a team of non-cooperative robots, modeled as finite automata. For exploration to be completed, each edge of the graph has to be traversed by at least one robot. In this paper, the robots have no a priori knowledge of the topology of the graph, nor of its size, and we are interested in the amount of memory the robots need to accomplish exploration. We introduce the so-called reduced automata technique, and we show how to use this technique for deriving several space lower bounds for exploration. Informally speaking, the reduced automata technique consists in reducing a robot to a simpler form that preserves its "core" behavior on some graphs. Using this technique, we first show that any set of q ≥ 1 non-cooperative robots requires $\Omega(\log(\frac{n}{q}))$ memory bits to explore all n-node graphs. The proof implies that, for any set of q K-state robots, there exists a graph of size O(qK) that no robot of this set can explore, which improves the O(K^{O(q)}) bound by Rollik (1980). Our main result is an application of this latter result, concerning terminating graph exploration with one robot, i.e., in which the robot is requested to stop after completing exploration. For this task, the robot is provided with a pebble, which it can use to mark nodes (without such a marker, even terminating exploration of cycles cannot be achieved). We prove that terminating exploration requires Ω(log n) bits of memory for a robot achieving this task in all n-node graphs.
3D Parallel Elastodynamic Modeling of Large Subduction Earthquakes
Recent Advances in Parallel Virtual Machine and Message Passing Interface (2007-01-01) 4757: 373-380 , January 01, 2007
By Cabrera, Eduardo; Chavez, Mario; Madariaga, Raúl; Perea, Narciso; Frisenda, Marco Show all (5)
The 3D finite difference modeling of the wave propagation of M>8 earthquakes in subduction zones in a realistic-size earth is a very computationally intensive task. We use a parallel finite difference code that uses second-order operators in time and fourth-order differences in space on a staggered grid. We develop an efficient parallel program using the message passing interface (MPI) and a kinematic earthquake rupture process. We achieve an efficiency of 94% with 128 processors (and 85% extrapolating to 1,024) on a dual-core platform. Satisfactory results for a large subduction earthquake that occurred in Mexico in 1985 are given.
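As a purely illustrative aside, the scheme described above (staggered grid, second-order operators in time, fourth-order differences in space) can be sketched in one dimension as below. This serial toy omits the MPI decomposition, the 3D elastodynamic equations and the kinematic rupture source, and every parameter value is invented; the 9/8 and -1/24 weights are the standard fourth-order staggered-grid coefficients.

```python
import numpy as np

# Minimal 1D velocity-stress update on a staggered grid:
# 2nd-order leapfrog in time, 4th-order differences in space.
nx, nt = 400, 800
dx, dt = 10.0, 1e-3          # grid step (m), time step (s) -- illustrative values
rho, vp = 2500.0, 3000.0     # density and wave speed for a 1D acoustic analogue
mu = rho * vp**2             # modulus
c1, c2 = 9.0 / 8.0, -1.0 / 24.0

v = np.zeros(nx)             # particle velocity, defined at grid points i
s = np.zeros(nx)             # stress, defined at half points i + 1/2

for it in range(nt):
    # update stress from the 4th-order spatial derivative of velocity
    s[2:-2] += dt * mu / dx * (c1 * (v[3:-1] - v[2:-2]) + c2 * (v[4:] - v[1:-3]))
    # update velocity from the 4th-order spatial derivative of stress
    v[2:-2] += dt / (rho * dx) * (c1 * (s[2:-2] - s[1:-3]) + c2 * (s[3:-1] - s[:-4]))
    # simple point source injected into the stress field
    s[nx // 2] += np.exp(-((it * dt - 0.05) / 0.01) ** 2)
```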
A Multi-Objective Artificial Immune System Based on Hypervolume
Artificial Immune Systems (2012-01-01) 7597: 14-27 , January 01, 2012
By Pierrard, Thomas; Coello Coello, Carlos A.
This paper presents a new artificial immune system algorithm for solving multi-objective optimization problems, based on the clonal selection principle and the hypervolume contribution. The main aim of this work is to investigate the performance of this class of algorithm with respect to approaches which are representative of the state-of-the-art in multi-objective optimization using metaheuristics. The results obtained by our proposed approach, called multi-objective artificial immune system based on hypervolume (MOAIS-HV) are compared with respect to those of the NSGA-II. Our preliminary results indicate that our proposed approach is very competitive, and can be a viable choice for solving multi-objective optimization problems.
The Committee Decision Problem
By Gafni, Eli; Rajsbaum, Sergio; Raynal, Michel; Travers, Corentin Show all (4)
We introduce the (b,n)-Committee Decision Problem (CD) – a generalization of the consensus problem. While set agreement generalizes consensus in terms of the number of decisions allowed, the CD problem generalizes consensus in the sense of considering many instances of consensus and requiring a processor to decide in at least one instance. In more detail, in the CD problem each one of a set of n processes has a (possibly distinct) value to propose to each one of a set of b consensus problems, which we call committees. Yet a process has to decide a value for at least one of these committees, such that all processes deciding for the same committee decide the same value. We study the CD problem in the context of a wait-free distributed system and analyze it using a combination of distributed algorithmic and topological techniques, introducing a novel reduction technique.
We use the reduction technique to obtain the following results. We show that the (2,3)-CD problem is equivalent to the musical benches problem introduced by Gafni and Rajsbaum in [10], and both are equivalent to (2,3)-set agreement, closing an open question left there. Thus, all three problems are wait-free unsolvable in a read/write shared memory system, and they are all solvable if the system is enriched with objects capable of solving (2,3)-set agreement. While the previous proof of the impossibility of musical benches was based on the Borsuk-Ulam (BU) Theorem, it now relies on Sperner's Lemma, opening intriguing questions about the relation between BU and distributed computing tasks.
The Infection Algorithm: An Artificial Epidemic Approach for Dense Stereo Matching
Parallel Problem Solving from Nature - PPSN VIII (2004-01-01) 3242: 622-632 , January 01, 2004
By Olague, Gustavo; Vega, Francisco Fernández; Pérez, Cynthia B.; Lutton, Evelyne Show all (4)
We present a new bio-inspired approach applied to a problem of stereo image matching. This approach is based on an artificial epidemic process, which we call "the infection algorithm." The problem at hand is a basic one in computer vision for 3D scene reconstruction. It has many complex aspects and is known as an extremely difficult one. The aim is to match the contents of two images in order to obtain 3D information which allows the generation of simulated projections from a viewpoint that is different from those of the initial photographs. This process is known as view synthesis. The algorithm we propose exploits the image contents in order to produce only the necessary 3D depth information, while saving computational time. It is based on a set of distributed rules, which propagate like an artificial epidemic over the images. Experiments on a pair of real images are presented, and realistic reprojected images have been generated.
Speeding scalar multiplication over binary elliptic curves using the new carry-less multiplication instruction
Journal of Cryptographic Engineering (2011-09-25) 1: 187-199 , September 25, 2011
The availability of a new carry-less multiplication instruction in the latest Intel desktop processors significantly accelerates multiplication in binary fields and hence presents the opportunity for reevaluating algorithms for binary field arithmetic and scalar multiplication over elliptic curves. We describe how to best employ this instruction in field multiplication and the effect on performance of doubling and halving operations. Alternate strategies for implementing inversion and half-trace are examined to restore most of their competitiveness relative to the new multiplier. These improvements in field arithmetic are complemented by a study on serial and parallel approaches for Koblitz and random curves, where parallelization strategies are implemented and compared. The contributions are illustrated with experimental results improving the state-of-the-art performance of halving and doubling-based scalar multiplication on NIST curves at the 112- and 192-bit security levels and a new speed record for side-channel-resistant scalar multiplication in a random curve at the 128-bit security level. The algorithms presented in this work were implemented on Westmere and Sandy Bridge processors, the latest generation Intel microarchitectures.
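To make the operation concrete, the sketch below models in pure Python what a 64×64-bit carry-less multiplication computes (polynomial multiplication over GF(2)), followed by a toy reduction into F_{2^8}. The hardware instruction performs the first step in a single operation; nothing here reflects the optimized NIST-curve field arithmetic or the processor-specific code of the paper.

```python
def clmul64(a: int, b: int) -> int:
    """Carry-less (GF(2) polynomial) product of two 64-bit operands.

    Models a 64x64 -> 128-bit carry-less multiply: XOR replaces addition,
    so no carries propagate between bit positions.
    """
    r = 0
    for i in range(64):
        if (b >> i) & 1:
            r ^= a << i
    return r  # up to 127 bits

# In F_{2^m}, a field multiplication is a carry-less product followed by
# reduction modulo an irreducible polynomial. As a toy example, reduce
# modulo x^8 + x^4 + x^3 + x + 1 (0x11B, the AES polynomial) for F_{2^8}.
def gf2m_mul(a: int, b: int, poly: int = 0x11B, m: int = 8) -> int:
    r = clmul64(a, b)
    for i in range(2 * m - 2, m - 1, -1):   # clear high bits from the top down
        if (r >> i) & 1:
            r ^= poly << (i - m)
    return r

assert gf2m_mul(0x57, 0x83) == 0xC1   # known F_{2^8} multiplication test vector
```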
Shared Resource Availability within Ubiquitous Collaboration Environments
By García, Kimberly; Mendoza, Sonia; Olague, Gustavo; Decouchant, Dominique; Rodríguez, José Show all (5)
Most research works in ubiquitous computing remain in the domain of mono-user systems, which make assumptions such as: "nobody interferes, observes and hurries up". In addition, these systems ignore third-party contributions and do not encourage consensus achievement. This paper proposes a system for managing the availability of distributed resources in ubiquitous cooperative environments. In particular, the proposed system allows collaborators to publish resources that are intended to be shared with other collaborators and to subscribe to allowed resources depending on their interest in accessing or using them. Resource availability is determined according to several parameters: technical characteristics, roles, usage restrictions, and dependencies with other resources in terms of ownership, presence, location, and even availability. To permit or deny access to context-aware information, we develop a face recognition system, which is able to dynamically identify collaborators and to automatically locate them within the cooperative environment.
Specifying Concurrent Problems: Beyond Linearizability and up to Tasks
Distributed Computing (2015-01-01): 9363 , January 01, 2015
By Castañeda, Armando; Rajsbaum, Sergio; Raynal, Michel
Tasks and objects are two predominant ways of specifying distributed problems. A task specifies for each set of processes (which may run concurrently) the valid outputs of the processes. An object specifies the outputs the object may produce when it is accessed sequentially. Each one requires its own implementation notion, to tell when an execution satisfies the specification. For objects linearizability is commonly used, while for tasks implementation notions are less explored.
Sequential specifications are very convenient; especially important is the locality property of linearizability, which states that linearizable objects compose for free into a linearizable object. However, most well-known tasks have no sequential specification. Also, tasks have no clear locality property.
The paper introduces the notion of an interval-sequential object. The corresponding implementation notion of interval-linearizability generalizes linearizability. Interval-linearizability makes it possible to specify any task. However, there are sequential one-shot objects that cannot be expressed as tasks, under the simplest interpretation of a task. The paper also shows that a natural extension of the notion of a task is expressive enough to specify any interval-sequential object.
Benchmark Study of a 3d Parallel Code for the Propagation of Large Subduction Earthquakes
By Chavez, Mario; Cabrera, Eduardo; Madariaga, Raúl; Perea, Narciso; Moulinec, Charles; Emerson, David; Ashworth, Mike; Salazar, Alejandro Show all (8)
Benchmark studies were carried out on a recently optimized parallel 3D seismic wave propagation code that uses finite differences on a staggered grid with 2nd-order operators in time and 4th-order in space. Three dual-core supercomputer platforms were used to run the parallel program using MPI. Efficiencies of 0.91 and 0.48 with 1024 cores were obtained on HECToR (UK) and KanBalam (Mexico), and 0.66 with 8192 cores on HECToR. The 3D velocity field pattern from a simulation of the 1985 Mexico earthquake (which caused the loss of up to 30,000 people and about 7 billion US dollars), which is in reasonable agreement with the available observations, shows coherent, well-developed surface waves propagating towards Mexico City.
Formal verification of secure group communication protocols modelled in UML
Innovations in Systems and Software Engineering (2010-03-01) 6: 125-133 , March 01, 2010
By Saqui-Sannes, P.; Villemur, T.; Fontan, B.; Mota, S.; Bouassida, M. S.; Chridi, N.; Chrisment, I.; Vigneron, L. Show all (8)
The paper discusses an experience in using Unified Modelling Language and two complementary verification tools in the framework of SAFECAST, a project on secured group communication systems design. AVISPA enabled detecting and fixing security flaws. The TURTLE toolkit enabled saving development time by eliminating design solutions with inappropriate temporal parameters.
Quadratic Optimization Fine Tuning for the Learning Phase of SVM
Advanced Distributed Systems (2005-01-01) 3563: 347-357 , January 01, 2005
By González-Mendoza, Miguel; Hernández-Gress, Neil; Titli, André
This paper presents a study of the quadratic optimization problem (QP) underlying the learning process of Support Vector Machines (SVMs). Starting from the Karush-Kuhn-Tucker (KKT) optimality conditions, we present the implementation strategy for the SVM QP following two classical approaches: i) active-set methods, in both the primal and the dual space, and ii) interior-point methods. We also present the general extension for treating large-scale applications, which consists of a decomposition of the QP problem into smaller ones. In the same manner, we discuss some considerations to take into account when starting the general learning process. We compare the performances of the optimization strategies using some well-known benchmark databases.
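For orientation, the dual QP in question can be written down and solved on a toy problem with a general-purpose constrained solver, as in the sketch below. This sidesteps the active-set, interior-point and decomposition strategies the paper compares; the data, the constant C and all names are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 2-class data (linearly separable); labels in {-1, +1}.
X = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0], [6.0, 5.0], [7.0, 8.0], [8.0, 8.0]])
y = np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])
C = 10.0                                    # box constraint 0 <= alpha_i <= C
K = (y[:, None] * X) @ (y[:, None] * X).T   # y_i y_j <x_i, x_j> (linear kernel)

def neg_dual(alpha):
    # Negated dual objective: maximize sum(alpha) - 0.5 * alpha' K alpha
    return 0.5 * alpha @ K @ alpha - alpha.sum()

cons = {"type": "eq", "fun": lambda a: a @ y}       # equality: sum_i alpha_i y_i = 0
bounds = [(0.0, C)] * len(y)
res = minimize(neg_dual, np.zeros(len(y)), bounds=bounds,
               constraints=cons, method="SLSQP")

alpha = res.x
w = ((alpha * y)[:, None] * X).sum(axis=0)              # primal weights from KKT
sv = np.argmax((alpha > 1e-6) & (alpha < C - 1e-6))     # an unbounded support vector
b = y[sv] - w @ X[sv]
print("multipliers:", np.round(alpha, 3), "w =", w, "b =", b)
```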
Reliable Shared Memory Abstraction on Top of Asynchronous Byzantine Message-Passing Systems
By Imbs, Damien; Rajsbaum, Sergio; Raynal, Michel; Stainer, Julien Show all (4)
This paper is on the construction and the use of a shared memory abstraction on top of an asynchronous message-passing system in which up to t processes may commit Byzantine failures. This abstraction consists of arrays of n single-writer/multi-reader atomic registers, where n is the number of processes. Differently from usual atomic registers, which record a single value, each of these atomic registers records the whole history of values written to it. A distributed algorithm building such a shared memory abstraction is first presented. This algorithm assumes t < n/3, which is shown to be a necessary and sufficient condition for such a construction. Hence, the algorithm is resilient-optimal. Then the paper presents distributed algorithms built on top of this shared memory abstraction, which cope with up to t Byzantine processes. The simplicity of these algorithms constitutes a strong motivation for such a shared memory abstraction in the presence of Byzantine processes.
For a lot of problems, algorithms are more difficult to design and prove correct in a message-passing system than in a shared memory system. Using a protocol stacking methodology, the aim of the proposed abstraction is to allow an easier design (and proof) of distributed algorithms, when the underlying system is an asynchronous message-passing system prone to Byzantine failures.
Apr 11 Gender Pay Gap
Jonno Bourne
The UK government made all companies with more than 250 employees publish data on their gender pay gap. The average pay gap across the country is 9.1%, which is down from last year's gap of 10.5%. This has resulted in a flood of data and generated a lot of coverage in the press. These articles from the Sun (pay gap 22%), the Economist (29.5%), the FT (pay gap 19.4%) and the Independent (which dodged the requirement as it uses lots of freelancers, so technically has fewer than 250 employees) give an idea of the coverage.
"The Average pay gap in the UK is 9.1% which is down from last years gap of 10.5%"
As the ALL CAPS comments at the bottom of many of these articles point out, the gender pay gap does not compare like for like. The gender pay gap is not the same as equal pay, which means the same money for the same role. What it does show is whether men are, on average, paid more than women. This video by the Guardian (pay gap 12.1%) gives a good explanation.
"It's important to remember that the gender pay gap is not the same as equal pay"
In this article, we will be looking at the data from a sector perspective. A sector is a broad class of businesses like Agriculture, Manufacturing, or Education. We will also be focusing more on where men and women sit in the pay hierarchy of a company.
What is the overall picture?
The overall picture isn't great for pay parity. A total of 78% of companies have a higher median salary for men than for women, 8% pay the same, and 14% pay women more. All industry sectors pay men more than women.
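These headline shares are straightforward to recompute from the published company-level file. The sketch below assumes a local CSV and the gov.uk column name DiffMedianHourlyPercent; treat the filename and column names as assumptions to check against your copy of the data.

```python
import pandas as pd

# Hypothetical local copy of the UK gender pay gap reporting data.
df = pd.read_csv("UK_gender_pay_gap_2017-18.csv")

gap = df["DiffMedianHourlyPercent"]        # positive values = men paid more
share_men_more = (gap > 0).mean() * 100
share_equal = (gap == 0).mean() * 100
share_women_more = (gap < 0).mean() * 100
print(f"{share_men_more:.0f}% pay men more, {share_equal:.0f}% pay the same, "
      f"{share_women_more:.0f}% pay women more")
```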
"A major reason for this gap is that men occupy a disproportionate amount of the higher positions in companies. "
A major reason for this gap is that men occupy a disproportionate share of the higher positions in companies. We know this because we can see the gender ratios in each earning quartile. These quartiles represent whether an employee is in the bottom 25% of earners, the lower middle, the upper middle or the top 25% of earners within a company. We don't know what the company pays, but we do know where men and women sit in the quartile hierarchy. The bottom quartile is typically entry-level roles or unskilled labour; the top quartile is senior management and the C-suite executives.
Side effects of the Quartile representation
Using the quartiles can result in some strange looking results. As the widely seen tweet below shows.
Ok hello I am baffled.
Magazine publisher Condé Nast — Vogue, GQ, Vanity Fair, Glamour — quietly uploaded its gender pay data yesterday.
The mean gap is 36.9% (that's now the worst in UK media), median 23.3%.
Yet there are more women than men at EVERY SINGLE PAY LEVEL. pic.twitter.com/wL39KxR0HU
— Mark Di Stefano 🤙🏻 (@MarkDiStef) April 5, 2018
The question in the tweet is entirely reasonable. As has been pointed out, this is a result of the unequal numbers of men and women in the company (it is also known as Simpson's Paradox).
To understand how this happens, look at the diagram below showing two balances. The balance on the right has an equal number of weights at either end, meaning its centre of mass is in the middle. The balance on the left has the weights placed unequally, meaning the centre of mass has shifted. The fact that the balance on the left has fewer weights than the balance on the right does not affect where the centre of mass is.
The positioning of staff works the same way as the weights. In Condé Nast's case, although there are lots of women at all levels of the organization, the men are disproportionately concentrated at the top, meaning they earn on average more than the women.
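A tiny worked example makes the effect concrete. The headcounts below are invented, but they reproduce the pattern in the tweet: women outnumber men in every quartile, yet the male "centre of mass" sits higher.

```python
# Invented headcounts reproducing the Condé Nast-style pattern.
quartiles = [1, 2, 3, 4]          # 1 = bottom 25% of earners, 4 = top 25%
women = [400, 300, 200, 150]      # more women than men in every quartile
men   = [100, 100, 150, 140]      # fewer men overall, but concentrated at the top

women_centre = sum(q * n for q, n in zip(quartiles, women)) / sum(women)
men_centre   = sum(q * n for q, n in zip(quartiles, men))   / sum(men)
print(f"women: {women_centre:.2f}  men: {men_centre:.2f}")
# women: 2.10  men: 2.67 -- men sit higher up despite being a minority everywhere
```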
The Male and Female Quartile center
If we find the centre of mass by gender across all companies, we can find the average centre of mass for the country. The average for men is 2.64, and the average for women is 2.34 (This difference is statistically significant). As some companies have 300 employees and others 30,000 we weighted the results by size of company. The figure below shows a histogram of how men and women are distributed across all companies. The figure shows the different peaks for male and female centre of mass.
In fact, the male centre of mass has a 63% correlation with the median hourly pay difference.
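A sketch of the centre-of-mass calculation is given below. It assumes the gov.uk quartile columns (FemaleLowerQuartile ... MaleTopQuartile, each giving the percentage of that quartile made up by the group), treats the four quartiles as equal-sized, and uses a hypothetical EmployeeCount column for the size weighting (the published file only reports banded employer sizes), so adapt the names to your copy of the data.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("UK_gender_pay_gap_2017-18.csv")   # hypothetical local filename
female_cols = ["FemaleLowerQuartile", "FemaleLowerMiddleQuartile",
               "FemaleUpperMiddleQuartile", "FemaleTopQuartile"]
male_cols = ["MaleLowerQuartile", "MaleLowerMiddleQuartile",
             "MaleUpperMiddleQuartile", "MaleTopQuartile"]
levels = np.array([1, 2, 3, 4])                      # quartile levels, bottom to top

def centre_of_mass(fracs: pd.DataFrame) -> np.ndarray:
    """Average quartile level occupied, using the four quartile shares as weights."""
    a = fracs.to_numpy()
    return (a * levels).sum(axis=1) / a.sum(axis=1)

df["female_centre"] = centre_of_mass(df[female_cols])
df["male_centre"] = centre_of_mass(df[male_cols])

w = df["EmployeeCount"]                              # hypothetical size weights
print("male centre:  ", np.average(df["male_centre"], weights=w))
print("female centre:", np.average(df["female_centre"], weights=w))
```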
Underrepresentation
Quartile          Mean Male Fract   Median Male Fract
1 Lower                44.18              41.20
2 Lower Middle         47.09              43.80
3 Upper Middle         51.35              49.60
4 Top                  58.64              60.00
At some point, everyone needs to enter the workforce, and eventually, everyone retires. This means that there is a constant flow of workers moving into and out of the labour market.
Given that the workforce is approximately gender balanced, it's intuitive to assume that all quartiles would be more or less balanced, but they are not. The fraction of males making up each quartile increases with the quartile, as shown in the table above.
"Not all companies should have a 50-50 gender split. However, if the second highest quartile is 56% female it would be a surprise if the top quartile is only 5% female"
We will use the relative representation equation shown below (a detailed derivation will be in the tech version) to understand the changes in gender representation between quartiles. In the relative representation equation, x_1 and x_2 represent the fraction of females (or males) in two adjacent quartiles. When the fraction in both quartiles is the same, the equation returns 0, which means the two quartiles have the expected relative representation. This doesn't mean that all companies should have a 50-50 gender split. However, if, say, the second-highest quartile is 56% female, it would be a surprise if the top quartile is only 5% female (here's looking at you, TUI). The equation finds these changes in relative representation.
$$R=\frac{x_2(1-x_1)}{x_1(1-x_2)}-1$$
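In code, the relative representation score is a one-liner; the second call below reproduces the TUI-style drop from a 56% female quartile to a 5% female top quartile.

```python
def relative_representation(x1: float, x2: float) -> float:
    """R = x2(1 - x1) / (x1(1 - x2)) - 1 for the group fractions x1, x2 of two
    adjacent quartiles; 0 means the expected representation is preserved, and
    negative values mean the higher quartile under-represents the group."""
    return x2 * (1 - x1) / (x1 * (1 - x2)) - 1

print(relative_representation(0.50, 0.50))   #  0.0  -> expected representation
print(relative_representation(0.56, 0.05))   # about -0.96 -> a TUI-style drop-off
```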
"In total 70% of companies have an under representation of women in the top quartile compared to the next higest quartile."
What we find is that 61% of companies have an under-representation of women in the lower middle quartile; this increases to 65% of companies in the upper middle quartile. In the top pay quartile, 70% of companies have an under-representation of women. It should be noted that this under-representation is only relative to the adjacent quartile, not to the total fraction of women that make up the company. If it were measured that way, the value would be much higher.
Finding the company-average female representation score shows that UK companies have a median under-representation of women of 12%. The figure below shows that this distribution is right-skewed, meaning that the bulk of companies have an under-representation but some companies have a large over-representation.
The table below sums up the different sectors covered by the analysis.
Sector                          Relative Rep   Male Center   Female Center   Diff Median Hourly %   % Female   % of total work force
activities of households        1.1            2.56          2.44            0.6                    46.8       0
Health and Social work          1.1            2.54          2.49            0.1                    79         7
International Bodies            1.18           2.54          2.47            0                      56         0
Food and Hospitality            1.23           2.61          2.39            3                      50.4       7
Arts and entertainment          1.26           2.6           2.4             2.7                    50.4       2
Government                      1.31           2.59          2.3             7.8                    40.3       1
Water Utilities                 1.34           2.55          2.33            7.5                    20.7       1
administration and support      1.34           2.61          2.37            5.1                    42.5       15
Wholesale and retail            1.34           2.61          2.41            4.4                    53.1       19
transportation and storage      1.35           2.56          2.35            8.1                    20.8       6
information and communication   1.36           2.6           2.26            16.2                   31.1       4
other service activities        1.41           2.63          2.33            11.1                   38.8       2
Science and Tech                1.48           2.64          2.3             14                     41.2       9
education                       1.49           2.77          2.39            23                     73.6       8
manufacturing                   1.5            2.58          2.25            10                     22.6       10
Electric utilities              1.53           2.58          2.29            17.9                   38.1       1
real estate                     1.61           2.73          2.27            16                     53.6       1
finance and insurance           1.68           2.74          2.25            24.4                   49.4       4
construction                    1.97           2.61          1.96            23.8                   17.8       3
When we take the average difference in hourly wage and the average representation by sector, we see the figure below. This figure shows a linear relationship between Promotion Bias and the hourly wage difference. This is not at all surprising, and shows again that the best way of having a small gender pay gap is having a proportional representation of genders at all levels of the company.
But we will never get absolute balance in all companies...
"Unless you believe that women are inherently less able than men, the gender pay gap indicates that in many cases the best man for the job is in fact a woman. This is bad for the economy and bad for our claim to be a meritocracy"
This is true but beside the point. In a balanced society all companies would have some over/under representation but the overall picture would be balanced. Pointing to a company like the Royal Mint (pay gap -22%, i.e. it pays women more) as an example of anti-man bias doesn't make sense. In a balanced society we would in fact expect to see more companies like the Royal Mint and fewer companies like FujiFilm (pay gap 38%). Unless you believe that women are inherently less able than men, the gender pay gap indicates that in many cases the best man for the job is in fact a woman. This is bad for the economy and bad for our claim to be a meritocracy. If you think that women are inherently less able than men, you need evidence to back up your claim that women are 9.1% less productive than men (it would also be helpful to explain how women's ability has increased dramatically over the last 50 years, as indicated by a shrinking pay gap).
Although equal pay for equal work is the law, the gender pay gap data suggest that equal opportunity for jobs hasn't yet been achieved. Why not is an ongoing discussion (for a good overview see here). A good place to start fixing the income pay gap is to improve the relative representation of women in companies, by developing talent and hiring senior managers who are more in line with the gender makeup of the company.
ERM ACTUALLY...
The next steps are to provide a more detailed explanation of the process in the SSE tech section. Meanwhile if you think that I am a total libtard or a mansplaining idiot, please feel free to fork my code and provide a reproducible counter argument.
GitHub repo
I would make a cool sortable table to look up the relative representation of individual companies, but I am rubbish at HTML, so I can't, sorry!
Toward a priori noise characterization for real-time edge-aware denoising in fluoroscopic devices
Emilio Andreozzi ORCID: orcid.org/0000-0003-4829-39411,
Antonio Fratini ORCID: orcid.org/0000-0001-8894-461X2,
Daniele Esposito ORCID: orcid.org/0000-0003-0716-84311,
Mario Cesarelli ORCID: orcid.org/0000-0001-9068-313X1 &
Paolo Bifulco ORCID: orcid.org/0000-0002-9585-971X1
Low-dose X-ray images have become increasingly popular in the last decades, due to the need to guarantee the lowest reasonable patient exposure. Dose reduction causes a substantial increase of quantum noise, which needs to be suitably suppressed. In particular, real-time denoising is required to support common interventional fluoroscopy procedures. The knowledge of noise statistics provides precious information that helps to improve denoising performances, thus making noise estimation a crucial task for effective denoising strategies. Noise statistics depend on different factors, but are mainly influenced by the X-ray tube settings, which may vary even within the same procedure. This complicates real-time denoising, because noise estimation should be repeated after any changes in tube settings, which would be hardly feasible in practice. This work investigates the feasibility of an a priori characterization of noise for a single fluoroscopic device, which would obviate the need for inferring noise statistics prior to each new image acquisition. The noise estimation algorithm used in this study was tested in silico to assess its accuracy and reliability. Then, real sequences were acquired by imaging two different X-ray phantoms via a commercial fluoroscopic device at various X-ray tube settings. Finally, noise estimation was performed to assess the matching of noise statistics inferred from two different sequences, acquired independently in the same operating conditions.
The noise estimation algorithm proved capable of retrieving noise statistics, regardless of the particular imaged scene, also achieving good results even by using only 10 frames (mean percentage error lower than 2%). The tests performed on the real fluoroscopic sequences confirmed that the estimated noise statistics are independent of the particular informational content of the scene from which they have been inferred, as they turned out to be consistent in sequences of the two different phantoms acquired independently with the same X-ray tube settings.
The encouraging results suggest that an a priori characterization of noise for a single fluoroscopic device is feasible and could improve the actual implementation of real-time denoising strategies that take advantage of noise statistics to improve the trade-off between noise reduction and details preservation.
Fluoroscopy is a medical imaging modality that provides continuous, real-time X-ray screening of a patient's organs and of various radiopaque objects involved in surgical procedures (e.g., surgical instruments, catheters, wire-guides, prosthetic implants, implanted devices), which makes it an invaluable tool for image-guided procedures in surgery [1, 2], as well as in diagnosis [3,4,5] and therapy [6]. However, its use in clinical practice should always be carefully evaluated, as X-rays are ionizing radiations that may cause serious damage to human tissues and organs [7,8,9,10]; this is why the rigorous monitoring of the X-ray dose delivered to patients and to the exposed medical staff has gained progressively more attention in the last decades, also being subject to formal regulations from national and international health organizations [11,12,13]. The X-ray dose depends on a number of parameters and conditions, such as the X-ray tube settings (tube current and voltage), the exposure time, the distance between the X-ray source and the irradiated tissue, the additional filtration, and the number of anti-scatter grids [14]. Generally, most of these parameters are selected to optimize determined features of the imaged scene; thus, only the tube current and, sometimes, the exposure time can be modified to reduce the overall dose delivered to the patient. As an example, a common practice to limit the overall exposure time in fluoroscopy during surgical procedures is to turn off the X-ray source periodically and/or to use pulsed protocols, which place a limitation on the frame rate though [14]. However, the exposure times are still very long and unpredictable in interventional fluoroscopy [10, 15], as they depend on the particular needs of the surgeon in each procedure.
In practice, the dose is mainly limited by reducing the tube current, which implies a reduction of the X-ray radiation intensity, i.e., the number of X-ray photons that reach the detector. This low photons availability gives rise to a signal-dependent, Poisson-distributed noise, usually referred to as "quantum noise" or "Poisson noise" [16]. The signal-to-noise ratio (SNR) of quantum noise decreases as the square root of the mean luminance, which means that the lower the dose, the lower the image quality [16]. Moreover, quantum noise is inherent to the image formation process and cannot be avoided or even limited by improving detectors technology, thus requiring the application of proper denoising strategies in the digital domain [16].
Simple smoothing filters usually do not achieve acceptable results, as they introduce significant blurring effects (in space and time), thus accomplishing noise reduction to the detriment of fine image details (e.g., edges, textures, etc.). As for many denoising approaches devised for AWGN, the knowledge of noise statistics provides precious information that helps to improve the denoising performances [16,17,18,19,20,21,22,23,24,25,26,27,28,29]. While scientific literature is rich in approaches for AWGN estimation, much lower effort has been devoted to Poisson noise [16, 26,27,28, 30,31,32,33,34], even though it is by far the dominant noise source in low-dose X-ray images [16, 35,36,37], as well as in other low-light images, e.g., night photography, fluorescence microscopy, astronomical imaging.
Quantum noise estimation could be used, e.g., to allow denoising algorithms to discriminate between the noisy pixels to be filtered and those lying on the edges, which need to be preserved as much as possible to maintain the image details. This is the case, as an example, for the noise variance conditioned average (NVCA) algorithm [16,17,18,19,20,21]. This denoising strategy is based on a conditioned moving average filter that acts on a determined spatio-temporal neighborhood by including in the average computation only those pixels whose difference in luminance with the central pixel is lower than a multiple of the local noise standard deviation (SD). NVCA derives local estimates of noise SD by assuming a linear relationship between the variance and the expected value of the noise (Poisson–Gaussian noise model), whose slope and intercept, referred to as noise parameters, must be determined prior to the filtering operation. Other approaches involve the use of variance stabilizing transformations [27, 38,39,40], the most common of which is the generalized Anscombe transform [41, 42]. This point-wise operation transforms a Poisson–Gaussian distribution into a practically Gaussian distribution with unit variance, thus allowing the use of any AWGN denoising scheme also for Poisson–Gaussian noise. However, the generalized Anscombe transform also requires the a priori knowledge of noise parameters.
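As a rough illustration of the conditioned-average idea described above, the following is a minimal sketch under my own assumptions about the neighbourhood size and the threshold multiplier; it is not the published NVCA implementation.

```python
import numpy as np

def conditioned_average(seq, a, b, k=2.0, radius=1):
    """Edge-aware conditioned average over a spatio-temporal neighbourhood.

    seq    : 3-D array (frames, rows, cols) of noisy luminance values.
    a, b   : slope and intercept of the expected value-variance relationship,
             used to derive a local noise standard deviation per pixel.
    k      : pixels are averaged only if they differ from the central pixel
             by less than k times the local noise SD (illustrative value).
    radius : half-size of the cubic neighbourhood (illustrative value).
    """
    seq = seq.astype(float)
    out = np.empty_like(seq)
    T, H, W = seq.shape
    for t in range(T):
        for i in range(H):
            for j in range(W):
                centre = seq[t, i, j]
                sigma = np.sqrt(max(a * centre + b, 0.0))   # local noise SD
                block = seq[max(t - radius, 0):t + radius + 1,
                            max(i - radius, 0):i + radius + 1,
                            max(j - radius, 0):j + radius + 1]
                mask = np.abs(block - centre) < k * sigma
                out[t, i, j] = block[mask].mean() if mask.any() else centre
    return out
```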
Essentially, the denoising approaches that make direct use of Poisson statistics, as well as those based on the combination of generalized Anscombe transform with AWGN denoising schemes, both require the noise parameters to be accurately estimated from noisy images prior to their actual processing, in order to achieve a reasonable trade-off between noise reduction and edge preservation, especially in images that are heavily affected by noise (e.g., low-dose X-ray images). While it is not usually a major concern in offline implementations, it could be a serious limitation in real-time operation, which undoubtedly represents the most appealing application of fluoroscopic sequence denoising. Indeed, during an image-guided procedure the variations of tube settings and of detector gain would modulate the statistics of quantum noise. Hence, the noise parameters estimation should be repeated ideally after any change in X-ray tube settings to ensure the highest denoising performances, but this is hardly feasible in practice.
This study aims to test the hypothesis that the noise parameters mostly depend on the X-ray tube settings and presents a feasibility analysis of an a priori noise parameters characterization. Indeed, this approach would obviate the need for inferring noise statistics prior to each new image sequence acquisition, thus enabling the effective real-time operation of edge-aware denoising strategies that take advantage of noise statistics to improve the image quality in fluoroscopic sequences. The influence of the X-ray tube settings on the noise parameters has never been investigated before in the literature, although this greatly affects real applications. The study also suggests a practical approach to provide denoising algorithms with accurate noise estimates to ensure the highest performances in real time.
The noise estimation algorithm has already been used in previous publications about the NVCA denoiser [16, 17, 20, 21], but its performances have never been assessed thoroughly. In this study, the algorithm was first tested in silico on several synthetic fluoroscopic sequences, which were corrupted by different levels of simulated mixed Poisson–Gaussian noise. The ability of the algorithm to retrieve the noise parameters with reasonable accuracy was assessed by varying the number and distribution of grey levels within the designed sequences, as well as the number of frames exploited for noise estimation. Afterward, real fluoroscopic sequences were acquired by imaging two different X-ray phantoms via the same commercial fluoroscopic device with various X-ray tube settings. Then, the matching of noise parameters was assessed between data acquired independently in the same operating conditions, as it would support the prospect of pre-calibrating the noise parameters at many different tube settings and using them directly in real-time denoising of new fluoroscopic sequences acquired in the same conditions.
Performance assessment on synthetic sequences
Figure 1 shows an example of a typical measured EVaR with its linear regression. In Additional file 1: Table S1 (available in Additional file 1, along with Tables S2, S3 and S4) the noise parameters estimates extracted from all the 14 synthetic sequences with variable number of grey levels are reported. For each sequence, the parameters were subdivided by the corresponding noise level and the number of frames used for noise estimation. Additional file 1: Table S2 outlines the relative estimation errors, except for the errors related to null nominal values of parameter b (i.e., noise levels 1 to 3), which were reported as absolute errors and highlighted in blue. A substantial difference was observed between the estimation errors obtained in sequences 1–7 and those obtained in sequences 8–14, with the latter being on average consistently higher than the former. Mean and standard deviation (SD) of the estimation errors are reported in Table 1.
An example of a typical measured EVaR with its linear regression and the estimated noise parameters (a, b)
Table 1 Mean and standard deviation of the noise parameters estimation errors
Since noise estimation serves as a support for improving noise suppression performances, its efficacy should be rather assessed by analyzing the effect of estimation errors on the final denoising results. To this aim, the noisy synthetic test sequences described in the "Materials and methods" section, and depicted in Fig. 10, were filtered via the NVCA algorithm by using the most inaccurate noise parameters estimates, so as to identify the worst cases from the denoising point of view. Then, the worst results obtained for estimates extracted by using 25 and 10 frames, from sequences 1–7 and 8–14, were identified according to measures of dissimilarity between the sequences filtered with inaccurate noise parameters, referred to as the sub-optimal filtered sequences, and the sequence filtered with the actual noise parameters, referred to as the optimal filtered sequence. Two well-established image quality assessment indices were adopted, namely the mean squared error (MSE), which is a global measure of dissimilarity between images, and the full width at half maximum (FWHM) of the edge spread function, which is a no-reference local measure of edge sharpness [20]. As a first dissimilarity measure to quantify the global deviation from the optimal denoising result, the MSE between the sub-optimal and the optimal filtered sequences was computed. However, MSE is known to have high sensitivity to the overall image noise, but poor sensitivity to edge blurring effects, especially in noisy conditions like those encountered in low-dose fluoroscopy [22]. Since the edge-awareness is a major concern of medical image denoising, the local loss of edge sharpness, due to the noise parameters estimation errors, was considered as a further measure of dissimilarity, and was evaluated by estimating the Δ FWHM, that is the difference in FWHM between the sub-optimal and the optimal filtered sequences. The quantitative results of this analysis are summarized in Tables 2 and 3, where it could be noticed that the worst results were always obtained by using the parameters extracted from the sequences 8–14, which were also those affected by the highest estimation errors.
Table 2 Results of the denoising performance analysis on the sequence with the moving rectangle
Table 3 Results of the denoising performance analysis on the sequence with the moving circle
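For reference, the global dissimilarity measure used here is simply the mean squared error between two filtered sequences; a minimal sketch (the array names are hypothetical placeholders) is shown below. The local edge-sharpness measure (FWHM of the edge spread function) requires fitting an edge profile and is not sketched here.

```python
import numpy as np

def mse(seq_a, seq_b):
    """Mean squared error between two equally sized image sequences."""
    a = np.asarray(seq_a, dtype=float)
    b = np.asarray(seq_b, dtype=float)
    return np.mean((a - b) ** 2)

# Hypothetical usage: 'suboptimal' filtered with estimated noise parameters,
# 'optimal' filtered with the actual ones.
# print(mse(suboptimal, optimal))
```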
Figures 2 and 3 depict the sub-optimal filtered sequences, as well as the corresponding image differences with the optimal filtered sequence, where it can be observed that the pixels with the highest differences in luminance are almost all distributed in the edges neighborhood, which is consistent with the measured increase in Δ FWHM.
Synthetic sequences with the moving rectangle filtered via the NVCA algorithm by considering the noise parameters estimates reported in Table 2. The images in each row were obtained by using the noise parameters in the corresponding row of Table 2. On the first column the end frames of the sub-optimal filtered sequences were depicted, while the differences of the same images with the end frame of the optimal filtered sequence were reported on the second column
Synthetic sequences with the moving circle filtered via the NVCA algorithm by considering the noise parameters estimates reported in Table 3. The images in each column were obtained by using the noise parameters in the corresponding row of Table 3. On the first row the end frames of the sub-optimal filtered sequences were depicted, while the differences of the same images with the end frame of the optimal filtered sequence were reported on the second row
However, it can be assessed by visual inspection that the sub-optimal results shown in Figs. 2a–c and 3a, b are very similar to the corresponding optimal results shown in Fig. 10c and d, respectively.
The results of the noise parameters extraction from the four synthetic sequences designed via the X-ray simulator are reported in Additional file 1: Tables S3 and S4. The mean and SD of relative errors, outlined in Table 4, turned out to be almost comparable with those obtained in the 14 synthetic sequences, thus proving that the presence of clinically relevant structures does not alter the estimates accuracy, which, more generally, is not influenced by the particular informational content of the scene.
Table 4 Mean and standard deviation of the errors on noise parameters estimated in the sequences designed via the X-ray simulator by using 25 frames
Noise estimation in real sequences
Figure 4 shows four frames of the real fluoroscopic sequences, acquired as described in paragraph 5 of the "Materials and methods" section. In particular, the frames in the left column depict the TOR-18FG phantom, while the ones in the right column refer to the TOR-CDR phantom.
Static frames from the real fluoroscopic sequences. The frames shown in the first and second rows were acquired at 50 mA and 10 mA (40 kVp), respectively. The frames in the left column depict the TOR-18FG phantom, while those in the right column depict the TOR-CDR
The frames in the first row were acquired with X-ray tube setting #5 (40 kVp, 50 mA), while those in the second row with setting #1 (40 kVp, 10 mA). Due to the very low tube currents involved, the original images turned out to be too dark for practical visualization, as indeed the luminance values were confined within very narrow ranges in the lower part of the representation interval. For this reason, the images in Fig. 4 have been processed with a full-scale histogram stretch, disregarding the grey levels of the few lightest pixels in the leftmost part of the images. This processing obviously altered the mean luminance, which was originally much lower in the images acquired at 10 mA compared to those acquired at 50 mA, but made the noise much more visible, allowing easier comprehension of the effect of X-ray tube current reduction on the SNR of the images.
The noise parameters extracted from the real fluoroscopic sequences are reported in Table 5, along with the relative errors of the parameters retrieved from TOR-CDR sequences with respect to those obtained from the TOR-18FG sequences for corresponding X-ray tube settings.
Table 5 Noise parameters estimates retrieved from the real fluoroscopic sequences, with relative errors on single parameters extracted from TOR-CDR sequences with respect to TOR-18FG ones for each tube setting
The relative errors (mean and SD) were -0.36% ± 0.90% and 0.30% ± 2.8%, for parameters a and b, respectively, and turned out to be comparable to those obtained in the analyses of the performances of the noise parameters estimation algorithm. The noise parameters extracted from the two phantoms sequences are also plotted in Fig. 5, where it can be verified that their trends with the tube current are very similar.
Noise parameters estimated from the real fluoroscopic sequences of the two phantoms, showing very similar trends with the X-ray tube current
This study investigated the feasibility of an a priori noise characterization at different X-ray tube settings for a single fluoroscopic device, which would obviate the need for inferring noise statistics prior to each new image sequence acquisition, in order to enable the implementation of real-time algorithms that exploit the a priori knowledge of noise statistics to provide an effective, edge-aware denoising. To this aim, first the accuracies of the noise parameters provided by the considered noise estimation algorithm were assessed to ascertain their reliability. In particular, a first set of 7 synthetic sequences with increasing number of grey levels equally spaced in a 128-wide interval, and a further set of 7 synthetic sequences with 8 grey levels equally spaced in intervals of decreasing width were designed via software and, then, corrupted by six different levels of simulated Poisson–Gaussian noise, for a total of 84 noisy sequences. The noise parameters were extracted from each sequence, at each distinct noise level, by considering four different numbers of available frames (i.e., 100, 50, 25, 10). The algorithm achieved very low estimation errors in the first seven sequences, while performing substantially worse on the last seven sequences. Indeed, it is worth noting that, even with only 10 frames, the errors achieved in the first seven sequences remained low, whereas those obtained in the last seven sequences turned out to exceed these values, even for estimations performed by using the maximum number of available frames. Furthermore, it could be assessed by visual inspection that even the worst errors achieved in sequences 1–7 produced sub-optimal filtered sequences which were very similar to the optimal one, as opposed to the sub-optimal sequences produced by the estimates from sequences 8–14, which clearly showed edge blurring (confirmed by increases in Δ FWHM). These results clarify that ensuring a reasonable contrast in the test sequences to be used for noise characterization is mandatory to achieve reliable estimates. Moreover, the very small number of frames required by the noise estimation algorithm to achieve a reasonable accuracy allows for its application also to very short static scenes. The algorithm accomplished comparable performances also on four synthetic sequences designed via an X-ray simulator to include realistic medical information, thus proving that the presence of clinically relevant structures does not alter the performances of the noise estimation algorithm.
Once the accuracy and reliability of the noise parameters estimation had been assessed, the algorithm was applied to the real fluoroscopic sequences acquired by imaging two commercial X-ray phantoms with different tube settings. The noise parameters extracted from pairs of sequences acquired independently with the same tube settings turned out to be comparable, with mean relative errors of less than 1%.
The noise estimation algorithm considered in this study proved reliable in extracting noise parameters estimates with a reasonable accuracy, even from very short static scenes of only 10 frames. The tests performed on the real fluoroscopic sequences confirmed that the estimated noise parameters are independent of the particular informational content of the scene from which they have been extracted, as they turned out to be consistent in sequences acquired independently with the same X-ray tube settings. To the best of our knowledge, this is the first attempt to pre-characterize the noise of a single fluoroscopic device at different operating conditions, to obviate the need to repeat noise estimation after any change in X-ray tube settings. Moreover, it is also the first time that the trends of Poisson–Gaussian noise parameters with the X-ray tube settings are reported in the literature. The encouraging results of this study suggest that an a priori characterization of noise for a single fluoroscopic device is feasible and could support the actual implementation of real-time edge-aware denoising strategies that take advantage of noise statistics to improve the trade-off between noise reduction and details preservation. Future studies could focus on a further characterization of noise, e.g., on an extended grid of X-ray tube settings (mA, kVp), also evaluating the possibility of obtaining part of these estimates via interpolation, as well as on a hardware implementation of the proposed approach, to directly assess its performances in real-time denoising of low-dose fluoroscopic sequences.
Noise model
In an X-ray system, the number of photons that emerge from a patient and reach a single pixel of the detector plane can be modeled by a temporally stochastic Poisson process [16, 36, 37, 43], whose probability density function (pdf) is described in Eq. (1):
$$p\left(n\right)=\frac{{\uplambda }^{n}}{n!}{e}^{-\uplambda },$$
where \(\uplambda\) is the expected photon count. Simple calculations allow deriving a very important feature of Poisson distribution, namely the expected value – variance relationship (EVaR), which is reported in Eq. (2):
$${\upsigma }_{p}^{2}\left({\upmu }_{p}\right)={\upmu }_{p}.$$
Therefore, the variance of the number of photons that reach a single pixel is equal to the expected photon count. However, in practice, the information carried by this random process occurring at a single detector pixel is usually coded in a digital image, and particularly in the grey level of the corresponding image pixel, which is proportional to the actual photon count, thus being characterized by a modified EVaR, as reported in Eq. (3):
$$\mathrm{g}\left(\uplambda \right)=a\cdot \mathrm{p}\left(\uplambda \right) \to {\sigma }_{g}^{2}\left({\mu }_{g}\right)=a\cdot {\mu }_{g},$$
where g is the grey level of the digital image pixel corresponding to the detector pixel that is reached by a number of photons described by p, and a is the coefficient of proportionality between g and p, also known as "detector gain". The EVaR clarifies the signal-dependent nature of quantum noise (heteroscedasticity), which, unlike the well-known AWGN, cannot be characterized by a single, global noise variance estimate (homoscedasticity), but rather requires the estimation of the detector gain, in order to be able to estimate the local, signal-dependent noise variance from the local mean luminance.
X-ray images are also affected by other sources of noise that are usually modeled as AWGN, hence, they introduce a constant noise floor, i.e., a constant contribution to the noise variance, which can be included in the noise model, as shown in Eq. (4):
$${\sigma }_{g}^{2}\left({\mu }_{g}\right)=a\cdot {\mu }_{g} + b,$$
where \(b\) corresponds to the variance of the AWGN component. This model is known as Poisson–Gaussian mixture and has been used in various denoising approaches devised for low-intensity images [16,17,18,19,20,21,22,23,24, 26, 32,33,34, 42]. However, it requires the knowledge of the EVaR parameters, referred to as noise parameters, which are generally unknown and, thus, need to be estimated from the X-ray images.
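To make the model concrete, mixed Poisson–Gaussian noise with a prescribed EVaR can be simulated as in the sketch below; the gain-and-offset construction and the parameter values are illustrative assumptions, not necessarily the procedure used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_poisson_gaussian_noise(clean, a, b):
    """Corrupt an image with mixed Poisson-Gaussian noise.

    The grey level is modelled as g = a * p, with p Poisson-distributed,
    plus additive Gaussian noise of variance b, so that
    var(g) = a * mean(g) + b, as in Eq. (4).
    """
    clean = np.asarray(clean, dtype=float)
    poisson_part = a * rng.poisson(clean / a)                  # quantum noise with gain a
    gaussian_part = rng.normal(0.0, np.sqrt(b), clean.shape)   # AWGN noise floor
    return poisson_part + gaussian_part

# Quick check of the expected value-variance relationship on a flat patch
flat = np.full((100, 64, 64), 120.0)       # arbitrary constant luminance
noisy = add_poisson_gaussian_noise(flat, a=0.5, b=4.0)
print(noisy.var(), 0.5 * 120 + 4.0)        # sample variance vs a*mu + b
```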
Noise parameters estimation
The algorithm analyzed in this work infers the statistics of noise by taking advantage of the temporal dimension that is available in image sequences, such as those acquired in fluoroscopy. This approach can be applied only to static scenes, as it assumes the ideal, noiseless luminance of each pixel to be constant in time and ascribes all its fluctuations to the noise. Based on this assumption, the algorithm first calculates the sample mean and variance for each pixel along the temporal dimension, which describe the EVaR of the noise, and, then, it performs a linear regression to estimate the slope (a) and intercept (b) of the EVaR, i.e., the noise parameters. The number of frames available for noise characterization (i.e., the length of the static scene extracted from the fluoroscopic sequence of interest) poses a limitation on the actual number of observations of the random processes that describe each pixel luminance. This results in a certain variability of the variance values corresponding to the same mean value. This issue has been addressed in the performance analysis presented in this work, by evaluating the accuracy of noise parameters estimation also as a function of the number of available frames.
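A minimal sketch of this estimation step, assuming the static scene is stored as a (frames, rows, columns) array, might look as follows; it is illustrative, not the authors' implementation, and the synthetic test data at the end use arbitrary values.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_noise_parameters(static_seq):
    """Estimate the EVaR slope a and intercept b from a static sequence.

    For each pixel, the sample mean and variance are computed along the
    temporal dimension; a straight line fitted to the (mean, variance)
    pairs then gives the noise parameters of the Poisson-Gaussian model.
    """
    seq = np.asarray(static_seq, dtype=float)
    means = seq.mean(axis=0).ravel()             # per-pixel temporal mean
    variances = seq.var(axis=0, ddof=1).ravel()  # per-pixel temporal variance
    a, b = np.polyfit(means, variances, 1)       # linear regression: var = a*mean + b
    return a, b

# Synthetic static scene with a few grey levels, corrupted by
# Poisson-Gaussian noise with (a, b) = (0.5, 4.0) -- illustrative values.
clean = np.tile(np.repeat([64.0, 96.0, 128.0, 160.0], 32), (25, 128, 1))
noisy = 0.5 * rng.poisson(clean / 0.5) + rng.normal(0.0, 2.0, clean.shape)
print(estimate_noise_parameters(noisy))          # expected to be close to (0.5, 4.0)
```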
Synthetic sequences design
Static sequences with variable number of grey levels
The estimation of noise parameters depends on the number and distribution of EVaR points (i.e., the expected value–variance couples) on which the linear regression is performed, i.e., on the number and distribution of grey levels within the scene. For this reason, 14 synthetic sequences were designed to represent static scenes with different number and distribution of grey levels. Each sequence was composed of 100 frames of 128 × 128 pixels represented on 8 bits. The grey levels were assigned to the 128 columns of the scenes in a periodic fashion, from the darkest to the lightest level and then starting again from the darkest one. The first seven sequences, depicted in Fig. 6, included 2 up to 128 grey levels in powers of 2, equally spaced in the interval [64;192], which is 128 wide and centered at the half of the whole representation interval.
Static frames of the seven synthetic sequences with increasing number of grey levels (2 to 128 in powers of 2) equally spaced in the range [64; 192]
The other seven sequences, shown in Fig. 7, included 8 grey levels, equally spaced in the intervals described in (5), which are centered at the half of the representation interval and have a decreasing width from 48 down to 16:
Static frames of the seven synthetic sequences with 8 grey levels equally spaced in narrowing ranges
$$[64 + 8k; 192-8k], \quad k = 1,2, \ldots 7.$$
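For illustration, scenes of this kind could be generated along the following lines; this is a sketch, not the authors' original code, and the uint8 rounding of non-integer levels is my simplification.

```python
import numpy as np

def make_static_sequence(grey_levels, n_frames=100, size=128):
    """Static scene: grey levels assigned to the columns periodically,
    from the darkest to the lightest and then starting again."""
    levels = np.asarray(grey_levels, dtype=np.uint8)
    row = levels[np.arange(size) % len(levels)]           # periodic column pattern
    frame = np.tile(row, (size, 1))                       # one 128 x 128 frame
    return np.repeat(frame[None, :, :], n_frames, axis=0)

# Sequences 1-7: 2, 4, ..., 128 levels equally spaced in [64; 192]
seqs_a = [make_static_sequence(np.linspace(64, 192, 2 ** p)) for p in range(1, 8)]

# Sequences 8-14: 8 levels equally spaced in [64 + 8k; 192 - 8k], k = 1..7
seqs_b = [make_static_sequence(np.linspace(64 + 8 * k, 192 - 8 * k, 8)) for k in range(1, 8)]
```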
Sequences with moving objects
Considering that the actual concern of denoising is not the mere errors on noise parameters, but rather the reconstruction errors on the processed images, a qualitative and quantitative performance assessment of the noise estimation algorithm analyzed in this study was carried out by comparing the denoising results achieved via the NVCA algorithm with the actual and the estimated noise parameters. To this aim, two synthetic noiseless test sequences were designed, which represented a dark rectangle and a dark circle moving from the left to the right at a speed of 1 pixel per frame over a brighter, uniform background. The noiseless sequences are depicted in Fig. 8.
Synthetic sequences with: a a moving rectangle and b a moving circle, adopted to test the effect of the noise estimation errors on NVCA filtering performances
Sequences from X-ray simulator
The 14 synthetic sequences described in the previous paragraph were characterized by scenes that are very uncommon in medical applications, therefore four additional sequences (see Fig. 9) were produced via an X-ray simulator [44,45,46], which allowed testing the noise estimation algorithm on scenes with content of clinical relevance, while still having a ground truth to derive quantitative measures for performance assessment.
Static frames of the four synthetic sequences devised via the X-ray simulator
Each of the 14 sequences with variable number of grey levels was corrupted with six different levels of simulated mixed Poisson–Gaussian noise, by using all the combinations of values considered for the noise parameters (reported in Table 6). Noise estimation was performed in each of the resulting 84 noisy sequences by considering 4 different numbers of available frames, i.e., 10, 25, 50, 100. Therefore, a total of 336 noise estimates were actually retrieved (i.e., 56 for each noise level).
Table 6 Noise parameters of the noise levels used to corrupt the synthetic scenes
The sequences with moving objects were corrupted with two different levels of mixed Poisson–Gaussian noise, corresponding to level 3 and level 6 reported in Table 6. The contrast in the two noiseless sequences was set to obtain a contrast-to-noise ratio (CNR) of 4 for both sequences. Figure 10 shows a frame from the noisy versions of the sequences with moving objects and the result of NVCA filtering with the actual noise parameters.
Synthetic sequences with moving objects adopted to test the effect of the noise estimation errors on NVCA filtering performances. In the first row, panels a, b, the noisy sequences are reported, in which the contrast was set in order to obtain a CNR = 4. In the last row, panels c, d, the sequences denoised via the NVCA algorithm by using the actual noise parameters are shown
The four sequences devised via the X-ray simulator were corrupted with the same noise levels reported in Table 6 (an example of noiseless and noisy versions of a sequence is depicted in Fig. 11), and noise estimation was performed by using 25 frames, which turned out to be the minimum number of frames to retrieve noise parameters with a reasonable accuracy, according to the results reported in paragraph 1 of the "Results" section.
Comparison of noiseless and noisy versions (noise level 6) of the same static frame of sequence #2 from X-ray simulator, depicted in Fig. 9b
Real fluoroscopic sequences
The real fluoroscopic sequences were acquired by imaging two commercial X-ray phantoms, namely TOR-18FG [47] and TOR-CDR [48] (Leeds Test Objects, 7 Becklands Cl, Roecliffe, York YO51 9NR, UK), via a commercial fluoroscopic device (INTERMEDICAL S.r.l. IMD Group, Via E. Fermi 26, 24050 Grassobbio (BG), Italy). The fluoroscope acquired frames of 1536 × 1536 pixels, represented on 16 bits, with a pulsed protocol at 15 frames per second (fps). Each phantom was placed over an anti-scatter grid, just above the flat panel detector, between two blocks of five Plexiglass square sheets of 25 cm × 25 cm × 1 cm (see Fig. 12), which were used to produce Compton scattering equivalent to that which would occur when imaging the human body. Five sequences were acquired for each phantom by using the X-ray tube settings reported in Table 7.
Pictures of the X-ray phantoms with Plexiglass sheets: a top view of TOR-18FG; b top view of TOR-CDR; c side view of TOR-18FG; d side view of TOR-CDR
Table 7 X-ray tube settings used to acquire the real fluoroscopic sequences
The noise estimation algorithm was applied to extract the noise parameters estimates from the ten acquired fluoroscopic sequences. Then, a comparison was carried out between parameters extracted from each couple of sequences acquired with the same X-ray tube settings.
All data generated or analyzed during this study are included in this published article, apart from the acquired fluoroscopic sequences. The data are available from Technix S.p.A. (Via Fermi, 45 24050 Grassobbio (BG), Italy), but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are, however, available from the authors upon reasonable request and with permission of Technix S.p.A.
Moradi M, Mahdavi SS, Dehghan E, Lobo JR, Deshmukh S, Morris WJ, et al. Seed localization in ultrasound and registration to C-Arm fluoroscopy using matched needle tracks for prostate brachytherapy. IEEE Trans Biomed Eng. 2012;59:2558–67. https://doi.org/10.1109/TBME.2012.2206808.
Weese J, Penney GP, Desmedt P, Buzug TM, Hill DLG, Hawkes DJ. Voxel-based 2-D/3-D registration of fluoroscopy images and CT scans for image-guided surgery. IEEE Trans Inf Technol Biomed. 1997;1:284–93. https://doi.org/10.1109/4233.681173.
Bifulco P, Cesarelli M, Cerciello T, Romano M. A continuous description of intervertebral motion by means of spline interpolation of kinematic data extracted by video fluoroscopy. J Biomech. 2012;45:634–41. https://doi.org/10.1016/j.jbiomech.2011.12.022.
Andreozzi, E., Pirozzi, M. A., Fratini, A., Cesarelli, G., P. Bifulco: Quantitative performance comparison of derivative operators for intervertebral kinematics analysis, 2020 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Bari, Italy, 2020, pp. 1-6, https://doi.org/10.1109/MeMeA49120.2020.9137322.
Yamazaki T, Watanabe T, Nakajima Y, et al. Improvement of depth position in 2-D/3-D registration of knee implants using single-plane fluoroscopy. IEEE Trans Med Imaging. 2004;23:602–12. https://doi.org/10.1109/tmi.2004.826051.
Wang J, Zhu L, Xing L. Noise reduction in low-dose X-ray fluoroscopy for image-guided radiation therapy. Int J Radiat Oncol Biol Phys. 2009;74:637–43. https://doi.org/10.1016/j.ijrobp.2009.01.020.
Dörr W. Radiobiology of tissue reactions. Ann ICRP. 2015;44(1 Suppl):58–68. https://doi.org/10.1177/0146645314560686.
Shin E, Lee S, Kang H, et al. Organ-specific effects of low dose radiation exposure: a comprehensive review. Front Genet. 2020. https://doi.org/10.3389/fgene.2020.566244.
Loganovsky KN, Marazziti D, Fedirko PA, et al. Radiation-induced cerebro-ophthalmic effects in humans. Life. 2020;10(4):41. https://doi.org/10.3390/life10040041.
Jinnai Y, Baba T, Zhuang X, Tanabe H, Banno S, Watari T, Homma Y, Kaneko K. Does a fluoro-assisted direct anterior approach for total hip arthroplasty pose an excessive risk of radiation exposure to the surgeon? SICOT-J. 2020;6:6. https://doi.org/10.1051/sicotj/2020004.
European Society of Radiology (ESR) Summary of the European Directive 2013/59/Euratom: essentials for health professionals in radiology. Insights into imaging, 6(4), 411–417 (2015) https://doi.org/10.1007/s13244-015-0410-4
Killewich LA, Terrell A. Singleton, Governmental regulations and radiation exposure. J Vasc Surg. 2011;53(1):44S-46S. https://doi.org/10.1016/j.jvs.2010.06.177.
Bjarnason TA, Rees R, Kainz J, et al. COMP Report: a survey of radiation safety regulations for medical imaging X-ray equipment in Canada. J Appl Clin Med Phys. 2020;21(3):10–9. https://doi.org/10.1002/acm2.12708.
Heidbuchel H, Wittkampf FHM, Vano E, Ernst S, Schilling R, Picano E. Practical ways to reduce radiation dose for patients and staff during device implantations and electrophysiological procedures. EP Europace. 2014;16(7):946–64. https://doi.org/10.1093/europace/eut409.
Ozbir S, Atalay HA, Canat HL, Culha MG, Cakır SS, Can O, Otunctemur A. Factors affecting fluoroscopy time during percutaneous nephrolithotomy: impact of stone volume distribution in renal collecting system. Int Braz J Urol. 2019;45(6):1153–60. https://doi.org/10.1590/S1677-5538.IBJU.2019.0111.
Cesarelli M, Bifulco P, Cerciello T, Romano M, Paura L. X-ray fluoroscopy noise modeling for filter design. Int J Comput Assist Radiol Surg. 2013;8:269–78. https://doi.org/10.1007/s11548-012-0772-8.
Cerciello T, Bifulco P, Cesarelli M, Fratini A. A comparison of denoising methods for X-ray fluoroscopic images. Biomed Signal Process Control. 2012;7:550–9. https://doi.org/10.1016/j.bspc.2012.06.004.
Genovese M, Bifulco P, De Caro D, Napoli E, Petra N, Romano M, Cesarelli M, Strollo AGM. Hardware implementation of a spatio-temporal average filter for real-time denoising of fluoroscopic images. J VLSI. 2015;49:114–24. https://doi.org/10.1016/j.vlsi.2014.10.004.
Castellano G, De Caro D, Esposito D, Bifulco P, Napoli E, Petra N, Andreozzi E, Cesarelli M, Strollo AGM. An FPGA-oriented algorithm for real-time filtering of Poisson Noise in video streams, with application to X-ray fluoroscopy. Circuits Syst Signal Process. 2019;38:3269–94. https://doi.org/10.1007/s00034-018-01020-x.
Sarno A, Andreozzi E, De Caro D, Di Meo G, Strollo AGM, Cesarelli M, Bifulco P. Real-time algorithm for Poissonian noise reduction in low-dose fluoroscopy: performance evaluation. BioMed Eng OnLine. 2019;18:94. https://doi.org/10.1186/s12938-019-0713-7.
Andreozzi, E., Pirozzi, M.A., Sarno, A., Esposito, D., Cesarelli, M., Bifulco, P.: A Comparison of Denoising Algorithms for Effective Edge Detection in X-Ray Fluoroscopy. In: Henriques J., Neves N., de Carvalho P. (eds) XV Mediterranean Conference on Medical and Biological Engineering and Computing – MEDICON 2019. MEDICON 2019. IFMBE Proceedings, vol 76, pp 405–413. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-31635-8_49
Andreozzi, E., Pirozzi, M. A., Fratini, A., Cesarelli, G., Cesarelli, M., Bifulco, P.: A Novel Image Quality Assessment Index for Edge-Aware Noise Reduction in Low-Dose Fluoroscopy: Preliminary Results, 2020 International Conference on e-Health and Bioengineering (EHB), IASI, 2020, pp. 1–5, https://doi.org/10.1109/EHB50910.2020.9280107
Dabov K, Foi A, Katkovnik V, Egiazarian K. Image denoising by sparse 3D transform-domain collaborative filtering. IEEE Trans Image Process. 2007;16(8):2080–95. https://doi.org/10.1109/TIP.2007.901238.
Maggioni M, Boracchi G, Foi A, Egiazarian K. Video denoising, deblocking and enhancement through separable 4-D nonlocal spatiotemporal transforms. IEEE Trans Image Process. 2012;21(9):3952–66. https://doi.org/10.1109/TIP.2012.2199324.
Vieira MAC, Bakic PR, Maidment ADA, Schiabel H, Mascarenhas NDA. Filtering of Poisson Noise in digital mammography using local statistics and adaptive Wiener Filter. In: Maidment ADA, Bakic PR, Gavenonis S, editors. Breast imaging. IWDM 2012 Lecture Notes in Computer Science, vol. 7361. Berlin: Springer; 2012. https://doi.org/10.1007/978-3-642-31271-7_35
Luisier F, Blu T, Unser M. Image denoising in mixed Poisson-Gaussian noise. IEEE Trans Image Process. 2010;20(3):696–708.
Bo Z, Jalal MF, Jean-Luc S. Wavelets ridgelets and curvelets for Poisson noise removal. IEEE Transaction on image processing. 2008;17(7):1093–108.
Sutour C, Deledalle CA, Aujol JF. Estimation of the noise level function based on a nonparametric detection of homogeneous image regions. SIAM J Imag Sci. 2015;8(4):2622–61.
Tomic M, Loncaric S, Sersic D. Adaptive spatio-temporal denoising of fluoroscopic X-ray sequences. Biomed Signal Process Control. 2012;7(2):173–9.
Hensel, M., Pralow, T., Grigat, R.R.: Modeling and real-time estimation of signal-dependent noise in quantum-limited imaging. In Proceedings of the 6th WSEAS International Conference on Signal Processing, Robotics and Automation (ISPRA'07). World Scientific and Engineering Academy and Society (WSEAS), Stevens Point, Wisconsin, USA, 183–191. (2007).
Foi A, Alenius S, Katkovnik V, Egiazarian K. Noise measurement for raw-data of digital imaging sensors by automatic segmentation of non-uniform targets. IEEE Sens J. 2007;7:1456–61. https://doi.org/10.1109/JSEN.2007.904864.
Foi A, Trimeche M, Katkovnik V, Egiazarian K. Practical Poissonian-Gaussian noise modeling and fitting for single-image raw-data. IEEE Trans Image Process. 2008;17:1737–54. https://doi.org/10.1109/tip.2008.2001399.
Makitalo M, Foi A. Noise parameter mismatch in variance stabilization, with an application to Poisson-Gaussian noise estimation. IEEE Trans Image Process. 2014;23(12):5348–59. https://doi.org/10.1109/tip.2014.2363735.
Lee S, Lee MS, Kang MG. Poisson-Gaussian noise analysis and estimation for low-dose X-ray images in the NSCT domain. Sensors. 2018;18(4):1019. https://doi.org/10.3390/s18041019.
Lefkimmiatis S, Maragos P, Papandreou G. Bayesian inference on multiscale models for Poisson intensity estimation: applications to photon-limited image denoising. IEEE Trans Image Process. 2009;18:1724–41. https://doi.org/10.1109/TIP.2009.2022008.
Tapiovaara MJ. SNR and noise measurements for medical imaging: II. Application to fluoroscopic X-ray equipment. Phys Med Biol. 1993;38:1761–88. https://doi.org/10.1088/0031-9155/38/12/006.
Aufrichtig R, Wilson DL. X-ray fluoroscopy spatio-temporal filtering with object detection. IEEE Trans Med Imaging. 1995;14:733–46. https://doi.org/10.1109/42.476114.
J. Boulanger, J. B. Sibarita, C. Kervrann and P. Bouthemy, "Non-parametric regression for patch-based fluorescence microscopy image sequence denoising," in Fifth IEEE International symposium on Biomedical Imaging, Paris, 2008.
Amiot C, Girard C, Chanussot J, Pescatore J, Desvignes M. Curvelet based contrast enhancement in fluoroscopic sequences. IEEE Trans Med Imaging. 2014;34(1):137–47.
Amiot C, Girard C, Chanussot J, Pescatore J, Desvignes M. Spatio-temporal multiscale denoising of fluoroscopic sequence. IEEE Trans Med Imaging. 2016;35(6):1565–74.
Anscombe FJ. The transformation of Poisson binomial and negative-binomial data. Biometrika. 1948;35:246–54.
Mäkitalo M, Foi A. Optimal inversion of the generalised Anscombe for Poisson-Gaussian noise. IEEE Trans Image Process. 2013;22(1):91–103. https://doi.org/10.1109/TIP.2012.2202675.
Wang J, Blackburn TJ. The AAPM/RSNA physics tutorial for residents: X-ray image intensifiers for fluoroscopy. Radiographics. 2000;20:1471–7.
Vidal FP, Villard P-F. Development and validation of real-time simulation of X-ray imaging with respiratory motion. Comput Med Imaging Graph. 2016;49:1–15. https://doi.org/10.1016/j.compmedimag.2015.12.002.
Sújar, A., Meuleman, A., Villard, P.-F., Garcia, M., Vidal, F. : gVirtualXRay: Virtual X-Ray Imaging Library on GPU. CGVC, 61–68 (2017). https://doi.org/10.2312/cgvc.20171279.
Sújar, A., Kelly, G., García, M., Vidal, F.: Projectional Radiography Simulator: an Interactive Teaching Tool. CGVC. 125 - 128 (2019). https://doi.org/10.2312/cgvc.20191267.
Leeds Test Objects: TOR 18FG product specifications (2015). https://www.leedstestobjects.com/wp-content/uploads/TOR-18FG-product-specifications-1.pdf?x78567 Accessed 14 Oct 2020.
Leeds Test Objects: TOR CDR product specifications (2017). https://www.leedstestobjects.com/wp-content/uploads/TOR-CDR-product-specifications-1.pdf?x78567 Accessed 14 Oct 2020.
The authors would like to thank General Medical Italia Ltd. for their technical support.
The authors declare that this research has not been supported by any fund.
Department of Electrical Engineering and Information Technology, University of Naples Federico II, Via Claudio 21, 80125, Naples, Italy
Emilio Andreozzi, Daniele Esposito, Mario Cesarelli & Paolo Bifulco
Biomedical Engineering, School of Life and Health Sciences, Aston University, Birmingham, B4 7ET, UK
Antonio Fratini
Emilio Andreozzi
Daniele Esposito
Mario Cesarelli
Paolo Bifulco
EA conceived and designed the experiments; EA and PB acquired the experimental data; EA performed the analyses and wrote the draft manuscript; PB, AF and DE contributed to issue the final version of the manuscript; PB led the research group. All authors read and approved the final manuscript.
Correspondence to Paolo Bifulco.
Table S1. Noise parameters estimates extracted from the synthetic sequences with variable number of grey levels. Table S2. Errors on noise parameters estimates extracted from the synthetic sequences with variable number of grey levels. All values are expressed as relative errors, except for those reported in blue, which are expressed as absolute errors, since they refer to a null parameter (b = 0). Table S3. Noise parameters estimates extracted from the synthetic sequences designed via the X-ray simulator. Table S4. Errors on noise parameters estimates extracted from the synthetic sequences designed via the X-ray simulator. All values are expressed as relative errors, except for those reported in blue, which are expressed as absolute errors, since they refer to a null parameter (b = 0).
Andreozzi, E., Fratini, A., Esposito, D. et al. Toward a priori noise characterization for real-time edge-aware denoising in fluoroscopic devices. BioMed Eng OnLine 20, 36 (2021). https://doi.org/10.1186/s12938-021-00874-8
Received: 12 January 2021
Quantum noise
Poisson noise
Noise estimation
Noise characterization
Edge-aware denoising
Real-time denoising
Modeling brain connectivity dynamics in functional magnetic resonance imaging via particle filtering
Pierfrancesco Ambrosi ORCID: orcid.org/0000-0002-1972-05561,
Mauro Costagli2,3,
Ercan E. Kuruoğlu4,6,
Laura Biagi3,5,
Guido Buonincontri3,5 &
Michela Tosetti3,5
Interest in the study of functional connections in the brain has grown considerably in the last decades, as many studies have pointed out that alterations in the interaction among brain areas can play a role as markers of neurological diseases. Most studies in this field treat the brain network as a system of connections stationary in time, but dynamic features of brain connectivity can provide useful information, both on physiology and pathological conditions of the brain. In this paper, we propose the application of a computational methodology, named Particle Filter (PF), to study non-stationarities in brain connectivity in functional Magnetic Resonance Imaging (fMRI). The PF algorithm estimates time-varying hidden parameters of a first-order linear time-varying Vector Autoregressive model (VAR) through a Sequential Monte Carlo strategy. On simulated time series, the PF approach effectively detected and followed time-varying hidden parameters and captured causal relationships among signals. The method was also applied to real fMRI data, acquired in presence of periodic tactile or visual stimulations, in different sessions. On these data, the PF estimates were consistent with current knowledge on brain functioning. Most importantly, the approach enabled the detection of statistically significant modulations in the cause-effect relationship between brain areas, which correlated with the underlying visual stimulation pattern presented during the acquisition.
The understanding of brain functioning is linked to the study of the dynamic interaction among anatomically segregated brain areas. These interactions are labeled functional and effective connectivity and refer to distinct ways of considering connections among brain regions. While complementary to structural connectivity, which describes anatomical connections between brain regions [1], they concern functional connections that are not necessarily achieved through a direct anatomical link between brain areas. Functional connectivity regards connections as statistical codependencies between the signals of different brain regions, and consequently it is a non-directional and model-free description of the brain network. On the contrary, effective connectivity defines the temporal relationship and causal influences among brain regions in a given network model [2].
Functional Magnetic Resonance Imaging (fMRI) is frequently employed in brain connectivity studies, given its non-invasiveness and satisfactory spatiotemporal resolution, both in physiology and pathology (e.g. Alzheimer's disease [3,4,5], schizophrenia [6] and Major Depression Disorder [7]). From brain connectivity studies it emerged that brain dynamics, in particular effective connectivity, may provide a biological marker for specific brain diseases and a tool for monitoring responses to treatments of these pathologies [8,9,10,11].
Granger Causality (GC) and Dynamic Causal Modeling (DCM) are methods to investigate effective connectivity. Granger causality is present when knowledge of the temporal evolution of the signal in a certain brain region A improves the predictability of the signal in another brain region B [12, 13]. This approach is based on the evaluation of a linear codependence among time series, and it is therefore limited to a stationary framework or needs a sliding-window approach to address time-varying coupling between regions, which has limitations [14]. Differently, in DCM the predicted relationship between neural activity and observed fMRI signal needs to be specified in a pre-determined model, hence requiring previous knowledge about the timing and effect on signals of the connectivity modulation [15].
The Sequential Monte Carlo (SMC) methodology [16] is crucially different from these two strategies. SMC approaches estimate the hidden states of a dynamic system with only partial and noisy observations, without further assumptions on the presence of variations in connectivity. A specific SMC methodology called Particle Filter (PF) employs Monte Carlo sampling to approximate probability density functions and it updates the posteriors with the arrival of new samples.
The SMC algorithm proposed here was recently developed by Ancherbak et al. [17], originally for time-varying gene network modeling. We adapted it for the study of brain connectivity using fMRI data and the feasibility and behaviour of the proposed approach has been studied on synthetic data mimicking fMRI time-series. When applied to real fMRI datasets, results were compared to correlation between delayed time series, considered as a proxy measure for stationary effective connectivity. Two different experimental paradigms were tested: the first one, whose preliminary results were presented in abstract form [18], involved tactile stimulation during an fMRI acquisition with temporal resolution of 2 s. The second experiment employed a periodic visual stimulation with significantly improved time resolution of 0.8 s.
Model and algorithm
Particle filter [17, 19,20,21,22] is a sequential Monte Carlo methodology based on the Bayes theorem on conditional probability. Particle filters estimate the probability distributions of hidden variables of interest, modeled according to a hypothesized state-space equation. The probability density function (pdf) of the hidden variables is allowed to be time-varying and is therefore sequentially updated when new data become available. Such probability distribution is estimated from the data, modeled according to a hypothesized observation equation. In brain connectivity studies based on fMRI data, the relationship among the time-series of R different brain Regions of Interest (ROIs) \(\mathbf{x }_t=\{x_1(t),\dots ,x_R(t)\}\) can be modeled as a first order linear Vector Autoregressive (VAR) model [12, 21, 23,24,25] as:
$$x_i(t)=\sum _j^R a_{ij}(t)x_j(t-1)+\eta _i(t) \quad i=1,\ldots ,R $$
or in matrix notation:
$$\begin{aligned} \mathbf{x }(t)=\mathbf{a }(t)\mathbf{x }(t-1)+{\varvec{\eta }}(t) \end{aligned}$$
$$\begin{aligned} \mathbf{a }(t)= \begin{bmatrix} a_{11}(t) &{} a_{12}(t) &{} \dots &{} a_{1R}(t) \\ a_{21}(t) &{} a_{22}(t) &{} \dots &{} a_{2R}(t) \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ a_{R1} (t) &{} a_{R2}(t) &{} \dots &{} a_{RR}(t) \end{bmatrix} \end{aligned}$$
which is employed as the observation equation describing the relationship between the observations \(\mathbf{x }(t)\) at time t and those at time \(t-1\) (that is, \(\mathbf{x }(t-1)\)); \({\varvec{\eta }}(t)\) is the vector of observation noise; the matrix of hidden parameters of interest \(\mathbf{a }(t)\) represents the causal influence exerted between different areas, and its elements \(a_{ij}(t)\) are the coefficients which represent conditional dependence. In particular, it can be assumed that elements of \(\mathbf{a }(t)\) are allowed to be time-varying:
$$\begin{aligned} a_{ij}(t)=a_{ij}(t-1)+\nu _{ij}(t) \end{aligned}$$
where \(a_{ij}(t)\) is the ijth element of the matrix \(\mathbf{a }(t)\), describing the influence of the jth region over the ith region, and \(\nu _{ij}(t)\) is the process noise (innovation) term.
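To make Eqs. (2) and (4) concrete, a small time-varying VAR process can be simulated as in the sketch below; all numerical values are illustrative, and the stability rescaling is an added safeguard that is not part of the model.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_tv_var(T=200, R=3, sigma_eta=0.1, sigma_nu=0.01):
    """Simulate x(t) = a(t) x(t-1) + eta(t) with random-walk coefficients a(t)."""
    a = 0.2 * rng.standard_normal((R, R))       # initial coefficient matrix
    x = np.zeros((T, R))
    coeffs = np.zeros((T, R, R))
    for t in range(1, T):
        a = a + sigma_nu * rng.standard_normal((R, R))    # Eq. (4): random-walk coefficients
        rho = np.max(np.abs(np.linalg.eigvals(a)))
        if rho > 0.95:                                    # illustrative safeguard against divergence
            a *= 0.95 / rho
        x[t] = a @ x[t - 1] + sigma_eta * rng.standard_normal(R)   # Eq. (2): observations
        coeffs[t] = a
    return x, coeffs

x, coeffs = simulate_tv_var()
```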
The adoption of a linear model was supported by the well-established body of literature on fMRI data modelling and brain connectivity analysis at the temporal scales of fMRI data [25, 26]. The adopted autoregressive model was first-order, which was optimal on the basis of the Schwartz criterion, in accordance with previous findings [12, 25, 27].
The PF algorithm evolves from an initial probability distribution for \(a_{ij}(t-1)\), which we chose to be uniform at \(t=1\), and through Eq. (4) it generates new possible values for \(a_{ij}(t)\). The N particles are generated from the probability distributions of the elements of \(\mathbf{a }_i(t)\): the distributions are adapted with each new observation through the mechanism provided by particle filtering, to describe the set of coefficients \(a_{ij}\) at every time-step. The algorithm generates N particles by updating those at the previous time point (initialized to zero at t=0) using a noise innovation term, following Eq. 4. In our implementation, the innovation values are drawn from a Gaussian distribution. The standard deviation of this distribution follows the absolute difference between the two previous coefficient values estimated by the algorithm and is constrained between a minimum of 0.1 and a maximum of 0.4: these values were chosen empirically to prevent divergence and, at the same time, capture non-stationarities.
With Eq. (2), the PF algorithm generates predicted values of the observations at time t. The desired probability density function of the parameters of interest \(\mathbf{a }(t)\) can be estimated via Bayes theorem as follows:
$$\begin{aligned} p(\mathbf{a }(t)|\mathbf{x }(1,\dots ,t))=\frac{p(\mathbf{x }(t)|\mathbf{a }(t)) p(\mathbf{a }(t)|\mathbf{x }(1,\dots ,t-1))}{p(\mathbf{x }(t)|\mathbf{x }(1,\dots ,t-1))} \end{aligned}$$
and with the assumption of Gaussian noise we have
$$\begin{aligned} p(x_i(t)|{\mathbf {a}}_{i}(t))=\frac{1}{(2\pi \sigma _\eta ^2)^{R/2}} \text {exp}\Big (-\frac{(x_i(t)-{\hat{x}}_i(t))^2}{2\sigma _\eta ^2}\Big ) \end{aligned}$$
where \({\hat{x}}_i(t)\) is the value of \(x_i(t)\) estimated through Eq. (1) at time t for the i-th ROI and \({\mathbf {a}}_i(t)=\{a_{i1},\dots ,a_{iR}\}\) is the vector of hidden variables associated with the i-th ROI at time t, that is, the i-th row of matrix \(\mathbf{a }(t)\). In Eq. (6), the values of the \(\mathbf{a }_i(t)\) elements determine the estimate \({\hat{x}}_i(t)\) and hence the weight of the corresponding particle.
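This likelihood determines the particle weights. A compact MATLAB sketch of its evaluation, together with the sequential weight update that will be introduced in Eq. (8) below, is the following; A is assumed to be a cell array holding one coefficient matrix per particle, w the weight vector carried over from the previous time step, and all names are illustrative.

% Likelihood of particle n for ROI i at time t (Eq. 6) and weight update (Eq. 8), illustrative sketch.
% x_prev : R-by-1 observations at t-1;  x_it : observed value x_i(t);  sigma_eta : noise std
for n = 1:N
    x_hat = A{n}(i,:) * x_prev;                         % prediction of x_i(t) under particle n
    lik   = exp(-(x_it - x_hat)^2 / (2*sigma_eta^2));   % Gaussian likelihood (Eq. 6), up to a constant
    w(n)  = w(n) * lik;                                 % sequential weight update (Eq. 8)
end
w = w / sum(w);                                         % normalization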
In most applications, Eq. (5) cannot be solved analytically [28], but it can be computed through the Sequential Monte Carlo sampling scheme, which consists in representing the pdf \(p(\mathbf{a }(t)|\mathbf{x }(1,\dots ,t))\) as a discrete set of N particles:
$$\begin{aligned} p(\mathbf{a }_i(t)|\mathbf{x }(1,\dots ,t))\approx \sum _{n=1}^Nw_t^{(n)} \delta (\mathbf{a }_i(t)-\mathbf{a }^{(n)}_i(t)) \end{aligned}$$
where \(w_t^{(n)}\) is the weight associated with the n-th particle vector \(\mathbf{a }^{(n)}_i(t)\) for the i-th row of matrix \(\mathbf{a }(t)\) at time t. The Sequential Importance Sampling (SIS) [28] methodology provides a strategy to compute the weights. It has been shown [20] that the weights can be sequentially updated as follows:
$$\begin{aligned} w_t^{(n)}\propto w_{t-1}^{(n)} p(\mathbf{x }(t)|\mathbf{a }_i(t)^{(n)}) \end{aligned}$$
where the proportionality takes into account normalization factors. With this approach, at each time instant t we have a sample set \(\{\mathbf{a }_i(t)^{(n)},w_t^{(n)}\}\) for \(n=1,\dots ,N\) and for \(i=1,\dots ,R\) which can be used to estimate the pdf of the parameters and to infer information about the network. However, after some iterations, most of the particles will have a very low statistical weight, resulting in a lower exploration efficiency of the algorithm. To overcome this typical problem of sequential Monte Carlo methodologies, a step called Resampling is performed. The number of effective particles was defined in [29] as
$$\begin{aligned} N_{\text{eff}}=\frac{1}{\sum _{n=1}^N(w_t^{(n)})^2} \end{aligned}$$
If \(N_{\text{eff}}\) falls below a chosen threshold, Resampling is performed: particles with weights below a certain threshold are substituted by copies of particles with sufficiently high weights, and each particle in the new set is assigned the same weight 1/N. This results in a more effective exploration of the solution space, because only statistically relevant particles remain after this step. The estimation of VAR processes using SMC was developed in [30], and it was extended and applied to time-varying gene expression networks in [17].
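A minimal MATLAB sketch of the effective-sample-size check (Eq. 9) and of a multinomial resampling step is shown below; randsample requires the Statistics and Machine Learning Toolbox, and the 30% threshold anticipates the value reported for our implementation below.

% Effective number of particles (Eq. 9) and resampling (illustrative sketch).
N_eff = 1 / sum(w.^2);
if N_eff < 0.3*N                          % resampling threshold: 30% of the particle count
    idx = randsample(N, N, true, w);      % draw indices with probability proportional to the weights
    A   = A(idx);                         % keep only statistically relevant particles
    w   = ones(1, N) / N;                 % assign the same weight 1/N to every new particle
end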
Table 1 Schematic description of the PF algorithm
To sum up, the resulting algorithm can be schematically expressed as in Table 1. In our implementation the procedure is repeated \(N_r=100\) times, each repetition independent of the others, to provide a better exploration of the solution space, and resampling was performed when \(N_{\text{eff}}<30\%\) of the total number of particles. The final outputs of the algorithm are the \(\mathbf{a }(t)\) estimates, computed as the average over the \(N_r\) repetitions. In this implementation, the running time was proportional to the length T of the time series, the number of particles N, and the number of repetitions \(N_r\), and it increased quadratically with the number of network nodes [31]. The algorithm was implemented in MATLAB (Mathworks, Natick, MA, U.S.A.) R2017b.
To validate the proposed approach, two different synthetic networks were used.
The first was a network with \(R=6\) nodes, each with \(T=100\) time points, and stationary coefficients generated with the MATLAB function varm(), with a Signal-to-Noise Ratio (SNR) set to either \(\infty \) (\(\sigma _\eta =0\), ideal case) or 6 dB.
Another network with \(R=2\) and \(T=250\) was used to assess the PF capability to capture time-varying hidden parameters. In this case, \(a_{ij}\) coefficients were zero except for coefficient \(a_{21}\), whose value switched from 1 to \(-1\) with a period of 125 time points. The SNR was 10dB.
The first synthetic dataset allowed us to verify the reliability of the results of the PF and to decide the optimal values of the parameters; the second one allowed us to address the capability of the PF to track variations through time of the AR coefficients.
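The second synthetic dataset can be reproduced along the following lines; the noise scale in this MATLAB sketch is illustrative rather than matched exactly to the reported 10 dB SNR.

% 2-node synthetic network with a_21 switching between 1 and -1 every 125 samples (illustrative).
R = 2; T = 250; period = 125;
sigma_eta = 0.3;                  % illustrative noise level; the paper reports SNR = 10 dB
x = zeros(R,T); x(:,1) = randn(R,1);
for t = 2:T
    a      = zeros(R,R);
    a(2,1) = 1 - 2*mod(floor((t-1)/period), 2);   % +1 during the first block, -1 during the next, ...
    x(:,t) = a*x(:,t-1) + sigma_eta*randn(R,1);
end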
Real fMRI data
The proposed approach was also retrospectively applied to real fMRI data from healthy volunteers in two different experimental set-ups, with two and four participants respectively, acquired with two-dimensional single-shot echo-planar imaging (EPI) on a 7T MRI system (MR950, GE Healthcare, Chicago, IL, U.S.A.).
Motor task Time-series consisting of 240 time points with a temporal resolution of 2 s were acquired on two subjects with the following acquisition parameters: Time of Echo \((\hbox {TE}) = 23\,\hbox {ms}\), Flip Angle \((\hbox {FA}) = 60^{\circ }\), Field of View \((\hbox {FoV}) = (192\,\hbox {mm})^2\), acquisition matrix size = \(128\times 128\), 32 slices of thickness = 1.5 mm, resulting in isotropic spatial resolution of \((1.5 \,\hbox {mm})^3\) and Time of Repetition \((\hbox {TR}) = 2\,\hbox {s}\). During acquisition, the subjects' thumb- and index-fingertips were stimulated via a pneumatic device (Linari Engineering, Pisa, Italy). The subjects' task was to move the finger whenever it was stimulated. The fMRI data were motion-corrected using MCFLIRT [32]. Spatial smoothing was applied using a Gaussian kernel of FWHM 3.0 mm; each 4D dataset was demeaned and normalized by a single multiplicative factor; high-pass temporal filtering was applied to remove slow temporal drifts of the fMRI signal. Four ROIs were studied, covering primary somatosensory (S1), primary motor (M1), supplementary motor (SM) and parietal (P) cortices. All ROIs consisted of four voxels and were manually drawn on each subject on one slice only, to avoid potential slice timing confounds (Fig. 1). The resulting time-series of each ROI were obtained by averaging the four time-series of the individual voxels.
Visual task Time-series with a temporal resolution of 0.8 s were acquired in four datasets of either 300 time points (two subjects) or 600 (two subjects). Scanning parameters were \(\hbox {TE} = 21\,\hbox {ms}\), \(\hbox {FA} = 48^{\circ }\), \(\hbox {FoV} = (192\,\hbox {mm})^2\), acquisition matrix size \(= 64\times 64\), 22 slices of thickness \(= 3\,\hbox {mm}\), resulting in isotropic spatial resolution of (3 mm)\(^3\) and \(\hbox {TR} = 800\,\hbox {ms}\). The same preprocessing steps described above for the motor task, including motion correction, spatial smoothing, normalization and signal drift removal, were adopted. Subjects underwent a periodic visual stimulation alternating between black and white dots moving along spiral trajectories over a gray background (stimulation ON) and presentation of the gray background alone (stimulation OFF). Four ROIs were studied, covering the Lateral Geniculate Nucleus (LGN), the Middle temporal cortex (MT), the Primary Visual area (V1) and one control ROI in the temporo-parietal cortex (CTRL). ROIs of four voxels were manually drawn on each subject on one slice only.
ROIs drawn on one representative subject, representing primary somatosensory (S1), primary motor (M1), supplementary motor (SM) and parietal (P) cortices
The optimal order of the autoregressive model describing the time series was 1, as estimated by the Schwarz criterion [12, 25, 27].
The particle filter was applied not only to the original time series but also to the same fMRI data shuffled in time. This test allowed us to assess whether the results effectively depend on the temporal order of the data, i.e. to address causal dependency through time. Subsequently, the \(a_{ij}\) coefficients estimated by particle filtering were compared to the delayed correlation (DC) \(c_{ij}\) between signals \(x_i(t)\) and \(x_j(t-1)\), which reflects the time-invariant conditional correlation exerted by network node (ROI) j over node i. Furthermore, the results of the PF were also compared to coefficients estimated in a stationary framework, by fitting a multivariate linear regression model with stationary coefficients to the data.
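Both reference analyses are straightforward to compute; a MATLAB sketch of the time-shuffled control and of the delayed correlation is given below (x is the R-by-T matrix of ROI time series; names are illustrative).

% Temporal shuffling destroys causal dependencies between consecutive samples.
x_shuffled = x(:, randperm(T));
% Delayed correlation c_ij between x_i(t) and x_j(t-1).
c = zeros(R,R);
for i = 1:R
    for j = 1:R
        c(i,j) = corr(x(i,2:end)', x(j,1:end-1)');
    end
end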
The plot on top shows the causal dependency of area MT on area V1 for one of the four subjects in the visual task, as an example of PF estimates of conditional dependency coefficients between time-series. The plot in the center shows the actual stimulation pattern as a square wave, where the presence and the absence of stimulation are represented by 1 and 0 respectively. The bottom plot shows a time-shifted stimulation pattern, which has a half-period offset from the actual stimulation pattern. The t-test was run comparing connectivity values in correspondence of ones (stimulation ON) and zeros (stimulation OFF) in both cases. The second test allowed us to exclude spurious changes in connectivity values not due to the stimulation
With a t-test between coefficient values in the presence and in the absence of stimulation, we searched for statistically significant (p-value \(<0.05\)) changes in connectivity following the underlying stimulation pattern. As a control analysis, a second t-test was performed, simulating a stimulation pattern different from the actual one, which allowed us to exclude variations deriving from spurious fluctuations of the results, as explained in Fig. 2.
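A sketch of the two tests is given below, assuming a_ij_t is the estimated coefficient time course, stim a 0/1 vector encoding the stimulation blocks, and period the length of a stimulation cycle (all names are ours and illustrative).

% t-test between stimulation ON and OFF blocks, plus the half-period shifted control.
[~, p_true]    = ttest2(a_ij_t(stim == 1), a_ij_t(stim == 0));
stim_shifted   = circshift(stim, round(period/2));   % simulated, half-period offset stimulation pattern
[~, p_control] = ttest2(a_ij_t(stim_shifted == 1), a_ij_t(stim_shifted == 0));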
Experimental results and discussions
Scatter plots in Fig. 3 demonstrate that the conditional dependency coefficients estimated by PF in a stationary network satisfactorily correlate with the true coefficients, both in the noiseless synthetic dataset (Pearson's \(\rho = 0.96\)) and in the noisy scenario with \(\hbox {SNR} = 6\,\hbox {dB}\) (Pearson's \(\rho = 0.59\)).
Scatter plots that relate PF estimates (x axis) and true values (y axis) of the autoregressive model for a 6-node network with 100 time samples, in the absence of noise (left) and with \(\hbox {SNR} = 6\,\hbox {dB}\) (right). The lines are the results of a linear fit of the data: in the noiseless case the slope m and the offset q were 1.39 and \(-1.62\cdot 10^{-2}\), respectively; in the noisy case, \(m = 1.62\) and \(q = 8\cdot 10^{-3}\)
Time courses of the hidden parameters \(a_{ij}\) in the case of a 2-node network with non-stationary coefficient \(a_{21}\) alternating between 1 and \(-1\). Red lines represent the true values, while blue lines represent the estimates obtained by PF
The case of a network with one time-varying coefficient is shown in Fig. 4. The PF tracks the changes of the non-stationary coefficient \(a_{21}\), although the estimated values do not immediately follow the abrupt changes between 1 and \(-1\) and vice versa. All the other coefficients are correctly estimated to be close to the nominal null value.
Motor network
Red lines in Fig. 5 represent average values of the \(a_{ij}\) coefficients obtained on fMRI time series and the blue histogram represents the corresponding distribution of mean values of causal interactions for the permuted time series. Since temporal permutation suppresses the causal dependency between subsequent values, it was expected to observe zero-mean Gaussian distributed results, that is, no causal dependency. On the other hand, coefficients representing the causal relationship between two interacting brain areas should have values lying far away from the null distribution. This is what Fig. 5 shows, demonstrating that the particle filter effectively discriminates between unrelated and causally related time series.
Mean conditional dependencies between real fMRI data computed through the particle filter: in blue, the histogram of mean values obtained on time-series randomly permuted in time, while the red line shows the corresponding mean value obtained from the non-permuted time-series. Many of these values lie well outside the null distribution; therefore, they reflect the effective mean causal interaction and are not produced by chance
Scatter plot showing the relationship between mean PF estimates (horizontal axis) and delayed correlation (vertical axis) in the two sensory-motor experiments. Taking both results together, the Pearson's correlation coefficient \(\rho \) is 0.74, which corresponds to a statistically significant correlation with \(p < 0.001\). Slope and offset of the linear fit were 0.83 and 0.24 respectively
Plots in blue show the PF-estimated time courses of three representative hidden parameters \(a_{ij}\) in the case of a 4-node motor network, estimated from real fMRI data in one subject. The top panel depicts the coefficient describing the negligible causal effect exerted by area S1 over P. The central and bottom panels represent the causal effect exerted by the SM area over M1 and vice versa, respectively
The PF captured causal interactions between brain areas, which significantly correlated with a proxy measure of effective connectivity, that is, delayed correlation (DC) (\({p} < 0.001\), Pearson's correlation coefficient \(\rho = 0.74\), Fig. 6). In particular, in both subjects, the highest \(a_{ij}\) coefficients in both PF and delayed correlation were those which represent the conditional dependency of areas M1 and S1, in agreement with current knowledge of brain functioning during a sensory-motor task.
Figure 7 exemplifies the temporal evolution of brain connectivity through three representative \(a_{ij}\) coefficients in Subject 2. The top panel displays one representative coefficient involving the control ROI P, which is approximately 0. Our finding that the S1-on-P coefficient time series exhibits only small fluctuations around zero is in agreement with the ground truth that the parietal node is not involved in the motor task. The two bottom panels demonstrate the expected reciprocal influence between the M1 and SM areas. It is worth noting that the influences in the two directions, i.e. M1 over SM and vice versa, have different values, as a consequence of the adopted model, which allows a non-symmetric matrix of coefficients.
Visual network
The PF detected causal dependencies between brain areas of the visual network which also correlated with the delayed correlation (Pearson's \(\rho =0.70\) and \(p<0.001\), as shown in Fig. 8a). Interestingly, all four fMRI datasets showed statistically significant dynamic changes (\(p<0.05\)), with a pattern following the underlying stimulation, in the effective connectivity coefficient describing the influence of MT on V1, two areas known to take part in the processing of visual stimuli. As an example, Fig. 9 represents the statistically significant differences in the causal influence of MT on V1 in the presence or absence of the visual stimulus, observed consistently in all four fMRI datasets. Blue crosses show the quality parameter \(Q=T/\sigma \), where T is the temporal length of the time series and \(\sigma \) is the standard deviation of the data, taken as a measure of noise. Q values indicate a possible explanation for inter-dataset variability: the particle filter's performance is enhanced by the time-series length (T) and degraded by noise (quantified by \(\sigma \)). Not all connectivity coefficients share this same behavior; therefore, other factors might influence the results. Figure 10 represents one example of the absence of a detected causal relationship between area MT and the control region CTRL, and one example of the non-symmetrical causality exerted by area V1 over MT and vice versa.
a Scatter plot between mean PF estimates (horizontal axis) and delayed correlation (vertical axis) on the four visual experiments fMRI data. A \(p<0.001\) was obtained with a Pearson's correlation coefficient \(\rho =0.70\). The black line is the result of a linear fit. Slope and offset of the linear fit were 1.10 and 0.25 respectively. b Scatter plot between average PF estimates (horizontal axis) and stationary AR coefficients (vertical axis). The resulting Pearson's correlation coefficient \(\rho \) was 0.94, with a \(p<0.001\). The black line is the result of a linear fit, with slope and offset 1.2 and \(-0.004\) respectively
Comparison between presence (green bar) and absence (red bar) of visual stimulation for the mean \(a_{ij}\) coefficient representing the causal influence of MT on V1 in the four different datasets. Bars indicate the mean values and the standard deviation of the mean. Higher values of the coefficients are obtained in data sets with better quality parameter Q, as shown by blue crosses
Fig. 10: Plots in blue show the PF-estimated time courses of three representative hidden parameters \(a_{ij}\) in the case of the 4-node visual network, estimated from real fMRI data in one subject. The top panel depicts the coefficient describing the negligible causal effect exerted by area MT over the control area. The central and bottom panels represent the non-symmetrical causal effect exerted by the V1 area over MT and vice versa, respectively
Figure 8b shows the comparison between the results of the PF and stationary AR coefficients (Pearson's \(\rho =0.94\) and \(p<0.001\)). A good agreement was found between the two estimates. In addition, changes of the AR coefficients through time were searched for using a sliding-window approach: a multivariate linear regression model was fitted to consecutive and overlapping blocks of 20 time points, and variations in the regression coefficients in phase with the visual stimulation were searched for with the same strategy used for the estimates of the PF (Fig. 2). While in this way some coefficients were found to vary, those that did were not the same across the four datasets. Instead, the conditional dependency coefficients estimated with the PF representing the influence of area MT on area V1 varied consistently in all subjects, suggesting that the PF detects non-stationarity more reliably.
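The sliding-window reference analysis amounts to refitting a stationary VAR(1) on overlapping 20-sample blocks, for example as in the following sketch (x is the R-by-T matrix of ROI time series; names are illustrative).

% Stationary VAR(1) least-squares fit on overlapping windows of 20 time points (illustrative sketch).
win = 20;
A_win = zeros(R, R, T - win);
for t0 = 1:(T - win)
    seg = x(:, t0:t0+win-1);
    Y   = seg(:, 2:end)';   X = seg(:, 1:end-1)';   % regress x(t) on x(t-1) within the window
    A_win(:,:,t0) = (X \ Y)';                       % window-wise coefficient matrix
end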
Figure 8a with ROI-based encoding, explained in the legend above the figure: the same color refers to the same causing ROI and the same symbol to the same caused ROI. Black crosses are cluster centroids found through the MATLAB function kmeans(), labeled A, B and C as in the figure. The horizontal red line and vertical green line intersect at the point equidistant from centroids A and B
Figure 11 shows the relationship between the coefficients obtained by PF and DC, with explicit reference to the causal influence they refer to. We searched for the appropriate clustering of the data in Fig. 11. The number of clusters was chosen as the knee of the number-of-clusters vs distance-to-every-centroid curve. The optimal number was 3, and each cluster's centroid is plotted as a black cross in Fig. 11. All coefficients representing causal interactions involving the CTRL region (black symbols) belong to cluster A, that is, small values of DC and of the mean PF estimates, except for the self-causality terms, which all lie in the vicinity of the C centroid. Every non-diagonal coefficient involving ROIs LGN, MT and V1 belongs to cluster B. In particular, in Fig. 11, the horizontal red line and vertical green line intersect at the point equidistant from centroids A and B. All values representing the causal influence of MT on V1 (which vary following the stimulation pattern, represented by highlighted red diamonds in the figure) and that of V1 on MT (green stars) lie to the right of the green line and above the red line.
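The clustering step can be reproduced with the MATLAB function kmeans(); the sketch below uses the total within-cluster distance as the curve whose knee selects the number of clusters, which is our reading of the criterion described above (pf_mean and dc_mean stand for the plotted PF and delayed-correlation values and are illustrative names).

% Choose the number of clusters from the knee of the distance curve, then cluster with k = 3.
data = [pf_mean(:), dc_mean(:)];
totd = zeros(1, 8);
for k = 1:8
    [~, ~, sumd] = kmeans(data, k, 'Replicates', 10);
    totd(k) = sum(sumd);             % total distance of the points to their assigned centroids
end
% the knee of totd suggested 3 clusters in this analysis
[idx, C] = kmeans(data, 3, 'Replicates', 10);   % C contains the centroids A, B and C of Fig. 11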
Other implementations of SMC algorithms were previously proposed to investigate brain connectivity in fMRI data. Murray and Storkey [33] proposed a forward-backward Particle Filter using, as observation equation, a stochastic extension of the balloon model, which was proposed to describe the haemodynamics that follow brain activity [34]. In their study, the hidden parameters of the model turned out to be approximately constant, probably as a consequence of the complexity of the model itself [35, 36].
In a different study, Ahmad et al. [37] adopted a symmetric, linear, first-order, time-varying Autoregressive (TVAR) model and used a Rao-Blackwellized PF to estimate the temporal relationships among fMRI time-series representing four brain regions during resting state. The assumption of symmetry in the coupling coefficients, that is \(a_{ij}=a_{ji}\), reduced model complexity, but did not permit inference of either the directionality of the network or any possibly asymmetric cause/effect interaction between brain areas. Therefore, their approach cannot be used to investigate effective connectivity. Also, the results were not benchmarked against the outcomes of different analyses, and the resting-state paradigm did not allow any analysis of the temporal evolution of the results.
In our implementation on fMRI data, the time-averaged PF estimates were in agreement with a proxy measure of causality, that is, delayed correlation. Part of the mismatch between the proposed method and delayed correlation could be explained by the fact that the PF algorithm studies the network as a whole and produces estimates of \(a_{ij}\) coefficients that update at every time instant, while delayed correlation is a measure of pair-wise causality that does not take into account possible non-stationarities and spurious cause-effect relationships mediated by other nodes of the network.
In the first experimental setup involving the motor network, statistically significant changes in connectivity were not found, but the poor temporal resolution of the data (2s) may have prevented the detection of these changes.
On the contrary, in our study of the visual network statistically significant changes in connectivity were found in four different datasets, with a pattern following the underlying stimulation, without requiring any prior knowledge of the actual stimulation paradigm during the estimation process. These variations concerned the influence of MT on V1, which is consistent with our understanding of cortical processing in the early visual cortex [24]. The average behavior of the PF's results and a simpler regression estimate were in good agreement, but the PF also yielded consistent results across different datasets in the same experimental paradigm. Furthermore, while the PF is "blind", sliding-window analysis requires additional information, such as the timing of the stimulation paradigm, which in some cases may not be available.
The stability of the PF-estimated coefficients at larger time scales is in agreement with the regime of stationarity commonly adopted in functional and effective connectivity fMRI studies: indeed, our experiments demonstrate a good agreement between time-averaged values of the \(a_{ij}\) coefficients and both delayed correlation and stationary AR coefficients. This finding suggests that on large temporal scales the brain network has stable interactions among its nodes, under the assumption of linear first-order autoregression.
Mean conditional dependency coefficients were found to vary between different datasets. As Fig. 9 shows, in some cases these differences can be explained by the joint contribution of noise and temporal length of the data, i.e. the quality parameter \(Q=T/\sigma \). Brain haemodynamic responses, which vary not only among subjects but also between different areas in the same subject [38], may also play a role; this was not taken into account in this study.
We used a particle filter to test for and identify time-varying brain connectivity as evidenced in fMRI data. Our experiments confirmed the hypothesis of a time-varying brain connectivity pattern and gave evidence of non-symmetric connectivity. It was possible to detect statistically significant changes in cortical cause-effect relationships correlated with the underlying task-rest pattern during the fMRI acquisition.
Future studies should test the performance of the proposed algorithm in fMRI experiments with higher time resolution, namely \(<0.8 \,s\), and they should aim to unveil possibly asymmetric changes in effective connectivity among brain regions. Also, to minimize the impact of vascular dynamics and highlight neural ones, future studies should use more sophisticated experimental designs that enable a better control over the non-uniformity of brain haemodynamics across different areas [38,39,40].
As suggested by Bugallo and Djuric [41], the PF can be improved by a parallel implementation when dealing with complex systems, such as the brain. Ordinary brain connectivity analysis represents ROIs as time series obtained by averaging signals originating from several voxels of that region. This helps improve the Signal-to-Noise Ratio (SNR) but assumes some wide-scale connectivity features. Because of this wide-scale connectivity assumption, more reliable results may be achieved with a parallel combination of Particle Filters applied to single-voxel time series.
Our results, together with the possibility of refining the methodology, suggest that the proposed computational method, the Particle Filter, is capable of inferring time-varying effective connectivity from acquisitions of acceptable scan duration, without the need for any constraint or prior knowledge about the examined network or the timing of the underlying brain processes.
The fMRI data used in this study will be made available upon direct request to the authors.
Friston KJ (2011) Functional and effective connectivity: a review. Brain Connect 1:13–36
Stephan KE, Friston KJ (2010) Analyzing effective connectivity with functional magnetic resonance imaging. WIREs Cognit Sci 1:446–459
Greicius MD, Srivastava G, Reiss AL, Menon V (2004) Default mode network activity distinguishes Alzheimer's disease from healthy aging: evidence from functional MRI. Proc Natl Acad Sci USA 101:4637–4642
Greicius MD (2008) Resting-state functional connectivity in neuropsychiatric disorders. Curr Opin Neurol 21:424–430
Jones DT, Vemuri P, Murphy MC, Gunter JL, Senjem ML, Machulda MM et al (2012) Non-stationarity in the "resting brain's" modular architecture. PLoS ONE 7:e39731
van den Heuvel MP (2010) Exploring the brain network: a review on resting-state fMRI functional connectivity. Eur Neuropsychopharmacol 20:519–534
Greicius MD, Flores BH, Menon V, Glover GH, Solvason HB, Kenna H et al (2007) Resting-state functional connectivity in major depression: abnormally increased contributions from subgenual cingulate cortex and thalamus. Biol Psychiatry 62:429–437
Jirsa V, McIntosh AR (2007) Models of effective connectivity in neural systems. In: Jirsa V, McIntosh AR (eds) Handbook of brain connectivity, 5th edn. Springer, Berlin, pp 303–326
Honey G, Bullmore E (2004) Human pharmacological MRI. Trends Pharmacol Sci 25(7):366–374
Stephan KE, Harrison LM, Penny WD, Friston KJ (2004) Biophysical models of fMRI responses. Curr Opin Neurobiol 14:629–635
Gottesman II, Gould TD (2003) The endophenotype concept in psychiatry: etymology and strategic intentions. Am J Psychiatry 160:636–645
Roebroeck A, Formisano E, Goebel R (2005) Mapping directed influence over the brain using Granger causality and fMRI. Neuroimage 25:230–242
Desphande G, Sathian K, Hu X (2010) Effect of hemodynamic variability on Granger causality analysis of fMRI. Neuroimage 52:884–896
Leonardi N, Van De Ville D (2015) On spurious and real fluctuations of dynamic functional connectivity during rest. NeuroImage 104:430–436
Friston KJ, Harrison L, Penny W (2003) Dynamic causal modelling. Neuroimage 19:1273–1302
Geyer CJ (2011) Introduction to Markov chain Monte Carlo. In: Brooks S, Gelman A, Jones G, Meng X (eds) Handbook of Markov chain Monte Carlo. CRC Press, Taylor and Francis Group, Boca Raton
Ancherbak S, Kuruoğlu EE, Vingron M (2016) Time-dependent gene network modeling by Sequential Monte Carlo. IEEE/ACM Trans Comput Biol Bioinform 13:1183–1193
Ambrosi P, Costagli M, Kuruoglu EE, Biagi L, Buonincontri G, Tosetti M (2019) Investigating time-varying brain connectivity with functional magnetic resonance imaging using sequential Monte Carlo. In: 2019 27th European Signal Processing Conference (EUSIPCO). A Coruna: IEEE (p. 1–5)
Djuric PM, Kotecha JH, Zhang J, Huang Y, Ghirmai T, Bugallo MF, Miguez J (2003) Particle filtering. IEEE Signal Process Mag 20:19–38
Arulampalam MS, Maskell S, Gordon N, Clapp T (2002) A tutorial on particle filters for nonlinear/non-Gaussian Bayesian tracking. IEEE Trans Signal Process 50:174–188
Costagli M, Kuruoğlu EE (2007) Image separation using particle filters. Digital Signal Process 17:935–946
Wang Z, Kuruoğlu EE, Yang X, Xu Y, Huang TS (2011) Time varying dynamic Bayesian network for nonstationary events modeling and online inference. IEEE Trans Signal Process 59(4):1553–1568
Valdés-Sosa PA, Roebroeck A, Daunizeau J, Friston K (2011) Effective connectivity: influence, causality and biophysical modeling. Neuroimage 58:339–361
Gaglianese A, Costagli M, Bernardi G, Ricciardi E, Pietrini P (2012) Evidence of a direct influence between the thalamus and hMT+ independent of V1 in the human brain as measured by fMRI. Neuroimage 60:1440–1447
Casorso J, Kong X, Chi W, Van De Ville D, Yeo BTT, Liègeois R (2019) Dynamic mode decomposition of resting-state and task fMRI. Neuroimage 194:2–54
Bressler SL, Seth AK (2011) Wiener-Granger causality: a well-established methodology. Neuroimage 58(2):323–9
Gaglianese A, Costagli M, Ueno K, Ricciardi E, Bernardi G, Pietrini P, Cheng K (2015) The direct, not V1-mediated, functional influence between the thalamus and middle temporal complex in the human brain is modulated by the speed of visual motion. Neuroscience 284:833–844
Doucet A, Godsill S, Andrieu C (2000) On sequential Monte Carlo sampling methods for Bayesian filtering. Stat Comput 10:197–208
Liu JS, Chen R (1995) Blind deconvolution via sequential imputations. J Am Stat Assoc 90:567–576
Gençağa D, Kuruoğlu EE, Ertüzün A (2010) Modeling non-Gaussian time-varying vector autoregressive processes by particle filtering. Multidim Syst Sign Process 21:73
Mohammadi A, Asif A (2013) Distributed particle filter implementation with intermittent/irregular consensus convergence. IEEE Trans Signal Process 61(10):2572–2587
Jenkinson M, Bannister P, Brady M, Smith S (2002) Improved optimisation for the robust and accurate linear registration and motion correction of brain images. NeuroImage 17(2):825–841
Murray L, Storkey AJ (2008) Continuous time particle filtering for fMRI. Adv Neural Inf Process Syst 20:1049–1056
Buxton RB, Wong EC, Frank LR (1998) Dynamics of blood flow and oxygenation changes during brain activation: the balloon model. Magn Reson Med 39(6):855–864
Hettiarachchi IT, Mohamed S, Nahavandi S (2012) Identification of nonlinear fMRI models using auxiliary particle filter and Kernel smoothing method. In: 34th Annual International Conference of the IEEE EMBS, San Diego, California: IEEE. 28 Aug–1 Sept
Chambers M, Wyatt C (2011) An analysis of blood-oxygen-level-dependent signal parameter estimation using particle filters. In: 2011 IEEE International Symposium on Biomedical Imaging. New York: IEEE. Mar 30 (p. 250–253)
Ahmad MF, Murphy J, Vatansever D, Stamatakis EA, Godsill S. (2015) Tracking changes in functional connectivity of brain networks from resting-state fMRI using Particle Filters. IEEE International Conference on Acoustics, Speech and Signal Processing. New York: IEEE
Handwerker DA, Ollinger JO, D'Esposito M (2004) Variation of BOLD hemodynamic responses across subjects and brain regions and their effects on statistical analyses. Neuroimage 21:1639–1651
Duggento A, Passamonti L, Valenza G, Barbieri R, Guerrisi M, Toschi N (2018) Multivariate Granger causality unveils directed parietal to prefrontal cortex connectivity during task-free MRI. Sci Rep 8:1–11
Aguirre GK, Zarahn E et al (1997) Empirical analyses of BOLD fMRI statistics: II. Spatially smoothed data collected under null-hypothesis and experimental conditions. NeuroImage 5(3):199–212
Bugallo MF, Djuric PM (2008) Complex systems and particle filtering. In: 42nd Asilomar Conference on Signals, Systems and Computers. New York: IEEE
The authors thank Dr Sergiy Ancherbak for having shared with them his particle filtering code for time-dependent gene network modeling.
This work has been partially supported by grants "RC 2018-2020" and "5 per mille" to IRCCS Fondazione Stella Maris, funded by the Italian Ministry of Health.
Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy
Pierfrancesco Ambrosi
Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, and Maternal-and-Child Sciences, University of Genoa, Genova, Italy
Mauro Costagli
Laboratory of Medical Physics and Magnetic Resonance, IRCCS Stella Maris, Pisa, Italy
Mauro Costagli, Laura Biagi, Guido Buonincontri & Michela Tosetti
Tsinghua-Berkeley Shenzhen Institute, Data Science and Information Technology Center, Shenzhen, China
Ercan E. Kuruoğlu
Imago 7 Research Center, Pisa, Italy
Laura Biagi, Guido Buonincontri & Michela Tosetti
Information Science and Technology Institute (ISTI), National Council of Research (CNR), Pisa, Italy
Laura Biagi
Guido Buonincontri
Michela Tosetti
Conceived and designed the experiments: PA, MC, MT. Performed the experiments: MC, LB, GB. Analyzed the data: PA, MC. Contributed analysis tools: EK. Wrote the paper: PA, MC, EK. All authors read and approved the final manuscript
Correspondence to Pierfrancesco Ambrosi.
Ethics approval was issued by "Comitato per la sperimentazione clinica dei medicinali dell'Azienda Ospedaliero Universitaria Pisana", reference number 12697, and by "Comitato Etico Regionale per la Sperimentazione Clinica della Regione Toscana", reference number 32891. All subjects provided written informed consent.
Ambrosi, P., Costagli, M., Kuruoğlu, E.E. et al. Modeling brain connectivity dynamics in functional magnetic resonance imaging via particle filtering. Brain Inf. 8, 19 (2021). https://doi.org/10.1186/s40708-021-00140-6
Keywords: Brain connectivity, Particle filter, Sequential Monte Carlo, VAR model
Which updates during an equity crowdfunding campaign increase crowd participation?
Jörn Block1,2,3,
Lars Hornuf4,5 &
Alexandra Moritz1
Small Business Economics volume 50, pages 3–27 (2018)Cite this article
Start-ups often post updates during equity crowdfunding campaigns. However, little is known about the effects of such updates on crowd participation. We investigate this question by using hand-collected data from 71 funding campaigns and 39,399 investment decisions on two German equity crowdfunding portals. Using a combination of different empirical research techniques, we find that posting an update has a significant positive effect on the number of investments made by the crowd and the investment amount collected by the start-up. This effect does not occur immediately in its entirety; rather, it lags the update by a few days. Furthermore, the effect of updates loses statistical significance with the number of updates posted during a campaign. We also find that easier language used in updates increases crowd participation, whereas the length of updates has no effect. With respect to the update's content, we find that the positive effect can be attributed to updates about new developments of the start-up such as campaign developments, new funding, business developments, and cooperation projects. Updates on the start-up team, business model, product developments, and promotional campaigns do not have meaningful effects. Our paper contributes to the literature on the effects of information disclosure on equity crowdfunding participation. Furthermore, our results have practical implications for start-ups and their investor communication during equity crowdfunding campaigns.
Equity crowdfunding is an important tool for young and innovative start-ups to collect early-stage funding. Prior research has investigated the success drivers of equity crowdfunding campaigns and has shown that information provided by the start-up, such as the human and social capital of the founders, risks involved, and financial projections, has a positive influence on campaign success (Ahlers et al. 2015; Moritz et al. 2015; Vismara 2016b; Polzin et al. 2017). This information usually does not change during a crowdfunding campaign and is typically provided by the start-up before a campaign starts.
Our paper takes a more dynamic perspective than prior research by investigating the role of updates provided by start-ups during an equity crowdfunding campaign. We analyze how start-ups can use updates during the campaign to encourage the crowd to provide funding. This particular determinant of equity crowdfunding participation has been overlooked in the literature so far, and as such, there is an important gap on the effects of information disclosure on crowd participation (Ahlers et al. 2015; Moritz et al. 2015; Bernstein et al. 2017; Vismara 2016b; Polzin et al. 2017). Updates enable start-ups to signal their value to the crowd and to establish credibility and legitimacy during a crowdfunding campaign. We investigate three research questions: First, we analyze whether updates and their frequency have an influence on crowd participation and whether the effect occurs immediately or in a lagged form (Research Question 1 (RQ1)). Second, we investigate how the language complexity used in the updates and the length of the updates affect crowd participation (Research Question 2 (RQ2)). And finally, we look at the content of these updates to determine how the crowd reacts to different signals and information communicated via updates (Research Question 3 (RQ3)). Thus, we not only look at the effects of updates on funding participation per se but also at the effects of specific update characteristics and contents.
To answer our research questions, we investigate updates posted by start-ups during an equity crowdfunding campaign by using hand-collected data from 71 funding campaigns and 39,399 investment decisions on two German equity crowdfunding portals. We find an overall positive effect of posting an update on the number of investments by the crowd and the investment amount collected by the start-up. However, this positive effect does not occur immediately in its entirety; rather, it lags a few days behind the respective update. The effect increases with the ease of language used in the update. Furthermore, we find that the first updates have positive but only marginally significant effects, while the later updates have no significant effects on crowd participation. Large differences exist when distinguishing updates according to their content. Updates that deal with the start-up team, business model, product developments, and campaign promotions do not have meaningful effects on crowd participation. Instead, positive effects on funding participation can be attributed to updates about campaign development, new funding, business developments, and cooperation projects.
Our paper contributes to the entrepreneurial finance literature (for recent overviews see Block et al. 2017a, b). In particular, we contribute to research on the selection criteria of early-stage investors looking at a new type of investor—the crowd. It has been found that specific information, such as education of the entrepreneurial team, protection of intellectual property rights, the venture's network, and firm alliances, are important drivers for the investment decisions of professional early-stage investors such as venture capital funds (Audretsch et al. 2012; Baum and Silverman 2004; Block et al. 2014, 2017a, b; Busenitz et al. 2005; Franke et al. 2008; Jell et al. 2011). It has also been shown that start-ups use this information to signal their value to investors (Audretsch et al. 2012; Block et al. 2014; Connelly et al. 2011). Hence, our paper contributes to research about signals in entrepreneurial finance by looking at the specific context of crowdfunding and crowdinvestors as a new type of venture investor. Furthermore, we add to the growing research on crowdfunding and in particular on equity crowdfunding. Our paper extends this literature by taking a dynamic perspective, investigating how start-ups can signal their value during a crowdfunding campaign using updates as communication tools to increase the likelihood of successful campaigns.
In addition to its contribution to the academic literature, our paper's results also have practical implications for start-ups and crowdfunding platforms. For start-ups, it is worthwhile to learn more about the effects of updates on equity crowdfunding participation. By posting updates, start-ups can actively influence the chances of successfully completing their equity crowdfunding campaigns. Our results show, for example, that the specific content of an update is key, while simply posting more updates has little effect. Knowing which updates drive funding participation is crucial for start-ups to design an effective and successful communication in equity crowdfunding campaigns. For platforms, this information is important to encourage start-ups to publish updates with content valued by the crowd to increase the likelihood of a successful campaign and ultimately, the platforms' own business success.
The remainder of the paper is organized as follows. The next section provides the theoretical framework of our study and develops hypotheses. Section 3 introduces the data sources and the research techniques used to code and categorize the updates posted by the start-ups during the campaigns. Based on this, we introduce the variables used in the regression analysis and explain our empirical model. Section 4 presents the descriptive and multivariate results. The final two sections discuss our results, link them to the crowdfunding and entrepreneurial finance literatures, and summarize our contributions to theory and practice.
Theoretical framework and hypotheses
Signaling theory
Our theoretical framework is based on signaling theory, which is primarily concerned with reducing information asymmetries between two parties, where the better informed party sends a quality signal to the less informed party (Connelly et al. 2011). In a seminal article, Spence (1973) applied this theory to the labor market, demonstrating how job applicants can use their higher education as effective signals to reduce their potential employers' information deficits. Since then, signaling theory has been used in various research fields such as strategic management, entrepreneurship, labor economics, and human resource management (Connelly et al. 2011). The core concept of signaling theory is summarized in Fig. 1. The key elements are the signaler, the signal, the receiver, and the signaling environment. Signalers are information insiders who possess private information about an individual, a product, or an organization that is not available to outsiders (Spence 1973; Connelly et al. 2011). Signalers deliberately send positive signals to information outsiders to reduce information asymmetries and cause a reaction by the receiver, for example, the investment in a company (Certo 2003; Busenitz et al. 2005). However, for signals to be effective, they need to fulfill two main characteristics: First, they need to be observable because otherwise they would not be perceived by the receiver. Second, signals need to be costly, otherwise they would be too easy to fake or imitate (Spence 1973). Signaler and receiver have—at least in part—conflicting interests: The signaler would gain from sending inferior signals and therefore has an incentive to deceive the receiver (Ross 1977). As receivers are disadvantaged by acting on false signals, they learn to ignore these signals and perceive the signaler as dishonest (Connelly et al. 2011).
Concept of signaling theory
Signal effectiveness can be enhanced by communicating signals frequently and with a high signal consistency (Janney and Folta 2003; Fischer and Reuber 2014). This increases the chances that receivers capture the signal and are not confused by different signal contents (Gulati and Higgins 2003; Gao et al. 2008). This is directly related to the role of receivers' characteristics for signal effectiveness (Perkins and Hendry 2005). In addition to the required attention of receivers to capture the signal, different receivers are likely to interpret signals differently (Perkins and Hendry 2005). This signal translation might even result in a diversion of the signals' original intent (Branzei et al. 2004; Highhouse et al. 2007). Hence, signal clarity is another important characteristic of a signal so that the signaler can achieve the desired effect (Certo 2003; Warner et al. 2006). In this context, countersignals sent by receivers as feedback to the signaler can provide additional information about the effectiveness of the signal (Srivastava 2001; Connelly et al. 2011). Finally, the signaling environment can influence the signals' effectiveness. Distortions of the signal can occur, for example, whenever the signal medium reduces its observability (Carter 2006; Fischer and Reuber 2014). In addition, other receivers' interpretation can affect the effectiveness of signals. If a number of receivers interpret signals in a specific way, this might lead to imitation by others (McNamara et al. 2008; Connelly et al. 2011).
Updates by the start-ups as signals in crowdfunding
Visibility of updates and its effects on crowd participation
In the context of entrepreneurial finance, information asymmetries between a start-up's management team and potential investors play a major role. Ventures need to find a way to signal their quality to potential investors to establish legitimacy and credibility and to receive financing (Rao et al. 2008; Zimmerman and Zeitz 2002). In the specific setting of crowdfunding, start-ups aim to collect capital from a large number of mostly anonymous investors, who contribute small amounts of money via the Internet (Moritz et al. 2015; Belleflamme et al. 2014; Hemer et al. 2011; Hornuf and Schwienbacher 2016). The average crowdfunding investor is not likely to have the time, capacity, and incentive to investigate firms and their business model in detail (Ahlers et al. 2015; Lukkarinen et al. 2016). Due to the specific characteristics of crowdfunding, establishing personal relationships to reduce information asymmetries typical for business angel or venture capital investments (Landström 1992; Sapienza and Korsgaard 1996; Kollmann and Kuckertz 2006) is not feasible in equity crowdfunding markets. Hence, companies need to find alternative ways to communicate their value to the crowd.
Prior research found that updates provided by start-ups can increase funding success (Hornuf and Schwienbacher 2015; Kuppuswamy and Bayus 2017; Mollick 2014; Xu et al. 2014; Wu et al. 2015). Updates are a one-sided communication tool often used during a campaign as it can be applied flexibly by the start-up to provide additional information about the product, the start-up, or the campaign. Hence, referring to the concept of signaling theory (see Fig. 1), our focus in this study is on the signal communicated via updates to convey the start-up's value to the crowd. In line with prior research on reward-based crowdfunding (Mollick 2014; Kromidha and Robson 2016), we propose that updates in general have a positive effect on equity crowdfunding participation as they typically are highly visible and observable for potential investors. Even though updates might not always be costly for the signaler, they reduce search costs for investors. Hence, we expect:
H1: Updates provided by the start-up have a positive effect on crowd participation.
However, as updates are posted on the campaign website of the crowdfunding portal, potential investors only see the update if they visit the website. Therefore, start-ups and crowdfunding portals typically also communicate these updates in their social media channels or via newsletters to increase investors' awareness of the update. Furthermore, posting an update typically has no immediate effect on crowd participation because investors need some time to learn about the update and to pledge their money (Wheat et al. 2013; Mollick 2014; Kromidha and Robson 2016; Vismara 2016b). Hence, the visibility of updates and their effect on crowd participation is likely to be delayed by a few days.
H2: The effect of updates on crowd participation does not occur immediately in its entirety but is delayed by a few days.
In addition, it has been shown that the communication of credible signals is not a static but an ongoing process (Janney and Folta 2003). Signaling can be used to inform investors about the developments of the start-up. The optimal number of signals provided depends on the progress of the start-up since communicating the last credible signal (Janney and Folta 2003). Therefore, we expect that regularly using updates to send signals to the crowd has a positive effect on equity crowdfunding participation. However, during a crowdfunding campaign, which typically has a funding period of around two months, the new developments that can be communicated to investors are limited. An increasing number of updates might even be perceived by investors as unreliable or cheap talk as no further information value can be delivered (Perkins and Hendry 2005; Block et al. 2014). Therefore, we expect that the marginal value of updates will decrease as the updates no longer provide much additional value to potential investors (Janney and Folta 2003, 2006; Block et al. 2014). Hence, we suggest that a negative relationship exists between the number of updates posted and their effect on crowd participation:
H3: The effect of updates on crowd participation decreases with the number of updates posted by the start-up.
Clarity of updates and its effects on crowd participation
Signaling theory has shown that signals need to be visible and clear so that market participants are able to capture the information content of the signal (Certo 2003; Warner et al. 2006). The clarity of the signal directly relates to the interpretation by receivers: Members of a group of very heterogeneous receivers are more likely to translate the signal differently (Perkins and Hendry 2005; Connelly et al. 2011). As the receivers of signals in crowdfunding markets have been found to be very heterogeneous (Ahlers et al. 2015), the clarity of the signal is particularly important. Clarity, however, depends on the complexity of the language used in the updates. Hence, we propose that updates using complex language are more difficult to understand and lose clarity, and therefore their effectiveness as a signal:
H4: The effect of updates on crowd participation decreases with the complexity of the language used in the update.
Furthermore, previous research found that the length of descriptions in crowdfunding campaigns has a significant positive effect on the campaign outcome (Greiner and Wang 2010; Gao and Lin 2014). Longer descriptions can deliver more information about the project, the start-up, or the product and can help to reduce information asymmetries between the start-up and potential investors. Hence, we propose:
H5: The effect of updates on crowd participation increases with the length of the update.
Content of updates and its effects on crowd participation
Prior research in entrepreneurial finance found that the content of signals provided by the start-up plays an important role. Ventures can use a number of different signals to reduce information asymmetries by communicating their value to potential investors, such as the entrepreneurial team education, intellectual property rights, and the share of retained equity (Audretsch et al. 2012; Baum and Silverman 2004; Block et al. 2014; Busenitz et al. 2005).
Even though crowdfunding research is still young, a number of different signals have been found to have a positive effect on crowd participation. However, it needs to be considered that investors' motivations have been shown to depend on the specific crowdfunding model (Cholakova and Clarysse 2015; Lukkarinen et al. 2016; Vulkan et al. 2016; Polzin et al. 2017), which suggests that the effects of updates and the signals used also differ according to the crowdfunding model. Focusing on findings in relation to venture financing with a profit participation of investors, the content of these signals can be roughly summarized into information about the start-up's quality (i.e., the management team, its preparedness and openness, and the start-up's financials) and external credentials provided by third parties (i.e., through social networks, reputable investors, protection of intellectual property, reception of grants, and the reaction by the crowd). Table 1 provides an overview of the findings from prior crowdfunding research.
Table 1 Prior research about the effects of signals in equity crowdfunding
However, none of these studies focuses on the dynamic aspects of providing new and ongoing signals to investors using updates during equity crowdfunding campaigns. Our study is—to the best of our knowledge—the first to look into this important research question.
We refrain from ex ante assumptions, adopt an exploratory approach, and formulate the following open research question: "How does the type of content provided in the update influence crowd participation?". Figure 2 summarizes our three research questions and hypotheses.
Research questions (RQ) and hypotheses
Data and method
Our empirical analysis uses data from two German equity crowdfunding portals over the period from June 7, 2012, to April 27, 2015. The two portals are Seedmatch and Companisto, which are important players in the German equity crowdfunding market and together represent about 75% of the overall crowdfunding capital raised during our observation period. For Companisto, we hand collected data on all 36 campaigns that were completed until the end of the observation period. For Seedmatch, we were able to hand collect data on 29 of 78 campaigns. We could collect investment data on only about half of the campaigns for Seedmatch because the portal takes information about individual investments immediately off the website once the campaign terminates. We therefore could not collect data for the campaigns that ended before June 7, 2012. For some campaigns, we were simply too slow to hand collect the data from the website.
Some start-ups such as Meine-Spielzeugkiste ran two campaigns on the same portal. Furthermore, Aoterra, Controme, Ledora, Payme, Protonnet, and Riboxx reached their respective funding limits quickly and subsequently decided to raise more capital. On average, it took these start-ups six days to initiate the campaign again. We have counted these rounds as independent campaigns, as investors could not anticipate that a second round would quickly follow the end of the first round and thus most likely did not adapt their investment behavior accordingly. Overall, we were able to analyze 39,399 investment decisions within 71 unique funding campaigns. In line with Kuppuswamy and Bayus (2017), we then constructed a panel data set that aggregates the number of investments in a particular campaign on a given day. The time dimension of the panel data set is the duration of the campaign in days, while the cross-sectional dimension refers to the campaigns.
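As an illustration of this construction, the individual investment decisions can be aggregated into a campaign-day panel along the following lines; this is a MATLAB sketch with assumed variable names, and the original analysis may of course have used different tooling.

% Aggregate individual pledges into a campaign-day panel (illustrative sketch).
% campaign_id : campaign identifier of each investment decision
% pledge_day  : campaign day (1, 2, ...) on which the investment was made
% amount      : amount pledged with each investment decision
[grp, camp, day] = findgroups(campaign_id, pledge_day);
n_investments    = splitapply(@numel, amount, grp);   % first dependent variable
capital_pledged  = splitapply(@sum,   amount, grp);   % second dependent variable
panel            = table(camp, day, n_investments, capital_pledged);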
Dependent variables
In our empirical analysis, we use two different dependent variables: the number of investments and the amount of capital pledged during an equity crowdfunding campaign on a given day. This allows us to investigate the effects of updates on the number of crowd investments as well as the amount of money pledged.
Explanatory variables
To investigate H1 and H2, we consider the variable Update, which measures the number of updates posted during a campaign on a given day. In different specifications, the variable is lagged by 1 day or alternatively measures the number of updates that were posted during the course of one week. To investigate the frequency by which a start-up posts updates during the course of a campaign as outlined in H3, we consider the variable Update Number, which captures the number of updates that have been previously posted by the start-up during a particular crowdfunding campaign. Furthermore, to investigate H4, we use the Flesch Readability Index (Flesch Index) that measures the language complexity of an update (Flesch 1948). More precisely, we use the "reading ease" rating of the Flesch index defining a seven-item scale, where 1 corresponds to a Flesch index of 0–30 (very difficult language) and 7 to a Flesch index of 91–100 (very easy language) (Courtis 1995; Flesch 1948). Finally, in order to test H5 we consider the variable Words, which captures the text length of an update.
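For the Flesch-based variable, the reading-ease score of each update is mapped onto the seven-item scale; a sketch of this banding with the standard Flesch cut-offs is shown below. Since the updates are in German, the German adaptation of the reading-ease formula would be used to obtain flesch_score in practice, and the band boundaries shown are an assumption based on the standard Flesch bands.

% Map a Flesch reading-ease score (0-100) onto the seven-item scale (1 = very difficult, 7 = very easy).
edges       = [0 30 50 60 70 80 90 100];      % assumed standard Flesch band boundaries
flesch_item = discretize(flesch_score, edges);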
To identify the information included in the updates posted by the start-up, we develop a coding system that categorizes the information contained in the campaign updates. For this purpose, we used the software package MaxQDA, which allowed us to analyze qualitative data. In a first step, we generated an initial list of update categories based on our prior knowledge and previous research on investment decisions in equity crowdfunding (Hornuf and Schwienbacher 2015; Moritz et al. 2015; Moritz and Block 2015; Vismara 2016a, 2016b). During the coding process, we expanded this initial coding system by using an iterative and inductive process to cover all relevant information provided by the updates (Miles and Huberman 1994). Then, we merged similar categories and finally developed a system of categories with higher dimensions (Gioia et al. 2012; Miles and Huberman 1994). Our final coding system consists of nine categories of updates: Team, Business Model, External Certification, Product Development, Cooperation Projects, Campaign Development, New Funding, Business Development, and Promotions.
The category Team contains all the information about the start-up's founders and employees, such as their education, age, and personal interests. In the category Business Model, we coded updates on the start-up's business model, market, business idea, future business orientation, and expansion aspirations. External Certification comprises updates where the start-ups informed investors about external certification through expert opinions, recommendations, awards won by the start-up, patent applications, press coverage, and participation in trade fairs, conferences, or organized talks. The category Product Development contains information about the start-up's product, target customers, new product innovations, and introduction of prototypes. Information about new cooperation projects by the start-up is coded in the category Cooperation Projects. Campaign Development contains information about developments of the crowdfunding campaign, such as the current number of investors, funding amount, and announcements about increases in the funding limit. Financing provided by other market participants, such as business angels, venture capitalists, or the government (i.e., public grants or subsidies), is included in the category New Funding. The category Business Development contains information about the financial development of the start-up (e.g., sales development and turnover) as well as customer updates (e.g., the number of customers or new customers). Finally, the category Promotions contains information about promotions, networking via social media, current events to meet crowd investors, and appeals to investors to support the company with marketing activities or recommendations. A detailed overview of the categories, including some examples, is provided in Table 5 of the Appendix.
To ensure that our coding system is reliable and coherent, detailed explanations were provided for each category. Then, a second researcher, who was not involved in the project, coded 20% of the updates. This allowed us to ensure that the coding categories were exhaustive and that they had a high degree of objectivity. The inter-rater reliability using Cohen's Kappa indicated good agreement between us and the external researcher (the average Cohen's Kappa for all categories was 0.65) (Fleiss et al. 2003; Landis and Koch 1977). To permit even higher consistency in the coding, the coding system was then discussed with the external researcher and adapted when necessary. Afterwards, both researchers again coded all 234 updates of the 71 equity crowdfunding campaigns. Once again, an inter-rater reliability analysis was conducted to ensure coding consistency between the researchers. Again, we used Cohen's Kappa as a statistical measure of inter-rater reliability for the coding of the nine main update categories. Cohen's Kappa for the individual categories ranged from 0.70 to 0.96; the average Cohen's Kappa of 0.84 for all categories indicates excellent agreement between us and the external researcher.
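As a sketch of the reliability check described above, Cohen's Kappa for a single update category can be computed from two coders' ratings as follows. The ratings shown are hypothetical and only illustrate the calculation; they are not the study's actual codings.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's Kappa for two raters coding the same items into nominal categories."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed agreement: share of items that both raters coded identically.
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected agreement under independence, from each rater's marginal shares.
    counts1, counts2 = Counter(rater1), Counter(rater2)
    p_expected = sum((counts1[c] / n) * (counts2[c] / n) for c in set(counts1) | set(counts2))
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical codings of ten updates for one category (1 = category present, 0 = absent).
coder_a = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]
coder_b = [1, 0, 0, 1, 0, 0, 0, 1, 0, 1]
print(round(cohens_kappa(coder_a, coder_b), 2))  # about 0.58
```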
Control variables and fixed effects
Following prior research on funding dynamics in equity crowdfunding (Hornuf and Schwienbacher 2015; Vismara 2016b), we included several control variables in our baseline regression. To account for campaign participation before the focal day, we control for the amount of capital raised during the crowdfunding campaign until the previous day (Ln(Amount)_{0→t−1}). While this variable indicates how much capital has already been invested, it does not capture how many investors supported the campaign and whether more investors might provide a signal regarding the collective wisdom of the crowd. Since we cannot uniquely identify investors across portals by using their name and location (i.e., there might be two or more investors named Thomas Mueller living in Munich and investing on the two portals), we consider the number of investments to be the best available proxy for the number of investors that have invested until the previous day (#Investments_{0→t−1}).
Hornuf and Schwienbacher (2015) show that investments slow down under a first-come, first-served funding mechanism once the funding goal is reached. We therefore include the dummy variable Post Funded, which equals 1 if the funding goal is reached and 0 otherwise. In line with Cumming and Zhang (2016) as well as Kuppuswamy and Bayus (2017), we include a variable that captures the number of active campaigns across four major German equity crowdfunding portals, including the two portals in our data set as well as Innovestment and United Equity (Active Campaigns).Footnote 2 Similarly, we include a variable that captures the number of investments made on these four portals on a given day (Competing Investments). This variable is included to capture potential "Blockbuster Effects" (Kickstarter 2012; Doshi 2014), where a popular and widely visible campaign steals investors away from other campaigns. Vismara (2016a) shows that equity retention influences crowdfunding success. Since start-ups on German equity crowdfunding portals do not issue equity shares but some mezzanine form of investment (equity shares are too expensive to transfer, as a costly notary must be involved and the platform requires an authorization by the German Securities Regulator), we calculate the quasi-equity share offered to the crowd. This is the percentage of the minimum amount of capital requested over the pre-money valuation of the start-up (Equity Share). Finally, to control for portal characteristics, we include a dummy variable (Seedmatch) that is equal to 1 if the campaign is run on Seedmatch and 0 if it is run on Companisto.
However, given that we might not have controlled for all relevant explanatory variables, we also consider a range of fixed effects. First, we include campaign fixed effects as they help us to remove any time-invariant heterogeneity from the focal campaign, such as the type of financial contract used, specific clauses that have been defined, or the industry of the start-up. Second, we include various fixed effects that capture the time of the investments, such as the day of the week, the month of the year, the respective year, and the day of the funding cycle. While endogeneity in the form of missing variables is an inevitable problem in empirical research, the controls we consider here should capture the most relevant observable and unobservable missing variables.
Empirical models
Because the first dependent variable is measured as a count variable and because its unconditional variance suffers from overdispersion, we estimate a negative binomial regression model. The results of a Hausman test led us to dismiss the random-effects estimator as being inconsistent. We therefore estimate a fixed effects negative binomial (FENB) model, which is a pseudo panel estimator that allows us to include time-invariant measures into the regression, such as the variables Equity Share and Seedmatch. In our baseline specification, we estimate the following FENB model:
$$ \Pr\left(y_{i1},y_{i2},\dots,y_{iT}\right)=F\Big(\mathrm{Ln}(\mathrm{Amount})_{i,0\to t-1}+\#\mathrm{Investments}_{i,0\to t-1}+\mathrm{PostFunded}_{it}+\mathrm{ActiveCampaigns}_{t}+\mathrm{CompetingInvestments}_{t}+\mathrm{EquityShare}_{i}+\mathrm{Seedmatch}_{i}+\mathrm{Update}_{it}+\mathrm{UpdateNumber}_{it}+\mathbf{DoW}_{t}+\mathbf{MoY}_{t}+\mathbf{Year}_{t}+\mathbf{DoIC}_{it}+\mathrm{Campaign}_{i}\Big) $$
where y is the number of investments in campaign i on day t. F(.) represents a negative binomial distribution function as in Baltagi (2008). We specify campaign fixed effects denoted by Campaign. DoW is a vector of dummies that indicates the day of the week. MoY is a vector of dummies for the month of the year. Year is a vector of dummies for the respective years. In line with Kuppuswamy and Bayus (2017), DoIC is a vector of dummies that indicates the first and the last 7 days of the funding campaign.
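A minimal sketch of how such a count model could be estimated is given below. It uses a pooled negative binomial with campaign dummies as a rough stand-in for the conditional fixed effects negative binomial estimator reported in the paper (the two are not identical), and the file name and all column names are hypothetical placeholders rather than the authors' actual data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical campaign-day panel; the file and column names are placeholders.
df = pd.read_csv("campaign_days.csv")

formula = (
    "n_investments ~ ln_amount_lag + n_investments_lag + post_funded"
    " + active_campaigns + competing_investments + update + update_number"
    " + C(day_of_week) + C(month) + C(year) + C(campaign_id)"
)

# Pooled negative binomial with campaign dummies (an LSDV-style approximation
# of the FENB model described in the text).
nb_fit = smf.negativebinomial(formula, data=df).fit(disp=False)

# Exponentiated coefficients correspond to the incidence rate ratios
# reported for the count models (e.g., in Table 3).
print(np.exp(nb_fit.params).round(3))
```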
For the second dependent variable, which measures the amount of capital that was pledged on a given day, we run a simple OLS panel regression. The results of a modified Hausman test again led us to dismiss the random-effects estimator as being inconsistent. We therefore run a standard OLS fixed effects panel data model. However, this model does not allow us to identify time-invariant campaign effects, as the time-invariant heterogeneity will be differenced out by the estimator. We therefore can no longer identify the effect of the variables Equity Share and Seedmatch. The baseline OLS model takes the following form:
$$ \mathrm{Ln}(\mathrm{Amount})_{it}=\mathrm{Ln}(\mathrm{Amount})_{i,0\to t-1}+\#\mathrm{Investments}_{i,0\to t-1}+\mathrm{PostFunded}_{it}+\mathrm{ActiveCampaigns}_{t}+\mathrm{CompetingInvestments}_{t}+\mathrm{Update}_{it}+\mathrm{UpdateNumber}_{it}+\mathbf{DoW}_{t}+\mathbf{MoY}_{t}+\mathbf{Year}_{t}+\mathbf{DoIC}_{it}+\mathrm{Campaign}_{i}+u_{it}. $$
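Analogously, the OLS fixed effects model can be sketched via its least-squares-dummy-variable form, which is numerically equivalent to the within estimator; again, the column names are placeholders and this is only an illustration, not the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("campaign_days.csv")  # same hypothetical panel as above

ols_formula = (
    "ln_amount ~ ln_amount_lag + n_investments_lag + post_funded"
    " + active_campaigns + competing_investments + update + update_number"
    " + C(day_of_week) + C(month) + C(year) + C(campaign_id)"
)

# The campaign dummies absorb all time-invariant heterogeneity, which is why
# Equity Share and the Seedmatch dummy can no longer be identified here.
ols_fit = smf.ols(ols_formula, data=df).fit()
print(ols_fit.params.round(3))
```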
Descriptive statistics
For the 71 equity crowdfunding campaigns over the period from June 7, 2012, to April 27, 2015, we observe 5210 campaign days, which are defined as days when investors had the opportunity to invest in a specific equity crowdfunding campaign. Overall, the start-ups running these campaigns posted 234 updates, with an average of 3.30 updates per campaign. However, while some start-ups did not post a single update, others used this tool extensively to inform the crowd and encourage investor participation. During the campaign of MyParfum, for instance, a total of 14 updates were posted. Interestingly, some update categories were posted more frequently than others. For example, investors were more often informed about the business model, promotional campaigns, the latest product developments, and the external certifications of the start-up than about recent campaign developments or the start-up team. Start-ups rarely disclosed updates on new funding. During most of the campaign days, no update was posted. On average, start-ups posted an update every 25 campaign days, and occasionally even two updates were posted on the same day. The mean update contained 289 words (median: 248 words).
The 71 campaigns in our sample were run by 63 unique start-ups. Some start-ups ran multiple campaigns on different or sometimes the same portal. All of these start-ups are located in Germany. Most of them operate in the information and communication, wholesale and retail, as well as manufacturing sectors. Regarding the campaign development, on 86% of the campaign days, the start-ups had already surpassed the funding goal, and the founders of the start-up thus knew that they would ultimately receive the capital (Post Funded). Table 2 also shows that, on average, 7.56 investments were made on a campaign day and that 5886.74 € were pledged by the crowd. On some days, the crowd invested as much as 1.5 million € in a single campaign, while on other days, they withdrew 10,000 € of investments. On average, 436.85 investments were made before an investor decided to invest. On a given campaign day, 40.37 investments were made in the overall market, and 6.55 campaigns were run in addition to the campaign under consideration. Table 6 in the Appendix shows a correlation table that includes the dependent variables and the main explanatory variables.
Table 2 Summary statistics
Results of the baseline regression models
Table 3 shows the regression results for our baseline models. For the FENB model, we report incident rate ratios, which can be interpreted as multiplicative effects or semi-elasticities.Footnote 3 In line with prior research (Hornuf and Schwienbacher 2015), we find that 100 additional investments until the previous day reduce the number of investments on a given day by 9% and the amount invested by 32%. Once the campaign was successfully funded, the investment amounts on a given day decrease on average by 63%. Moreover, when other campaigns received 100 additional investments, the campaign under consideration received 8% more investments and 24% more capital was pledged. This finding may result from a general boom in the equity crowdfunding sector after periods of extensive media coverage positively reporting about this method of financing. Portal differences exist, with Seedmatch campaigns attracting on average 60 to 82% fewer investments than Companisto campaigns, depending on the specification, which is most likely due to the fact that the minimum investment ticket of Seedmatch is 50 times larger than the 5 € minimum ticket of Companisto. Furthermore, while the day of the week dummies show that less investment activity takes place during the weekend and that the campaign days follow the L-shaped pattern described in Hornuf and Schwienbacher (2015), no consistent pattern emerges for any of the other fixed effects.
Table 3 Baseline regressions
In accordance with H1, we find that updates positively influence crowd participation. While the effect does not take place immediately, we find a significant effect on the number of investments the following day. Furthermore, updates posted over the course of 1 week influence not only the number of investments but also the amount invested, with one more update increasing the number of investments by 16% and the amount invested by 40%. We interpret this as strong support for our H1 and H2. In the next step, we investigate whether the frequency by which updates are posted exhibits a particular relationship. Figure 3 reports the predictive margins for the number of updates posted during a campaign. It shows that while the effect is positive except for such high numbers as 14 updates, the standard errors are steadily increasing with the number of updates, stifling any statistically significant effect as more updates are posted. Thus, we do not find support for H3.
Predictive margins regarding the effects of updates on crowd participation. The figure reports predictive margins for the number of updates posted in an equity crowdfunding campaign. It reveals that the first updates have a positive but only marginally significant effect, while later updates have no significant effect on crowd participation
Update categories and their effect on crowd participation
First, as outlined in RQ2 and RQ3, Table 4 investigates how the complexity of the language used in updates, the length of the updates, and the content of updates influence crowd participation. As in the previous regressions, we do not find any immediate effect for our explanatory variables. The evidence shows, however, that updates with an easier language increase crowd participation as measured by the number of investments the following day. No such effect, however, exists for the amount invested. Furthermore, the average ease of the language over the course of the last week affected neither the number nor the amount of investments, indicating that an easier language attracts more investors right after the update was posted but not over a longer time period. Hence, we only find partial support for H4. Regarding the length of updates, we do not find any statistically significant effect on crowd participation. Hence, H5 is not supported by our results.
Table 4 Effects of update categories on crowd participation
In a next step, we investigate RQ3 by analyzing the type of information communicated via updates. In line with our previous findings, none of the different update categories had an immediate effect on crowd participation. However, we find a positive and significant effect for New Funding, with one more update of this category increasing the number of investments by 45% the following day. Furthermore, Cooperation Projects also has a positive effect on the amount invested by the crowd, leading to a 52% increase of the amount invested the following day. When analyzing the update activities that took place over the course of 1 week, we find that information about Campaign Developments, New Funding, and Business Development attract additional investors, thereby increasing the number of investments by 17%, 51%, and 19%, respectively. When looking at the long-run effects of updates over the course of an entire week, we also find that information about New Funding and Business Development both increase the amount of funding on subsequent days by 58%. External Certification, in contrast, has a negative effect on the amount invested, which might arise because updates on external certificates provide a dubious signal to the crowd: these start-ups are unable to obtain funding other than equity crowdfunding even though they have obtained an external certificate such as a patent.
Finally, in Tables 7 and 8 in the Appendix, we investigate RQ2 and RQ3 in more detail by analyzing the effect of updates in different industries (communication, wholesale, and retail, as well as manufacturing) and different portals (Companisto and Seedmatch). The results indicate that a simple language is particularly important in crowdfunding campaigns from the manufacturing domain, while the information content of an update appears to be less important there. By contrast, during the course of one week, information about Cooperation Projects, New Funding, Business Development, and Promotional Campaigns had a particularly positive and statistically significant effect in the wholesale and retail industry. Finally, while Cooperation Projects had a positive effect on the amount pledged on Seedmatch, information about Campaign Development and Business Development appeared to be more important for the crowd that invested on Companisto. These results show that start-ups must consider whether a specific information content works for the campaign under consideration and whether the crowd on a particular portal is likely to respond to it.
Discussion, limitations, and further research
We began with the question of whether and to what extent updates posted by start-ups during an equity crowdfunding campaign influence crowd participation. We argued that updates are a tool to signal the start-up's quality to potential investors during a crowdfunding campaign. Based on this main research objective, we further investigated whether the frequency of updates has a positive effect on crowd participation and whether the effect occurs immediately or in a lagged form (RQ1). Our results show that there is indeed a statistically and economically significant effect of updates on crowd participation. Posting an update increases both the number of investments by the crowd and the investment amount collected. However, this effect does not occur immediately in its entirety; rather, it lags behind the update by a few days. In addition, our findings suggest that even though investors value signals provided by start-ups, an increasing number of updates seems to result in a loss of credibility and might even be perceived as cheap talk, as additional updates no longer have a statistically significant effect on crowd participation.
Furthermore, we argued that the clarity of updates is important for crowd participation (RQ2). We measured the clarity of updates in terms of language complexity and update length. We find that the clarity of updates does not seem to be of particular relevance to the crowd. Even though our findings suggest that easier readability has a positive effect on crowd participation the day after the update was posted, this effect is lost after a few days. This result suggests that crowd investors do not seem particularly concerned about language complexity. However, the readability of most updates was relatively homogeneous, with a Flesch index between 40 and 65 (categories 2–4), targeting readers with a good or very good education (Courtis 1995). Only a small number of updates had a readability index in the category "very difficult" and none in the categories "easy" and "very easy." This result might be due to the good education of both the crowd investors and the entrepreneurs posting the updates, as well as the crowd's expectation that start-ups communicate in a more sophisticated way to demonstrate their preparedness to establish and run a successful company (Mollick 2014; Ahlers et al. 2015).
While recent research shows that entrepreneurs strategically engage in update communication (Dorfleitner et al. 2017), our results reveal that the type of information provided in the update plays an important role for equity crowdfunding participation. Updates that inform the crowd about new funding and business developments seem to be valued highly by investors. Updates providing information about campaign developments and cooperation projects also have a positive effect on crowd participation. In contrast to previous findings, investors did not seem to value information about the start-up team (Ahlers et al. 2015; Moritz et al. 2015; Bernstein et al. 2017). This result might be explained by the fact that the start-up team typically does not change during a crowdfunding campaign and that investors expect to receive information about consistent factors of the start-up directly at the beginning of the campaign, e.g., in the business plan. This interpretation is supported by the results regarding the business model. Altogether, our results suggest that investors seem to value updates signaling additional and dynamic aspects about the start-up's quality during a crowdfunding campaign and do not value information which should have been provided at the funding start. The negative effect of external certifications on crowd investments is rather surprising and indicates that the crowd does not find expert opinions, success stories, awards received, and patents obtained credible and valuable. However, a deeper analysis of this category with a larger data set is required to better understand the crowds' reaction to this information.
Our paper contributes to the entrepreneurial finance and crowdfunding literatures. We contribute to research on the selection criteria of early-stage investors. It has been found that start-ups use specific information such as the quality of their management, intellectual property, the venture's network, and firm alliances to signal their quality to investors (Audretsch et al. 2012; Baum and Silverman 2004; Block et al. 2014; Franke et al. 2008; Jell et al. 2011). In our analysis, we have shown that specific signals in crowdfunding campaigns also seem to enhance the likelihood of a successful campaign. Hence, our paper expands research on signaling theory by analyzing effective signals within updates during equity crowdfunding campaigns. In addition, our paper contributes to the small but growing literature on the effects of information disclosure on equity crowdfunding participation (Vismara 2016b; Ahlers et al. 2015; Moritz et al. 2015; Moritz and Block 2015; Bernstein et al. 2017). So far, this literature has not taken into account that start-ups can also provide or disclose information to the crowd while running an equity crowdfunding campaign. Our analysis takes a dynamic approach to this issue and investigates these disclosure effects, considering updates that are given during ongoing crowdfunding campaigns.
This paper is not without limitations, which provide fruitful avenues for further research. Although we consider two different portals, the sample size of 71 funding campaigns and 39,399 investment decisions is still relatively small. Our dataset is also slightly biased: extremely successful crowdfunding campaigns, in which the funding limit was reached within a few hours, simply left no time (or need) to publish updates. The sample size does not allow us to build larger subgroups of start-ups from different industries, countries, and development stages. Future research could collect larger samples of funding campaigns and investigate potential moderation effects related to start-up or campaign characteristics. We would expect, for example, to see stronger positive effects of updates on patents and successful prototypes in technology-intensive industries than in other industries. Our subsample of start-ups in technology-intensive industries is too small to investigate such moderation effects. Moreover, with a larger sample of start-ups and campaigns, one could compare lone founder start-ups with team start-ups. It might very well be that updates on new team members have particularly meaningful effects for lone founder start-ups, especially when the founder lacks technological and/or business competences. Another possible avenue for further research is to extend the research about the effects of updates on crowd participation to reward-based crowdfunding (Colombo et al. 2015; Mollick 2014; Xu et al. 2014). Mollick (2014), for example, has shown that projects with updates are more likely than other projects to attract funding from the crowd. However, he does not distinguish between different types of updates. Given the particularities of reward-based crowdfunding and its strong focus on products and projects, we would expect updates with information about project and product developments to have particularly strong effects.
Implications for practice
Our paper's results are important for start-ups seeking equity crowdfunding. Knowing which updates drive funding participation is crucial for start-ups when designing an effective and successful investor communication and social media strategy for their equity crowdfunding campaigns. By posting updates, start-ups can actively influence their campaigns' chances of success. The crowd seems particularly sensitive to verifiable and business-related information about the development of the start-up since the funding start, such as new funding and business developments, whereas information about the underlying business model, team, and promotional activities does not provide much additional value. In this sense, the crowd seems to behave like professional investors who focus on verifiable, business-related, and cash-flow relevant additional information as decision criteria for their investments (Boocock and Woods 1997). This information is also important for crowdfunding platforms. By encouraging start-ups to publish specific types of updates that can increase the likelihood of successful crowdfunding campaigns, the platforms can improve their own business success.
As we do not observe the start-ups over a longer time period, we cannot evaluate whether the signals sent during the campaign are reliable and costly for the signaller. Hence, we have to exclude the cost dimension from our analysis.
We do not consider the portals Innovestment and United Equity in our analysis, as the former does not allow founders to post updates on the portal website and because we simply did not observe updates during the running of the campaigns for the latter.
For example, the coefficient of Competing Investments in Table 3 Model 1 is 1.08. It indicates that a one-unit increase of the explanatory variable (which is measured in units of 100 competing investments) corresponds to a 1.08 times change in the dependent variable. In this case, the dependent variable (the number of investments per day) increases by 8% if 100 more competing investments are made in other campaigns. On the other hand, the coefficient of #Investments (which is measured in units of 100 previous investments) is 0.91. This time, the coefficient indicates that a one-unit increase of the explanatory variable corresponds to a 0.91 times change in the dependent variable. Thus, the dependent variable decreases by 9% if 100 more investments are made by the crowd until the previous day.
Agrawal, A., Catalini, C., & Goldfarb, A. (2015). Crowdfunding: geography, social networks, and the timing of investment decisions. Journal of Economics & Management Strategy, 24(2), 253–274. doi:10.1111/jems.12093.
Ahlers, G. K., Cumming, D., Günther, C., & Schweizer, D. (2015). Signaling in equity crowdfunding. Entrepreneurship Theory and Practice, 39(4), 955–980. doi:10.2139/ssrn.2161587.
Audretsch, D. B., Bönte, W., & Mahagaonkar, P. (2012). Financial signaling by innovative nascent ventures: the relevance of patents and prototypes. Research Policy, 41(8), 1407–1421. doi:10.1016/j.respol.2012.02.003.
Baltagi, B. (2008). Econometric analysis of panel data. West Sussex: Wiley.
Baum, J. A. C., & Silverman, B. S. (2004). Picking winners or building them? Alliance, intellectual, and human capital as selection criteria in venture financing and performance of biotechnology startups. Journal of Business Venturing, 19(3), 411–436. doi:10.1016/S0883-9026(03)00038-7.
Belleflamme, P., Lambert, T., & Schwienbacher, A. (2014). Crowdfunding: tapping the right crowd. Journal of Business Venturing, 29(5), 585–609. doi:10.1016/j.jbusvent.2013.07.003.
Bernstein, S., Korteweg, A. G., & Laws, K. (2017). Attracting early stage investors: evidence from a randomized field experiment. Journal of Finance, 72(2), 509–538. doi:10.1111/jofi.12470.
Block, J. H., De Vries, G., Schumann, J. H., & Sandner, P. (2014). Trademarks and venture capital valuation. Journal of Business Venturing, 29(4), 525–542. doi:10.1016/j.jbusvent.2013.07.006.
Block, J., Colombo, M., Cumming, D., & Vismara, S. (2017a). New players in entrepreneurial finance and why they are there. Small Business Economics, forthcoming. doi:10.1007/s11187-016-9826-6.
Block, J., Fisch, C., & van Praag, M. (2017b). The Schumpeterian entrepreneur: a review of the empirical evidence on the antecedents, behavior, and consequences on innovative entrepreneurship. Industry and Innovation, 31(8), 793–801. doi:10.1080/13662716.2016.1216397.
Boocock, G., & Woods, M. (1997). The evaluation criteria used by venture capitalists: evidence from a UK venture fund. International Small Business Journal, 16(1), 36–57. doi:10.1177/0266242697161003.
Branzei, O., Ursacki-Bryant, T. J., Vertinsky, I., & Zhang, W. (2004). The formation of green strategies in chinese firms: matching corporate environmental responses and individual principles. Strategic Management Journal, 25(11), 1075–1095. doi:10.1002/smj.409.
Busenitz, L. W., Fiet, J. O., & Moesel, D. D. (2005). Signaling in venture capitalist—new venture decisions: does it indicate long-term venture outcomes? Entrepreneurship Theory and Practice, 29(1), 1–12. doi:10.1111/j.1540-6520.2005.00066.x.
Carter, S. M. (2006). The interaction of top management group, stakeholder, and situational factors on certain corporate reputation management activities. Journal of Management Studies, 43(3), 1145–1176. doi:10.1111/j.1467-6486.2006.00632.x.
Certo, S. T. (2003). Influencing initial public offering investors with prestige: signaling with board structures. Academy of Management Review, 28(3), 432–446. doi:10.5465/AMR.2003.10196754.
Cholakova, M., & Clarysse, B. (2015). Does the possibility to make equity investments in crowdfunding projects crowd out reward-based investments? Entrepreneurship Theory and Practice, 39(1), 145–172. doi:10.1111/etap.12139.
Colombo, M. G., Franzoni, C., & Rossi-Lamastra, C. (2015). Internal social capital and the attraction of early contributions in crowdfunding. Entrepreneurship Theory and Practice, 39(1), 75–100. doi:10.1111/etap.12118.
Connelly, B. L., Certo, S. T., Ireland, R. D., & Reutzel, C. R. (2011). Signaling theory: a review and assessment. Journal of Management, 37(1), 39–67. doi:10.1177/0149206310388419.
Courtis, J. K. (1995). Readability of annual reports: Western versus Asian evidence. Accounting, Auditing & Accountability Journal, 8(2), 4–17. doi:10.1108/09513579510086795.
Cumming, D. & Zhang, Y. (2016). Are crowdfunding platforms active and effective intermediaries? SSRN Working Paper, No. 2882026. Available at SSRN: https://ssrn.com/abstract=2882026.
Dorfleitner, G., Hornuf, L. & Weber, M. (2017). Dynamics of investor communication in equity crowdfunding, mimeo.
Doshi, A. (2014). The impact of high performance outliers on two-sided platforms: evidence from crowdfunding, SSRN Working Paper, No. 2422111. Available at SSRN: http://ssrn.com/abstract=2422111.
Fischer, E., & Reuber, A. R. (2014). Online entrepreneurial communication: mitigating uncertainty and increasing differentiation via twitter. Journal of Business Venturing, 29(4), 565–583. doi:10.1016/j.jbusvent.2014.02.004.
Fleiss, J. L., Levin, B., & Paik, M. C. (2003). Statistical methods for rates and proportions (3rd ed.). Hoboken: Wiley.
Flesch, R. (1948). A new readability yardstick. Journal of Applied Psychology, 32(3), 221–233. doi:10.1037/h0057532.
Franke, N., Gruber, M., Harhoff, D., & Henkel, J. (2008). Venture capitalists' evaluations of start-up teams: trade-offs, knock-out criteria, and the impact of VC experience. Entrepreneurship Theory and Practice, 32(3), 459–483. doi:10.1111/j.1540-6520.2008.00236.x.
Gao, Q., & Lin, M. (2014). Linguistic features and peer-to-peer loan quality: a machine learning approach. SSRN Working Paper, No. 2446114. Available at SSRN: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2446114.
Gao, H., Darroch, J., Mather, D., & MacGregor, A. (2008). Signaling corporate strategy in IPO communication: a study of biotechnology IPOs on the NASDAQ. The Journal of Business Communication, 45(1), 3–30. doi:10.1177/0021943607309349.
Gioia, D. A., Corley, K. G., & Hamilton, A. L. (2012). Seeking qualitative rigor in inductive research: notes on the Gioia methodology. Organizational Research Methods, 16(1), 15–31. doi:10.1177/1094428112452151.
Greiner, M. E., & Wang, H. (2010). Building consumer-to-consumer trust in E-finance marketplaces: an empirical analysis. International Journal of Electronic Commerce, 15(2), 105–136. doi:10.2753/JEC1086-4415150204.
Gulati, R., & Higgins, M. C. (2003). Which ties matter when? The contingent effects of interorganizational partnerships on IPO success. Strategic Management Journal, 24(2), 127–144. doi:10.1002/smj.287.
Hemer, J., Schneider, U., Dornbusch, F., & Frey, S. (2011). Crowdfunding und andere Formen Informeller Mikrofinanzierung in der Projekt- und Innovationsfinanzierung. Stuttgart: Fraunhofer Verlag.
Highhouse, S., Thornbury, E. E., & Little, I. S. (2007). Social-identity functions of attraction to organizations. Organizational Behavior and Human Decision Processes, 103(1), 134–146. doi:10.1016/j.obhdp.2006.01.001.
Hornuf, L., & Schwienbacher, A. (2015). Portal design and funding dynamics in crowdinvesting. Research Papers in Economics, 9(15) and SSRN Working Paper, No. 2612998. Available at SSRN: http://ssrn.com/abstract=2612998.
Hornuf, L., & Schwienbacher, A. (2016). Crowdinvesting: angel investing for the masses? In H. Landström & C. Mason (Eds.), Handbook of Research on Business Angels (pp. 381–397). Cheltenham: Edward Elgar.
Janney, J. J., & Folta, T. B. (2003). Signaling through private equity placements and its impact on the valuation of biotechnology firms. Journal of Business Venturing, 18(3), 361–380. doi:10.1016/S0883-9026(02)00100-3.
Janney, J. J., & Folta, T. B. (2006). Moderating effects of investor experience on the signaling value of private equity placements. Journal of Business Venturing, 21(1), 27–44. doi:10.1016/j.jbusvent.2005.02.008.
Jell, F., Block, J. H., & Henkel, J. (2011). Innovativität als Kriterium bei Venture-Capital-Investitionsentscheidungen. Kredit und Kapital, 44(4), 509–541. doi:10.3790/kuk.44.4.509.
Kickstarter (2012). Blockbuster Effects. Available at: http://www.kickstarter.com/blog/blockbuster-effects.
Kim, K., & Viswanathan, S. (2013). The experts in the crowd: the role of reputable investors in a crowdfunding market. TPRC 41: The 41st Research Conference on Communication, Information and Internet Policy. Available at SSRN: http://ssrn.com/abstract=2258243.
Kollmann, T., & Kuckertz, A. (2006). Investor relations for start-ups: an analysis of venture capital investors' communicative needs. International Journal of Technology Management, 34(1), 47–62. doi:10.1504/IJTM.2006.009447.
Kromidha, E., & Robson, P. (2016). Social identity and signalling success factors in online crowdfunding. Entrepreneurship & Regional Development, 28(9–10), 605–629. doi:10.1080/08985626.2016.1198425.
Kuppuswamy, V., & Bayus, B. L. (2017). Crowdfunding creative ideas: The dynamics of project backers. In D. Cumming & L. Hornuf (Eds.), The Economics of Crowdfunding: Startups, Portals, and Investor Behavior (forthcoming). London: Palgrave Macmillan.
Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174. doi:10.2307/2529310.
Landström, H. (1992). The relationship between private investors and small firms: an agency theory approach. Entrepreneurship & Regional Development, 4(3), 199–223. doi:10.1080/08985629200000012.
Lukkarinen, A., Teich, J. E., Wallenius, H., & Wallenius, J. (2016). Success drivers of online equity crowdfunding campaigns. Decision Support Systems, 87, 26–38. doi:10.1016/j.dss.2016.04.006.
McNamara, G. M., Haleblian, J., & Dykes, B. J. (2008). The performance implications of participating in an acquisition wave: early mover advantages, bandwagon effects, and the moderating influence of industry characteristics and acquirer tactics. Academy of Management Journal, 51(1), 113–130. doi:10.5465/AMJ.2008.30755057.
Miles, M., & Huberman, A. M. (1994). Qualitative data analysis: an expanded sourcebook (2nd ed.). Thousand Oaks: Sage.
Mohammadi, A., & Shafi, K. (2017). Gender differences in the contribution patterns of equity-crowdfunding investors gender differences in the contribution patterns of equity-crowdfunding investors. Small Business Economics, forthcoming. doi:10.1007/s11187-016-9825-7.
Mollick, E. R. (2014). The dynamics of crowdfunding: an exploratory study. Journal of Business Venturing, 29(1), 1–16. doi:10.1016/j.jbusvent.2013.06.005.
Moritz, A., & Block, J. H. (2015). Crowdfunding: a literature review and research directions. In: Block, J. H., & Kuckertz, A. (Series Eds.), Brüntje, D., & Gajda, O. (Vol. Eds.), FGF Studies in Small Business and Entrepreneurship: Vol. 1. Crowdfunding in Europe – State of the Art in Theory and Practice (pp. 25–53). Cham: Springer Science & Business Media.
Moritz, A., Block, J. H., & Lutz, E. (2015). Investor communication in equity-based crowdfunding: a qualitative-empirical study. Qualitative Research in Financial Markets, 7(3), 309–342. doi:10.1108/QRFM-07-2014-0021.
Perkins, S. J., & Hendry, C. (2005). Ordering top pay: interpreting the signals. Journal of Management Studies, 42(7), 1443–1468. doi:10.1111/j.1467-6486.2005.00550.x.
Polzin, F., Toxopeus, H., & Stam, E. (2017). The wisdom of the crowd in funding. Information heterogeneity and social networks of crowdfunders. Small Business Economics, forthcoming. doi:10.1007/s11187-016-9829-3.
Ralcheva, A., & Roosenboom, P. (2016). On the road to success in equity crowdfunding. SSRN Working Paper, No. 2727742, Available on SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2727742.
Rao, R. S., Chandy, R. K., & Prabhu, J. C. (2008). The fruits of legitimacy: why some new ventures gain more from innovation than others. Journal of Marketing, 72(4), 58–75.
Ross, S. A. (1977). The determination of financial structure: the incentive signalling approach. Bell Journal of Economics, 8(1), 23–40. doi:10.2307/3003485.
Sapienza, H., & Korsgaard, M. (1996). Procedural justice in entrepreneur-investor relations. Academy of Management Journal, 39(3), 544–574. doi:10.2307/256655.
Spence, M. (1973). Job market signaling. The Quarterly Journal of Economics, 87(3), 355–374. doi:10.2307/1882010.
Srivastava, J. (2001). The role of inferences in sequential bargaining with one-sided incomplete information: some experimental evidence. Organizational Behavior and Human Decision Processes, 85(1), 166–187. doi:10.1006/obhd.2000.2936.
Vismara, S. (2016a). Equity retention and social network theory in equity crowdfunding. Small Business Economics, 46(4), 579–590. doi:10.1007/s11187-016-9710-4.
Vismara, S. (2016b). Information cascades among investors in equity crowdfunding. Entrepreneurship Theory and Practice, forthcoming. doi:10.1111/etap.12261.
Vulkan, N., Åstebro, T., & Sierra, M. F. (2016). Equity crowdfunding: a new phenomena. Journal of Business Venturing Insights, 5, 37–49. doi:10.1016/j.jbvi.2016.02.001.
Warner, A. G., Fairbank, J. F., & Steensma, H. K. (2006). Managing uncertainty in a formal standards-based industry: a real options perspective on acquisition timing. Journal of Management, 32(2), 279–298. doi:10.1177/0149206305280108.
Wheat, R. E., Wang, Y., Byrnes, J. E., & Ranganathan, J. (2013). Raising money for scientific research through crowdfunding. Trends in Ecology & Evolution, 28(2), 71–72. doi:10.1016/j.tree.2012.11.001.
Wu, S., Wang, B., & Li, Y. (2015). How to attract the crowd in crowdfunding? International Journal of Entrepreneurship and Small Business, 24(3), 322–334. doi:10.1504/IJESB.2015.067465.
Xu, A., Yang, X., Rao, H., Fu, W. T., Huang, S. W., & Bailey, B. P. (2014). Show me the money! An analysis of project updates during crowdfunding campaigns. In Proceedings of the 32nd Annual ACM Conference for Human Factors in Computing Systems, ACM: 591–600. Available at: https://pdfs.semanticscholar.org/34a6/bb3c6ed524f0cdb56cd7c11fe8159ef595a5.pdf.
Zimmerman, M. A., & Zeitz, G. J. (2002). Beyond survival: achieving new venture growth by building legitimacy. Academy of Management Review, 27(3), 414–431. doi:10.5465/AMR.2002.7389921.
Open access funding was provided by the Max Planck Society. This article evolved as part of the research project "Crowdinvesting in Germany, England and the USA: Regulatory Perspectives and Welfare Implications of a New Financing Scheme", which was supported by the German Research Foundation (Deutsche Forschungsgemeinschaft) under the grant number HO 5296/1-1. The authors thank Silvio Vismara, and the participants of the 20th G-Forum (Handelshochschule Leipzig) and the 4th Crowdinvesting Symposium (Max Planck Institute for Innovation and Competition). Gerrit Engelmann provided excellent research assistance. All errors are our own.
Department of Management, University of Trier, Universitätsring 15, 54296, Trier, Germany
Jörn Block & Alexandra Moritz
Department of Applied Economics, Erasmus University Rotterdam, P.O. Box 1738, 3000 DR Rotterdam, Netherlands
Jörn Block
Erasmus Research Institute of Management (ERIM), Erasmus University Rotterdam, P.O. Box 1738, 3000 DR Rotterdam, Netherlands
University of Trier, Department of Economics, Behringstrasse 21, 54296, Trier, Germany
Lars Hornuf
Max Planck Institute for Innovation and Competition, Marstallplatz 1, 80539, Munich, Germany
Alexandra Moritz
Correspondence to Lars Hornuf.
Table 5 Definitions of variables
Table 6 Correlation matrix: update categories
Table 7 Effects of update categories by industry sector
Table 8 Effects of update categories by portal
Block, J., Hornuf, L. & Moritz, A. Which updates during an equity crowdfunding campaign increase crowd participation?. Small Bus Econ 50, 3–27 (2018). https://doi.org/10.1007/s11187-017-9876-4
Issue Date: January 2018
Entrepreneurial finance
Investor communication
This morning, I had the plan to write something about the historical figure behind St. Nicolaus (Santa Claus for his friends) who in Germany fills children's shoes with sweets and small presents in the night to December 6th. On my way to IUB, I had heard a radio program about him: He lived in the fourth century somewhere in what is now Turkey, was a bishop, and provided three sisters, who were so poor that they would have had to prostitute themselves, with balls of gold so they could marry. Some 700 years after his death, some knights brought his body to Bari in Italy to save it from the Arabs, and then parts of his body were distributed all over Europe. The character of this saint also changed a lot over the centuries, from being the saint of millers to the saint of drinkers (apparently, the Russian word for getting drunk is derived from his name) to the saint of children.
But this is not what I am going to talk about.
Rather, I would like to point out this news: Heise is a German publishing company that produces what are, not only in my mind, by far the best computer journals here. They also have a news ticker, which I think is comparable to Slashdot, and which hosts a discussion forum. Now a court (Landgericht Hamburg) has ordered the Heise publishing house to make sure that there is no illegal content in the forum (and not only to delete entries when it is pointed out to them that they are illegal). Otherwise, they could be fined by any lawyer ('Abmahnung').
The court ruled in the case of a posting providing a script to run a simple denial-of-service attack against the server of a company that was discussed in the original article. The court decided that Heise must make sure that no such illegal content is distributed via their website. Heise will challenge this ruling at the next higher level.
But if it prevails, it means that in Germany anybody providing any possibility for users to leave comments is potentially at risk of being fined, no matter if it is a forum, a guest book, or the comment section of a blog: You can simply post an anonymous comment with some illegal content and then sue the provider of the website for publishing it. This would be the end of any unmoderated discussion on the German part of the internet. Just another case where a court shows complete ignorance of the workings of the internet.
So, comment, as long as I still let you!
(note: I had written this yesterday, but due to a problem with blogspot.com I could not post it until today)
What is not a duality
A couple of days ago, Sergey pointed me to a paper Background independent duals of the harmonic oscillator by Viqar Husain. The abstract promises to show that a class of topological and thus background independent theories is dual to the harmonic oscillator. Sounds interesting. So, what's going on? This four and a half page paper starts out with one page discussing the general philosophy, how important GR's lesson to look for background independence is, and how great dualities are. The holy grail would be to find a background independent theory that has some classical, long wavelength limit in which it looks like a metric theory. For dualities, the author mentions the Ising/Thirring model duality and of course AdS/CFT. The latter already involves a metric theory in terms of an ordinary field theory, but the AdS theory is not background independent; it is an expansion around AdS, and one has to maintain the AdS symmetries at least asymptotically. So he looks for something different.
So what constitutes a duality? Roughly speaking, it means that there is a single theory (defined in an operational sense: the theory is the collection of what one could measure) that has at least two different-looking descriptions. For example, there is one theory that can either be described as type IIB strings on an AdS5 x S5 background or as strongly coupled, large N, N=4 gauge theory. Husain gives a more precise definition when he claims:
Two [...] theories [...] are equivalent at the quantum level. "Equivalent" means that there is a precise correspondence between operators and quantum states in the dual theories, and a relation between their coupling constants, at least in some limits.
Then he goes on to show that there is a one to one map between the observables in some topological theories and the observables of the harmonic oscillator. Unfortunately, such a map is not enough for a duality in the usual sense. Otherwise, all quantum mechanical theories with a finite number of degrees of freedom would be dual to each other. All have equivalent Hilbert spaces and thus operators acting on one Hilbert space can also be interpreted as operators acting in the other Hilbert space. But this is only kinematics. What is different between the harmonic oscillator and the hydrogen atom say is the dynamics. They have different Hamiltonians. By the above argument, the oscillator Hamiltonian also acts in the hydrogen atom Hilbert space but it does not generate the dynamics.
So what does Husain do concretely? He focusses on BF theory on space-times of the globally hyperbolic form R x Sigma for some Euclidean compact 3-manifold Sigma. There are two fields, a 2-form B and an (abelian for simplicity) 1-form A with field strength F=dA. The Lagrangian is just B wedge F. This theory does not need a metric and is therefore topological.
Classically, the equations of motion are dB=0 and F=0. For quantization, Husain performs a canonical analysis. From now on, indices a,b,c run over 1,2,3. He finds that epsilon_abc B_bc is the canonical momentum for A_a and that there are first class constraints setting F_ab=0 and the spatial dB=0.
Observables come in two classes O1(gamma) and O2(S) where gamma is a closed path in Sigma and S is a closed 2-surface in Sigma. O1(gamma) is given by the integral of A over gamma, while O2(S) is the integral of B over S. Because of the constraints, these observables are invariant under deformations of S and gamma and thus only depend on homotopy classes of gamma and S. Thus one can think of O1 as living in H^1(Sigma) and O2 as living in H^2(Sigma).
Next, one computes the Poisson brackets of the observables and finds that two O1's or two O2's Poisson commute while {O1(gamma),O2(S)} is given in terms of the intersection number of gamma and S.
As the theory is diffeomorphism invariant, the Hamiltonian vanishes and the dynamics are trivial.
Basically, that's all one could (should) say about this theory. However Husain goes on: First, he specialises to Sigma = S1 x S2. This means (up to equivalence) there is only one non-trivial gamma (winding around S1) and one S (winding around the S2). Their intersection is 1. Thus, in the quantum theory, O1(gamma) and O2(S) form a canonical pair of operators having the same commutation relations as x and p. Another example is Sigma=T3 where H^1 = H^2 = R^3 so this is like 3d quantum mechanics.
Husain chooses to form combinations of these operators like for creation and annihilation operators for the harmonic oscillator. According to the above definition of "duality" this constitutes a duality between the BF-theory and the harmonic oscillator: We have found a one to one map between the algebras of observables.
What he misses is that there is a similar one to one map to any other quantum mechanical system: One could directly identify x and p and use that for any composite observables (for example for the particle in any complicated potential). Alternatively, one could take any orthogonal generating system e1, e2,... of a (separable) Hilbert space and define ladder operators a+ mapping e(i) to e(i+1) and a acting in the opposite direction. Big deal. This map lifts to a map for all operators acting on that Hilbert space to the observables of the BF-theory. So, for the above definition of "duality" all systems with a finite number of degrees of freedom are dual to each other.
What is missing of course (and I should not hesitate to say that Husain realises that) is that this is only kinematical. A system is not only given by its algebra of observables but also by the dynamics or time evolution or Hamiltonian: One has to single out one of the operators in the algebra as the Hamiltonian of the system (leaving issues of convergence aside, strictly one only needs time evolution as an automorphism of the algebra and can later ask if there is actually an operator that generates it. This is important in the story of the LQG string but not here).
For BF-theory, this operator is H_BF=0 while for the harmonic oscillator it is H_o = a^+ a + 1/2. So the dynamics of the two theories have no relation at all. Still, Husain makes a big deal out of this by claiming that the harmonic oscillator Hamiltonian is dual to the occupation number operator in the topological theory. So what? The occupation number operator is just another operator with no special meaning in that system. But even more, he stresses the significance of the 1/2: The occupation number doesn't have it, and if for some (unclear) reason one were to take that operator as the generator of something, there would not be any zero point energy. And this might be relevant for the cosmological constant problem.
What is that? There is one (as it happens background independent) theory that has a Hamiltonian. But if one takes a different, random operator as the Hamiltonian, it happens to have its smallest eigenvalue at 0. What does that have to say about the cosmological constant? Maybe one should tell these people that proper dualities identify more than just the structure of the observable algebra (without dynamics). But, dear reader, be warned that in the near future we will read or hear that background independent theories have solved the cosmological constant problem.
Let me end with a question that I would really like to understand (and probably, there is a textbook answer to it): If one quantises a system the way we have done it for the LQG string, one does the following: One singles out special observables, say x and p (or their exponentials), and promotes them to elements of the abstract quantum algebra (the Weyl algebra in the free case). Then there are automorphisms of the classical algebra that get promoted to automorphisms of the quantum algebra in a straightforward way. For the string, those were the diffeomorphisms, but take simply the time evolution. Then one uses the GNS construction to construct a Hilbert space and tries to find operators in that Hilbert space that implement those automorphisms: Let a_t be the automorphism in the algebra sending observable O to a_t(O) and p the representation map that sends algebra elements to operators on the Hilbert space. Then one looks for unitary operators U(t) (or their hermitian generators) such that
p( a_t(O) ) = U(t)^-1 p(O) U(t)
In the case of time evolution, this yields the quantum Hamilton operator.
However, there is an ambiguity in the above procedure: If U(t) fulfils the above requirement, so does e^(i phi(t)) U(t) for any real number phi(t). Usually, there is an additional requirement as t comes from a group (R in the case of time translations but Diff(S^1) in the case of the string) and one could require that U(t1) U(t2) = U(t1 + t2) where + is the group law. This does not leave much room for the t-dependence of phi(t). In fact, in general it is not possible to find phi(t) such that this relation is always satisfied. In that case we have an anomaly and this is exactly the way the central charge appears in the LQG string case.
Assume now that there is no anomaly. Then it is still possible to shift phi by a constant times t (in case of a one dimensional group of automorphisms, read: time translation). This does not affect any of the relations about the implementation of the automorphisms a_t or the group representation property. But in terms of the Hamiltonian, this is nothing but a shift of the zero point of energy. So, it seems to me that none of the physics is affected by this. The only way to change this is to turn on gravity because the metric couples to this in form of a cosmological constant.
Am I right? That would mean that any non-gravitational theory cannot say anything about zero point energies because they are only observable in gravity. So if you are studying any theory that does not contain gravity, you cannot make any sensible statements about zero point energies or the cosmological constant.
Fixing radios
As everybody knows, a gauge invariance is the only thing that needs to be fixed if it is not broken. So broken radios should be fixed. Here is a nice article, which somebody posted to a noticeboard here at IUB, about how this problem would be approached by an experimental biologist. You could read some analogies to string theory into it, but you could also just read it for fun.
Sudoku types
You surely have seen these sudoku puzzles in newspapers: In the original version, it is a 9x9 grid with some numbers inserted. The problem is to fill the grid with the numbers 1 to 9 such that in each row, column and 3x3 block each digit appears exactly once.
In the past I was mildly interested in them; I had done perhaps five or six over several weeks, mostly the ones in Die Zeit. But the last couple of days I was back in the UK where this is really a big thing. And our host clearly is an addict, with books all over the house. So I myself did a couple more of them. And indeed, there is something to it.
But what I wanted to point out is that I found several types of ways to approach these puzzles. This starts from "I don't care about puzzles, especially if they are about numbers". This is an excellent attitude because it saves you lots of time. However, sudokus are about permutations of nine things and it just happens that these are usually numbers, but this is inessential to the problem. A similar approach was taken by a famous Cambridge physicist who expressed that he found "solving systems of linear equations" not too entertaining. Well, either he has a much deeper understanding of sudokus than me or he has not really looked at a single one to see that linear equations are probably of no help at all.
But the main distinction (and that probably tells about your degree of geekiness) is in my mind: How many sudokus do you solve before you write a program that does it? If the answer is 0 you are really lazy. You could object that if you enjoy solving puzzles why would you delegate that fun to your computer, but this just shows that you have never felt the joy of programming. Here is my go at it.
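For the terminally geeky end of that scale, here is a minimal backtracking sketch in Python (just an illustration of the idea, not the program I actually linked above):

# Minimal sudoku backtracking solver (a sketch, not my original program).
# The grid is a list of 81 ints, 0 meaning "empty".

def candidates(grid, pos):
    row, col = divmod(pos, 9)
    block = 27 * (row // 3) + 3 * (col // 3)          # index of the block's top-left cell
    used = set(grid[9 * row + c] for c in range(9))
    used |= set(grid[9 * r + col] for r in range(9))
    used |= set(grid[block + 9 * r + c] for r in range(3) for c in range(3))
    return [d for d in range(1, 10) if d not in used]

def solve(grid):
    try:
        pos = grid.index(0)          # first empty cell
    except ValueError:
        return grid                  # no empty cell left: solved
    for d in candidates(grid, pos):
        grid[pos] = d
        result = solve(grid)
        if result:
            return result
    grid[pos] = 0                    # undo and backtrack
    return None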
Spacetime dynamics and RG flow
A couple of days ago there appeared a paper by Freedman, Headrick, and Lawrence that I find highly original. It does not just follow up on a number of other papers but actually answers a question that has been lurking around for quite a while but had not really been addressed so far (at least as far as I am aware). I had asked myself the question before but attributed it to my lack of understanding of the field and never worried enough to try to work it out myself. At least, these gentlemen have, and produced this beautiful paper.
It is set in the context of tachyon condensation (and this is of course where all this K-Theory stuff is located): You imagine setting up some arrangement of branes and (as far as this paper is concerned even more important, as this is about closed strings) some spatial manifold (if you want, with first fundamental form, that is, the conjugate momentum to a spatial metric) with all the fields you like in terms of string theory and ask what happens.
In general, your setup will be unstable. There could be forces or you could be in some unstable equilibrium. The result is that typically your space-time goes BOOOOOOOOOOM as you had Planck scale energy densities all around but eventually the dust (i.e. gravitational and other radiation) settles and you ask: What will I find?
The K-Theory approach to this is to compute all the conserved charges before turning on dynamics and then predicting you will end up in the lowest energy state with the same value for all the charges (here one might worry that we are in a gravitational theory which does not really have local energy density but only different expansion rates but let's not do that tonight). Then K-Theory (rather than for example de Rham or some other cohomology) is the correct theory of charges.
The disadvantage of this approach is that it is potentially very crude and just knowing a couple of charges might not tell you a lot.
You can also try to approach the problem from the worldsheet perspective. There you start out with a CFT and perturb it by a relevant operator. This kicks off a renormalisation group flow and you will end up in some other CFT describing the IR fixed point. General lore tells you that this IR RG fixed point describes your space-time after the boom. The c-theorem tells you that the central charge decreases during the flow but of course you want a critical string theory before and after and this is compensated by the dilaton getting the appropriate slope.
The paper addresses this lore and checks if it is true. The first concern is of course that proper space-time dynamics is expected to (classically) be given by some ordinary field equation in some effective theory, with typically two time derivatives and time reversal symmetry, where the beta functions play the role of the force. In contrast, RG flow is a first order differential equation where the beta functions point in the direction of the flow. And (not only because of the c-theorem) there is a preferred direction of time (downhill from UV to IR).
As it is shown in the paper, this general scheme is in fact true. And since we have to include the dilaton anyway, this also gets its equation of motion and (like the Hubble term in Friedmann-Robertson-Walker cosmology) provides a damping term for the space-time fields. So, at least for large damping, the space-time theory is also effectively first order, but at small (or negative, which is possible and of course needed for time reversal) damping the dynamics is of a different character.
What the two descriptions agree on is the set of possible end-points of this tachyon condensation, but in general the dynamics is different and because second order equations can overshoot at minima, the proper space-time dynamics can end up in a different minimum than predicted by RG flow.
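To see this overshooting in the simplest possible setting, here is a toy numerical sketch (my own made-up example with a tilted double well, nothing taken from the paper): the gradient flow, like the RG flow, only ever runs downhill and stays in the basin it starts in, while the second order dynamics with small damping sails right past the nearby minimum and over the barrier; with very large damping it behaves like the first order flow again.

# Toy illustration (my own example, not from the paper): gradient flow vs.
# weakly damped second order dynamics in the tilted double well V(x) = (x^2-1)^2 + 0.8 x.
# The nearby minimum sits near x = 0.89, the barrier near x = 0.21, the deeper minimum near x = -1.09.

def dV(x):
    return 4.0 * x * (x * x - 1.0) + 0.8

def gradient_flow(x0, dt=1e-3, steps=100000):
    x = xmin = x0
    for _ in range(steps):
        x -= dt * dV(x)                  # first order: always moves downhill
        xmin = min(xmin, x)
    return x, xmin

def damped_newton(x0, gamma, dt=1e-3, steps=100000):
    x, v = x0, 0.0
    xmin = x0
    for _ in range(steps):
        v += dt * (-dV(x) - gamma * v)   # second order with friction gamma
        x += dt * v
        xmin = min(xmin, x)
    return x, xmin

print(gradient_flow(1.8))                # settles at the nearby minimum, never gets past the barrier
print(damped_newton(1.8, gamma=0.15))    # small damping: overshoots well beyond the barrier near x = 0.21
print(damped_newton(1.8, gamma=6.0))     # large damping: effectively first order again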
All this (with all details and nice calculations) is in the paper and I can strongly recommend reading it!
Hamburg summary
So, finally, I am back on the train to Bremen and can write up my summary of the opening colloquium of the centre for mathematical physics in Hamburg.
As I reported earlier, this was a conference with an exceptionally well selected program. Not that all talks were exactly on topics that I think about day and night, but with very few exceptions the speakers had something interesting to say and found good ways to present it. Well done, organisers! I hope your centre will be as successful as this colloquium!
The first physics talk on Thursday was Nikita Nekrasov who talked about Berkovits' pure spinor approach. As you might know, this is an attempt to combine the advantages of the Green-Schwarz and the Ramond-Neveu-Schwarz formalisms for superstrings and gives a covariant formulation with manifest supersymmetry in the target (amongst other things, Lubos has talked about this before). This is done by including not only the X and theta coordinates of the target superspace but also an additional spinor lambda which obeys the "pure spinor" constraints lambda gamma^i lambda = 0 for all i. You can convince yourself that this equation describes the cone over SO(10)/U(5). This space has a conical singularity at the origin and Nikita asked whether this can really give a consistent quantization. In particular, the beta-gamma ghosts for the spinors have to be well defined not only in a patch but globally.
Nikita argued (showing quite explicitly how Cech cohomology arises) that this requires the first two Chern classes to vanish. He first showed how not to and then how to properly resolve the singularity of the cone and concluded that in the end the pure spinor quantization is in fact consistent. However (and unfortunately my notes are cryptic at that point) he mentioned that there are still open problems when you try to use this approach for worldsheets of genus larger than two. Thus, even in this approach there might still be technical difficulties in defining string amplitudes beyond two loops.
The next speaker was Roberto Longo. He is one of the big shots in the algebraic approach to quantum field theory and he talked about 2D conformal theories. As you know, the algebraists start from a mathematical definition of a quantum field theory (a Haag-Kastler net, which is a refinement of the Wightman axioms) and then deduce general theorems with proofs (of mathematical standard) valid for large classes of QFTs. The problem however is to give examples of theories that can be shown to obey their definition. Free fields do but are a bit boring after a while. And perturbative descriptions in terms of Feynman rules are no good unless the expansion can be shown to converge (which is probably not the case). You could use the lattice regularization to get a handle on gauge theories, but there you have to show (and this hasn't been done despite decades of attempts in the constructive field theory community) Lorentz invariance, positivity of the spectrum and locality, all after the continuum limit has been taken. So you have a nice framework but you are not sure what theories it applies to (although there is little doubt that asymptotically free gauge theories should be in that class). Now Longo reviewed how you can cast the usual language of 2d CFTs into their language and thus have additional, interacting examples. He displayed several theorems that, however, sounded vaguely familiar to people who have some background in the BPZ approach to CFTs.
The last speaker of Thursday was Nikolai Reshetikhin. He started out with a combinatorial problem of certain graphs with two-coloured vertices, transformed that into a dimer model and ended up getting a discrete version of a Dirac operator on graphs (in the sense that the adjacency matrix can give the Laplacian). He also mentioned a related version of the Boson-Fermion correspondence and a relation to the quantum foam of Vafa and collaborators, but again my notes are too sparse to be of any more use there.
Friday morning started with Philippe Di Francesco. He started out with a combinatorial problem again: Count 4-valent planar graphs with two external edges. He transformed this into rooted ternary trees with black and white leaves and always one more black than white leaf. This could be solved by writing down a solvable recursion relation for the generating function. The next question was how many edges (in the first graph) have to be traversed to get from the outside to the face with the other external edge. Again there was a (now slightly more involved) generating function which he again solved, and he showed that the solution can be thought of as a one-soliton solution in terms of a tau function.
After that, he talked about the six-vertex model and treated it with similar means, showed a beautiful movie of how the transfer matrix acts and suddenly was right in the middle of Perron-Frobenius eigenvectors, Temperley-Lieb algebras and the Yang-Baxter equation. Amazing!
Then came Tudor Ratiu who gave quite a dramatic presentation but I have to admit I did not get much out of it. It was on doing the symplectic treatment of symmetries in the infinite dimensional case and how to deal with the functional analysis issues coming up there (in general what would be a Hamiltonian vector field is not a vector field etc.)
John Cardy discussed Stochastic Loewner Evolution: Take as an example the 2D Ising model on a hexagonal lattice and instead of the spins view the phase boundary as your fundamental observable. Then you can ask about its statistics and again in the continuum limit this should be given in terms of a conformal field theory. He focussed on a phase boundary that runs from boundary to boundary. The trick is to parametrise it by t and consider it only up to a certain t1. If the domain before was the disk it is now a disk with a path that wiggles from the boundary somewhere into the interior. By the uniformisation theorem there is a function that maps the complement of the path again onto the unit disk, call it g_t1. Instead of looking at the propagation of the path you can ask how g_t1 varies if you change t1. Cardy derived a differential equation for g_t1 and argued that all the information about the CFT is encoded in the solution to this equation with the appropriate boundary conditions.
The afternoon was started by Robbert Dijkgraaf. He reviewed the connection between black hole entropy (including quantum corrections as computed by Cardoso, de Wit and Mohaupt) and wave functions in topological string theory. He did not give many details (which was good given the broad audience) but one thing I had not yet heard about is how to understand why the entropy (after the Legendre transform to electric charges and chemical potential that Vafa and friends discovered to simplify the Cardoso-de Wit-Mohaupt result) has to be treated like a wave function while the topological string partition function appears like a probability. Dijkgraaf proposed that the fact that Omega, the holomorphic volume form, varies over a SLAG in the complex structure moduli space could be a key to understanding this, as a Lagrangian submanifold is exactly where a wave function lives after quantization (it only depends on positions and not on momenta!). Furthermore, he displayed the diagram for open-closed string duality that can be viewed as a loop of an open string stretched between two D-branes or the D-branes exchanging a closed string at tree level. He interpreted this as an index theorem: The open string loop looked like Tr((-1)^F D-slash) with the trace for the loop, while the closed string side is given by the integral over ch(E1) ch(E2) A-roof(R) where E1 and E2 are bundles on the D-branes. He argued that the right hand side looked like a scalar product with the Chern classes as wave function and the A-roof genus as measure. He went on discussing YM on a 2-torus and free fermions (via the eigenvalues of the holonomy). Those are non-relativistic and thus have two Fermi surfaces, one for each sign of the square root in the dispersion relation. Perturbations are then about creating holes in these Fermi surfaces and 1/N (N being interpreted as the difference between the two surfaces) effects appear when a hole makes it through the energy band to the other Fermi surface. This again can be computed via a recursion relation and Dijkgraaf ended by interpreting it as being about a grand canonical ensemble of multiple black holes rather than a single one.
Then came Bob Wald who reviewed thirty years of quantum field theory on curved backgrounds. If you leave Minkowski space you have to give up many things that are quite helpful in the flat space approach: Poincare invariance, a preferred vacuum state, the notion of particles (as irreps of the Poincare group), a Fourier transform to momentum space, Wick rotation, the S-Matrix. Wald gave an overview of how people have learnt to deal with these difficulties and which more general concepts replace the flat space ones. In the morning, the lecture room was quite cool and more and more people put on their coats. In contrast, in the afternoon the heating worked properly, however at the expense of higher levels of carbon dioxide that in my case overcame the effects of lots of coffee from the coffee breaks. So for this lecture I cannot tell you any more.
Last speaker before the banquet was Sasha Zamolodchikov. He again admitted to mainly living in two dimensions and discussed the behaviour of the mass gap and free energy close to criticality. Those are dominated by the most relevant operator perturbing the CFT and are usually well understood. He, however, wanted to understand the sub-leading contributions and gave a very general argument (which I am unfortunately unable to reproduce) for why the expectation value of the L_(-2) L-bar_(-1) descendant of the vacuum (which is responsible for these sub-leading effects) is given by the energy density.
The last day started out (half an hour later than Friday, as I only found out by being the only one at the lecture hall) with Martin Zirnbauer. As he mentioned, many different systems (atomic nuclei, disordered metallic grains, chaotic billiards, microwaves in a cavity, acoustic modes of vibration of solids, quarks in non-abelian gauge theory (?) and the zeros of the Riemann zeta function) show similar spectral behaviour: When you plot the histogram of energy differences between levels you do not get a Poisson distribution, as you would if the energy levels were just random, but a curve that starts off with a power law and later decays exponentially. There are three different power laws and the universality classes are represented by Gaussian matrix models with either hermitian, real symmetric or quaternion self-dual matrices. This has been well known for decades. Zirnbauer now argued that you will get 11 classes if you allow for super-matrices. He mentioned a theorem of his that shows that any Hamiltonian quadratic in fermionic creation and annihilation operators is in one of those classes (although I did not understand the relevance of this result for what he discussed before). He went on and claimed (again not convincingly to me) that the physics of the earlier systems would be described by a non-linear sigma model with these 11 supermatrix spaces as targets. He called all this supersymmetry but to me it sounded as if at best this was about systems with both bosons and fermions. In the discussion he had to admit that although he has supergroups, the Hamiltonian is not an element of these, and thus the crucial relation H={Q,Q} that gives us all the nice properties of really supersymmetric theories does not hold in his case.
Then came Matthias Staudacher who gave a nice presentation of integrability properties in the AdS/CFT correspondence, in particular in spin chains and rotating strings. Most of this we have heard already several times, but new to me was the detailed description of how the (generalised) Bethe ansatz arises. As you know, the results about spin chains and strings do not agree anymore at the three-loop level. This is surprising as they agreed up to two loops, but on the other hand you are doing different expansions in the two cases, so this does not mean that the AdS/CFT correspondence is in trouble. This is pretty much like the situation in the M(atrix)-model vs. supergravity: There are certain amplitudes that work (probably those protected by susy) and certain more complicated ones that do not. Matthias summarised this by making the statement "Who cares about the correspondence if you have integrability?"
The conference was rounded off by Nigel Hitchin who gave an overview of generalised geometry. Most of this is beautifully explained in Gualtieri's thesis, but there are a few points to note: Hitchin only talked about generalised metrics (given in terms of generalisations of the graph of g in TM+T^*M); he did not mention generalised complex structures (except in the Q&A period). He showed how to write the Levi-Civita connection (well, with torsion given by +/- H) in terms of the Lie- and the Courant-bracket and the generalised metric (actually g+/-B) given in terms of maximal rank isotropic subbundles. What was new to me was how to carry out generalised Hamiltonian reduction of a group action (which he said was related to gauged WZW-models): The important step is to lift the Hamilton vector field X to X + xi_a where a labels the coordinate patch under consideration. It is important that under changes of coordinates xi changes as xi_b - xi_a = i_X dA_ab where A_ab is the 1-form that relates the two B-fields B_a and B_b. Then one can define L_X (Y+eta_a) = Lie_X (Y+eta_a) - i_Y dxi_a in terms of the Lie derivative Lie. This is globally defined as it works across patches. Now if you have a symmetry, take K to be the bundle of its Hamilton vector fields and K-perp its orthogonal bundle (containing K). Then what you want is the bundle E-bar = (K-perp / K)/G. You have the exact sequence 0->T*(M/G)->E-bar->T(M/G)->0 with non-degenerate inner product, and the Courant bracket descends nicely, but it is not naturally a direct sum. Furthermore, you can define the 'moment form' c = i_X B_a - xi_a which makes sense globally. We have dc = i_X H and on the quotient g(X,Y) = i_Y c. Note that even when dB=0 on M before, we can have H non-vanishing in cohomology on M/G because the horizontal vector bundle can have a curvature F, and in fact downstairs one computes H=cF. Again, as always in this generalised geometry context, I find this extremely beautiful!
Update: After arriving at IUB, I see that Urs has reported from Nikita's talk.
Update: Giuseppe Policastro has pointed out a couple of typos that I corrected.
No news is good news
You might have noticed that I haven't reported from the Hamburg opening colloquium, yet. First of all this is due to the fact that there is no wlan in the lecture hall (or at least no wlan that I can log into) and second, and that really is the good news, that the talks are so interesting that I am far too busy listening and taking notes to turn on my laptop. The organisers really have done a great job in selecting not only prominent speakers but at the same time people who know how to give good talks. Thanks a million!
However, you, my dear readers, will have to wait until Monday for me to give you a conference summary.
More conference reporting
I just found that the weekly quality paper "Die Zeit" has an interview with Smolin on the occasion of Loops '05. Probably no need to learn German for this, nothing new: String theory doesn't predict anything because there are 10^500 string theories (they lost the ^ somewhere), Peter W. can tell you more about this, stringy people have lost contact with experiment, LQG people do better because they predict a violation of the relativistic dispersion relation for light (is this due to the 3+1 split of their canonical formalism?), and Einstein would have been suppressed today because he was an independent thinker and not part of the (quantum mechanics) mainstream.
I was told, "Frankfurter Allgemeine Sonntagszeitung" also had a report on Loops '05. On their webpage, the article costs 1.50 Euros and I am reluctant to pay this. Maybe one of my readers has a copy and can post/fax it to me?
Tomorrow, I will be going to Hamburg where for three days they are celebrating the opening of the centre for mathematical physics. This is a joint effort of people from the physics (Louis, Fredenhagen sen., Samtleben, Kuckert) and math (Schweigert) departments of Hamburg university and the DESY theory group (Schomerus, Teschner). This is only one hour away and I am really looking forward to having a stringy critical mass coming together in northern Germany. Speakers of the opening colloquium include Dijkgraaf (hopefully he will make it this time), Hitchin, Zamolodchikov, Nekrasov, Cardy and others.
If there is some reasonable network connection, there will be some more live blogging. Urs is now a postdoc in Christoph Schweigert's group, so I assume he will be online as well.
Classical limit of mathematics
The most interesting periodic event at IUB is the mathematics colloquium as the physicists don't manage to get enough people together for a regular series. Today, we had G. Litvinov who introduced us to idempotent mathematics. The idea is to build upon the group homomorphism x-> h ln(x) for some positive number h that maps the positive reals and multiplication to the reals with addition.
So we can call addition in R "multiplication" in terms of the preimage and we can also define "addition" in terms of the pre-image. The interesting thing is what becomes of this when we take the "classical limit" h->0: Then "addition" is nothing but the maximum and this "addition" is idempotent: a "+" a = a.
This is an example of an idempotent semiring and in fact it is the generic one: Besides idempotency, it satisfies many of the usual laws: associativity, the distributive law, commutativity. Thus you can carry over much of the usual stuff you can do with fields to this extreme limit. Other examples of this structure are Boolean algebras or compact convex sets where "multiplication" is the usual sum of sets and "addition" is the convex hull (obviously, the above example is a special case). Another example are polynomials with non-negative coefficients, and for these the degree turns out to be a homomorphism! The obvious generalization of the integral is the supremum and the Fourier transform becomes the Legendre transform (you have to work out what the characters of the addition are!).
This theory has many applications, it seems especially strong for optimization problems. But you can also apply this limiting procedure to algebraic varieties under which they are turned into Newton polytopes.
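If you want to play with this, here is a tiny numerical sketch (my own toy code, not anything from the talk) of how ordinary addition degenerates into the maximum as h goes to zero:

# Tiny sketch of the "classical limit" of arithmetic (my own toy code, not from the talk).
# The map x -> h*ln(x) turns (positive reals, *) into (reals, +); ordinary addition becomes
# the "deformed addition"  a (+) b = h*ln(exp(a/h) + exp(b/h)), which tends to max(a, b) as h -> 0.

import math

def deformed_add(a, b, h):
    return h * math.log(math.exp(a / h) + math.exp(b / h))

for h in (1.0, 0.1, 0.01):
    print(h, deformed_add(2.0, 3.0, h))   # approaches max(2, 3) = 3 as h shrinks

# In the limit the semiring operations are:  "addition" = max,  "multiplication" = +.
# Idempotency:  a "+" a = a.
print(max(5.0, 5.0))                       # 5.0

# And the degree of a polynomial becomes a homomorphism: for p, q with non-negative
# coefficients, deg(p*q) = deg(p) + deg(q) and deg(p+q) = max(deg(p), deg(q)).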
I enjoyed this seminar especially because it made clear that many constructions can be thought of extreme limits of some even more common, linear constructions.
But now for something completely different: When I came back to my computer, I had received the following email:
Dear Mr. Helling
I would greatly appreciate your response.
Please what is interrelation mutually
fractal attractor of the black hole condensation,
Bott spectrum of the homotopy groups
and moduli space of the nonassociative geometry?
Thank you very much obliged.
[Sender]
I have no idea what he is talking about but maybe one of my readers has. I googled for a passage from the question and found that exactly the same question has also been posted in the comment sections of the Coffee Table and Lubos's blog.
How to read blogs and how not to write blogs
I usually do not have articles here saying basically "check out these articles in other blogs, I liked them". Basically this is because I think if you, dear reader, have found this blog you will be able to find others that you like as well, so no need for me to point you around the blogosphere. And, honestly, I don't think too many people read this blog anyway. I don't have access to the server's log files and I do not have a counter (I must say, I hate counters because often they delay loading a page a lot). But it happens more and more often that I meet somebody in person and she/he tells me that she/he has read this or that in atdotde. So in the end, I might not write for the big bit bucket.
My reporting on Loops '05 was picked up in other places so that might have brought even more readers to my little place. I even got an email from a student in China telling me that he cannot read atdotde anymore (as well as, for example, Lubos' Reference Frame). Unfortunately, I had to tell him that this was probably due to the fact that his government decided to block blogger.com from the Chinese part of the net because blogs are way too subversive.
So as a little service for some of my readers who do not already know, here is a hint on how to read blogs: Of course you can, if you have some minutes of boredom, type your friends' names into google and surf to their blogs every now and then. That is fine. Maybe at some point you want to find out what's new in all those blogs. So you go through your bookmarks (favourites in MS speak) and check if you've seen everything that you find there.
But that is cyber stone age! What you want is a "news aggregator". This is a little program that does this for you periodically and informs you about the new articles it found. You just have to tell it where to look. This comes in the form of a URL called the "RSS feed". Often you find little icons in the sidebar of the blogs that link to that URL. In others, like this one, you have to guess. For all the blogs on blogger.com it is of the form URL_of_blog/atom.xml, so for atdotde it is http://atdotde.blogger.com/atom.xml. You have to tell your news aggregator about this URL. In the simplest form, this is just your web browser. Firefox calls it "live bookmarks": You open the "manage bookmarks" window and select "new live bookmark" from the menu. I use an aggregator called liferea, which even opens a little window once it finds anything new, but there are many others.
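And if you prefer a few lines of code over any of these programs, something like the following already does the job (a sketch in Python, assuming you have the feedparser module installed):

# A minimal command-line "news aggregator" (a sketch; assumes the feedparser module is installed).
import feedparser

feeds = [
    "http://atdotde.blogger.com/atom.xml",       # this blog
    # add the atom/RSS URLs of the other blogs you monitor here
]

for url in feeds:
    feed = feedparser.parse(url)
    print(feed.feed.get("title", url))
    for entry in feed.entries[:5]:               # show the five most recent posts
        print("  -", entry.get("title", "(no title)"), entry.get("link", ""))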
Coming back to the theme of the beginning, I will for once tell you which blogs I monitor (in no particular order):
Amelies Welt (in German) I know Amelie from a mailing list and like the variety of topics she writes about.
BildBlog They set straight the 'news' from the biggest German tabloid. And it's funny.
Bitch, PhD Academic and feminist blogger. I learned a lot.
String Coffee Table you can chat about strings while sipping a coffee.
Musings The first blog of a physicist I came across. Jacques still sets standards.
Die Schreibmaschine Anna is a friend from Cambridge, currently not very active because...
Broken Ankle Diary a few weeks ago she broke her ankle
Lubos Motl's Reference Frame Strong opinions on physics, global warming, politics.
hep-th The arxiv now also comes in this form but still I prefer to read it in the classic way.
Jochen Weller One of the Quantum Diaries. Jochen was in Cambridge while I was there.
Preposterous Universe Sean's blog is dead now because he is part of...
Cosmic Variance Currently my best loved blog. Physics and everything else.
Not Even Wrong Peter Woit has one point of criticism of string theory that he keeps repeating. But he is a very reasonable guy.
Daily ACK I met Al on a dive trip in the English Channel. Some astronomy and some Apple and Perl news.
Physics Comments Sounded like a good idea but not really working at least in the hep-th area.
Now I should send trackback pings. This is such a pain with blogger.com...
Ah, I nearly forgot: This article about how academic blogs can hurt your job hunting scares me a lot! (I admit, I found it on Cosmic Variance.)
IUB is noble
Coming back from Loops '05 I find a note in my mailbox that the International University Bremen now has an Ig Nobel Laureate amongst its faculty: V. Benno Meyer-Rochow has received the prize in fluid dynamics for his work on the pressure produced when penguins poo.
More news on the others
The careful reader will have noticed that yesterday my blogging got sparser and sparser. This was probably due to increasing boredom/annoyance on my part. Often, I thought the organisers should have applied the charter of sci.physics.research that forbids contributions that are so vague and speculative that they are not even wrong. I could not stand it anymore and had to leave the room when the speaker (I am not going to say who it was) claimed that "the big bang is just a phase transition".
Today, I give it a new shot. And the plenary talks are promising. Currently, John Baez has been giving a nice overview of various incarnations of spin foam models (he listed Feynman diagrams, lattice gauge theory and topological strings among them, although I am under the impression that on the last point he is misguided, as topological strings in fact take into account the complex/holomorphic structure of the background). However, starting from the point "what kind of matter do we have to include to have a nice continuum limit" he digressed via a Witten anecdote (who, when asked whether he thinks LQG is right, said that he hoped not, because he hoped (in the 90s) that there is only one unique matter content (ie strings) consistent with quantised gravity) to making fun of string theorists, asking them to do their homework and check the 10^whatever vacua in the landscape.
The next speaker will be Dijkgraaf who hopefully will do a better job than Theissen did yesterday in showing that stringy people have interesting, deep stuff to say about physics.
Unfortunately, electro-magnetism lectures back in Bremen require me to run off at 11:00 and catch the train so I will not be able to follow the rest of the conference closely.
Baez got back on track with a nice discussion of how Lorentzian Triangulations fit into the scheme of things and what role the prescribed time slicing might have on large scale physics (introducing further terms than Lambda and R in the LEEA). He also showed a supposed-to-be spin foam version of it.
Oh no. They have grad students and young postdocs as chairpersons. Bianca Dittrich just announced "Our next speaker is Robbert Dijkgraaf" and nothing happened. It seems Dijkgraaf didn't make it here on the early morning plane. Now, I can fulfil the anonymous reader's wish and report on the presentation of Laurent Freidel, the next speaker.
Before I power down my laptop: Freidel looks at effects of quantum gravity on low energy particle actions. In order to do that he couples matter to the Ponzano-Regge model and then will probably try to integrate out the gravitational degrees of freedom.
I sneaked into the Loops '05 Conference at the AEI at Potsdam. So, I will be able to give you live blogging for today and tomorrow. After some remarks by Nicolai and Thiemann and the usual impedance mismatch between laptops and projectors, Carlo Rovelli has started the first talk. He is on slide 2, and still reviews recent and not so recent developments of LQG.
Rovelli talked about his paper on the graviton propagator. If you like he wants to recover Newton's law from his model. The obvious problem of course is that any propagator g(x,y) cannot depend on x or y if everything is diffeomorphism invariant (at least in these people's logic). So he had to include also a dependence on a box around the 'detector' and introduce the metric on the box as boundary values. He seems to get out of this problem by in fact using a relational notion as you would of course have to in any interesting background independent theory (measure not with respect to coordinates but with respect to physical rulers). Then there was a technical part which I didn't quite get and in the end he had something like g(x,y)=1/|x-y|^2 on his slide. This could be interesting. I will download the paper and read it on the train.
Next is Smolin. Again computer problems, this time causing an unscheduled coffee break. Smolin started out talking about problems of background independent approaches, including unification and the nature of anomalies. Then, however, he decided to focus on another one: How does macroscopic causality arise? He doesn't really know, but looked at some simple models where macro causality is usually destroyed by some non-local edges (like in a small world network). Surprisingly, he claims, these non-local connections do not change macroscopic physics (critical behaviour) a lot and thus they are not really detectable.
Even more, these non-local "defects" could, according to Smolin, play the role of matter. Then he showed another model where instead of a spin network, the physics is in twisted braided ribbon graphs. There, he called some configurations "quarks" and assigned the usual quantum numbers and ribbon transformations for C, P and T. Then it got even better; the next slides mentioned the problem of small power in low l modes in the CMB ("scales larger than 1/Lambda"), the Pioneer anomaly and the Tully-Fisher relation that is the empirical observation behind MOND. I have no idea what his theory has to do with all these fancy open problems. Stefan Theissen next to me makes interesting noises of astonishment.
Next speaker is John Barrett. This talk sounds very solid. He presents a 3+0 dimensional model which to me looks much like a variant of a spin network (a graph with spin labels and certain weight factors for vertices, links, and tetrahedra). He can do Feynman graph like calculations in this model. Further plus: A native speaker of British English.
Last speaker of the forenoon is Stefan Theissen. He tries to explain how gravity arises from string theory to the LQG crowd. Many have left before he started and so far he has only presented string theory as one could have done this already 20 years ago: Einstein's equation as consistency requirement for the sigma model and scattering amplitudes producing the vertices of the Einstein Hilbert action. Solid but not really exciting.
In the afternoon, there are parallel sessions. I chose the "seminar room". Here, Markopoulou presents her idea that dynamics in some (quantum gravity?) theory has formal similarities to quantum information processing. In some Ising type model she looks at the block spin transformation and reformulates the fact that low energy fields only talk to the block spins and not to the high frequency fields. With some fancy mathematical machinery, she relates this to error correction where the high frequency fields play the role of noise.
Next is Olaf Dreyer. Very strange. He proposes that quantum mechanics should be deterministic and non-linear. Most of what he says are philosophical statements (and I by no means agree with all of them), but what seems to be at the core of it is that he does not want macroscopic states that are superpositions of elementary states. I thought that was solved by decoherence long ago...
At least Rovelli asks "[long pause] maybe I didn't understand it. you make very general statements. But where is the physics?"
The next speaker is Wang who expands a bit on what Smolin said in the morning. It's really about Small World Networks (TM). If you have such a network with gauge flux along the edges then in fact a non-local random link looks locally like a charged particle. This is just like in Wheeler's geometrodynamics. The bulk of the talk is about the Ising model on a lattice with a small number of additional random links. The upshot is that the critical temperature and the heat capacity as well as the correlations at criticality do not depend much on the existence of the additional random links.
Martinetti reminds us that time evolution might have a connection with temperature. Concretely, he wants to take the Tomita-Takesaki unitary evolution as time evolution and build a KMS state out of it. There is a version of the Unruh effect in the language of KMS states and Martinetti works out the correction to the Unruh temperature from the fact that the observer might have a finite lifetime. This correction turns out to be so small that, by uncertainty, one would have to measure longer than the lifetime to detect the difference in temperature.
I stopped reporting on the afternoon talks as I did not get much out of those. Currently, Rüdiger Vaas, a science journalist, is the last speaker of the day. He at least admits that his talk is on philosophy rather than physics. His topic is the philosophical foundations of big bang physics.
Faster than light or not
I don't know about the rest of the world but here in Germany Prof. Günter Nimtz is (in)famous for his experiments that he claims show that quantum mechanical tunneling happens instantaneously rather than according to Einstein causality. In the past, he got a lot of publicity for that and according to Heise online he has at least put out a new press release.
All these experiments are similar: First of all, he is not doing any quantum mechanical experiments but uses the fact that the Schrödinger equation and the wave equation share similarities. And as we know, in vacuum Maxwell's equations imply the wave equation, so he uses (classical) microwaves as they are much easier to produce than the matter waves of quantum mechanics.
So what he does is to send a pulse of these microwaves through a region where "classically" the waves are forbidden, meaning that they do not oscillate but decay exponentially. Typically this is a waveguide with a diameter smaller than the wavelength.
Then he measures what comes out at the other side of the waveguide. This is another pulse of microwaves which is of course much weaker and so needs to be amplified. Then he measures the time difference between the maximum of the weaker pulse and the maximum of the full pulse when the obstruction is removed. What he finds is that the weak pulse has its maximum earlier than the unobstructed pulse, and he interprets this as the pulse having travelled through the obstruction at a speed greater than the speed of light.
Anybody with a decent education will of course immediately object that the microwaves propagate (even in the waveguide) according to Maxwell's equations which have special relativity built in. Thus, unless you show that Maxwell's equations do not hold anymore (which Nimtz of course does not claim) you will never be able to violate Einstein causality.
For people who are less susceptible to such formal arguments, I have written a little program that demonstrates what is going on. The result of this program is this little movie.
The program simulates the free 2+1 dimensional scalar field (of course again obeying the wave equation) with Dirichlet boundary conditions in a certain box that is similar to the waveguide: At first, the field is zero everywhere in the strip-like domain. Then the field on the upper boundary starts to oscillate with a sine wave and indeed the field propagates into the strip. The frequency is chosen such that the wave can in fact propagate in the strip.
(These are frames 10, 100, and 130 of the movie; further down are 170, 210, and 290.) About in the middle, the strip narrows like in the waveguide. You can see that the blob of field in fact enters the narrower region but dies down pretty quickly. In order to see anything in the display (like for Nimtz), I amplify the field in the lower half of the picture by a factor of 1000. After the obstruction ends, the field again propagates as in the upper bit.
What this movie definitely shows is that the front of the wave (and this is what you would use to transmit any information) everywhere travels at the same speed (that of light). All that happens is that the narrow bit acts like a high pass filter: What comes out undisturbed is in fact just the first bit of the pulse, which more or less by accident has the same shape as a scaled down version of the original pulse. So if you are comparing the timing of the maxima you are comparing different things.
Rather, the proper thing to compare would be the timing of when the field first gets above a certain level, one that is actually reached by the weakened pulse. Then you would find that the speed of propagation is the same, independent of whether the obstruction is there or not.
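For the curious, here is a stripped-down sketch of the kind of simulation I mean (not the actual program behind the movie; grid size, frequency and geometry are made-up numbers):

# Stripped-down sketch of the simulation (not the actual program behind the movie;
# grid size, frequency and geometry are made-up numbers). Leapfrog finite differences
# for the 2+1 dimensional wave equation with Dirichlet boundaries.
import numpy as np

nx, ny = 40, 300           # strip: narrow in x, long in y
c, dx, dt = 1.0, 1.0, 0.4  # dt < dx/(c*sqrt(2)) for stability
steps = 600
omega = 2 * np.pi / 15.0   # driving frequency at the upper edge (wavelength ~ 15)

u_old = np.zeros((nx, ny))
u = np.zeros((nx, ny))

# mask of points where the field may be non-zero (Dirichlet walls elsewhere);
# the strip narrows to a sub-wavelength channel in the middle third
mask = np.zeros((nx, ny), dtype=bool)
mask[1:-1, 1:-1] = True
mask[:, 100:200] = False
mask[18:22, 100:200] = True    # narrow channel, width ~4 << wavelength ~15: evanescent

for n in range(steps):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx**2
    u_new = 2 * u - u_old + (c * dt)**2 * lap
    u_new[~mask] = 0.0                        # Dirichlet walls / obstruction
    u_new[1:-1, 0] = np.sin(omega * n * dt)   # drive the upper edge with a sine wave
    u_old, u = u, u_new

# Like in the post: the transmitted field is many orders of magnitude weaker,
# so it has to be amplified before you can see anything at all.
print("max |field| before the channel:", np.abs(u[:, :100]).max())
print("max |field| after  the channel:", np.abs(u[:, 200:]).max())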
Update: Links updated DAMTP-->IUB
Negative votes and conflicting criteria
Yesterday, Matthijs Bogaards and Dierk Schleicher ran a session on the electoral system for the upcoming general election we are going to have on Sunday in Germany. I had thought I knew how it works but I was proven wrong. Before, I was aware that there is something like Arrow's impossibility theorem which states that there is a certain list of criteria your electoral system is supposed to fulfill but which cannot all hold at the same time in any implementation. What typically happens are cyclic preferences (there is a majority for A over B and one for B over C and one for C over A), but I thought all this is mostly academic and does not apply to real elections. I was proven wrong and there is a real chance that a paradoxical situation is coming up.
Before explaining the actual problem, I should explain some of the background. The system in Germany is quite complicated because it tries to accommodate a number of principles: First, after the war, the British made sure the system contains some component of constituency vote: Each local constituency (electoral district for you Americans) should send one candidate to parliament who is in principle directly responsible to the voters in that district, so voters have something like "their representative". Second, proportional vote, that is, the number of seats for a party should reflect the percentage of votes for that party in the popular vote. Third, Germany is a federal republic, so the sixteen federal states should each send their own representatives. Finally, there are some practical considerations, like the number of seats in parliament should be roughly 600 and you shouldn't need a PhD in math and political science to understand your ballot.
So this is how it works. Actually, it's slightly more complicated but that shall not bother us here. And I am not going into the problem of how to deal with rounding errors (you can of course only have integer seats), which brings with it its own paradoxes. What I am going to cover is how to deal with the fact that the number of seats has to be non-negative:
The ballot has two columns: In the first, you vote for a candidate from your constituency (which is nominated by its party). In the second, you vote for a party for the proportional vote. Each voter makes one cross in each column, one for a candidate from the constituency and one for a party in the proportional vote. There are half as many constituencies as there are seats in parliament and these are filled immediately according to majority vote of the first column.
The second step is to count the votes in the second column. If a party neither gets more than five percent of those nor wins three or more constituencies, its votes are dropped. The rest is used to work out how many of the total of 600 seats each of the parties gets.
Now comes the federal component: Let's consider party A and assume the popular vote says they should get 100 seats. We have to determine how these 100 seats are distributed between the federal states. This is again done proportionally: Party A in federal state (i) gets that percentage of the 100 seats that reflects the percentage of the votes for party A from state (i) out of the total votes for party A in all of Germany. Let's say this is 10. Further assume that A has won 6 constituencies in federal state (i). Then, in addition to these 6 candidates from the constituencies, the top four candidates from party A's list for state (i) are sent to Berlin.
So far, everything is great: Each constituency has "their representative" and the total number of seats for each party is proportional to its share of the popular vote.
Still, there is a problem: The two votes in the two columns are independent. And as the constituencies are determined by majority vote, except in a few special cases (Berlin Kreuzberg, where I used to live before moving to Cambridge, being one with the only constituency winner from the green party), it does not make much sense to vote for a constituency candidate who is not nominated by one of the two big parties. Any other vote would likely be irrelevant and effectively your only choice is between the candidates of SPD and CDU.
Because of this, it can (and in fact often does for the two big parties) happen that a party wins more constituencies in a federal state than it is entitled to for that state according to the popular vote. In that case (because there are no negative numbers of candidates from the list to balance this) the rule is that all the constituency winners go to parliament and none from the list of that party. The parliament is enlarged for these "excess mandates". So that party gets more seats than their proportion of the popular vote.
This obviously violates the principle of proportional elections but it gets worse: If that happens in a federal state for party A you can hurt this party by voting for it: Take the same numbers as above but assume A has won 11 constituencies in (i). If there are no further excess mandates, in the end A gets 101 seats in the enlarged parliament of 601 seats. Now, assume A gets an additional proportional vote. It is not impossible that this does not increase A's nationwide total of 100 seats but does increase the proportional share for A's candidates in federal state (i) from 10 to 11. This does not change anything for the representatives from (i), still the 11 constituency candidates go to Berlin, but there is no excess mandate anymore. Thus, overall, A sends only 100 representatives to a parliament of 600, one less than without the additional vote!
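If you prefer to see the arithmetic spelled out, here is the same toy example in a few lines of Python (grossly simplified: made-up numbers, no rounding rules, all other federal states ignored):

# Toy version of the negative-weight scenario above (made-up numbers, no rounding rules,
# all other federal states ignored).

def seats_for_A(national_seats, state_share, constituencies_won):
    # Constituency winners can never be taken away; if they exceed the state's
    # proportional share, the parliament is enlarged by the excess mandates.
    excess = max(0, constituencies_won - state_share)
    return national_seats + excess

# Party A is entitled to 100 seats nationwide and has won 11 constituencies in state (i):
print(seats_for_A(100, state_share=10, constituencies_won=11))  # 101 seats (one excess mandate)

# One extra second vote for A shifts its internal share for state (i) from 10 to 11
# without changing the nationwide total of 100:
print(seats_for_A(100, state_share=11, constituencies_won=11))  # 100 seats -- one seat fewer!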
As a result, in that situation the vote for A has a negative weight: It decreases A's share in the parliament. Usually, this is not so much of a problem, because the weight of your vote depends on what other people have voted (which you do not know when you fill out your ballot) and chances are much higher that your vote has positive weight. So it is still safe to vote for your favourite party.
However, this year there is one constituency in Dresden in the federal state of Saxony where one of the candidates died two weeks before election day. To ensure equal chances in campaigning, the election in that constituency has been postponed for two weeks. This means voters there will know the result from the rest of the country. Now, Saxony is known to be quite conservative, so it is not unlikely that the CDU will have excess mandates there. And this might just yield the above situation: Voters from Dresden might hurt the CDU by voting for them in the popular vote, and they would know if that were the case. It would still be democratic in a sense, it's just that if voters there prefer CDU or FDP they should vote for FDP, and if they prefer SPD or the Greens they should vote for CDU. Still, it's not clear if you can explain that to voters in less than two weeks... I find this quite scary, especially since all polls predict this election to be extremely close and two very different outcomes are within one standard deviation.
If you are interested in alternative voting systems, Wikipedia is a good starting point. There are many different ones and because of the above mentioned theorem they all have at least one drawback.
Yesterday, there was also a brief discussion of whether one should have a system that allows fewer or more of the small parties in parliament. There are of course the usual arguments of stability versus better representation of minorities. But there is another argument against a stable two party system that is not mentioned often: This is due to the fact that parties can actually change their policies to please more voters. If you assume political orientation is well represented by a one dimensional scale (usually called left-right), then the situation of icecream salesmen on a beach could occur: There is a beach of 4km with two competing people selling icecream. Where will they stand? For the customers it would be best if they are each 1km from the two ends of the beach, so nobody would have to walk more than 1km to buy an icecream and the average walking distance is half a km. However, this is an unstable situation as there is an incentive for each salesman to move further to the middle of the beach to increase the number of customers to which he is closer than his competitor.
So, in the end, both will meet in the middle of the beach and customers have to walk up to 2km with an average distance of 1km. Plus if that happens with two parties in the political spectrum they will end up with indistinguishable political programs and as a voter you don't have a real choice anymore. You could argue that this has already taken place in the USA or Switzerland (there for other reasons) but that would be unfair to the Democrats.
I should have had many more entries here about politics and the election, like my role models on the other side of the Atlantic. I don't know why these never materialised (virtualised?). So, I have to be brief: If you can vote on Sunday, think of where the different parties actually have different plans (concrete, rather than abstract "less unemployment" or "more sunshine") and what the current government has done and whether you would like to keep it that way (I just mention the war in Iraq and foreign policy, nuclear power, organic food as a mass market, immigration policy, tax on waste of energy, gay marriage, student fees, reform of academic jobs, renewable energy); your vote should be obvious. Mine is.
The election is over and everybody is even more confused than before. As the obvious choices for coalitions do not have a majority one has to look for the several colourful alternatives and the next few weeks will show us which of the several impossibilities will actually happen. What will definitely happen is that in Dresden votes for the CDU will have negative weight (linked page in German with an excel sheet for your own speculations). So, Dresdeners, vote for CDU if you want to hurt them (and you cannot convince 90% of the inhabitants to vote for the SPD).
Natural scales
When I talk to non-specialists and mention that the Planck scale is where quantum gravity is likely to become relevant, sometimes people get suspicious about this type of argument. If I have time, I explain that to probe smaller length details I would need so much CM energy that I create a black hole and thus still cannot resolve them. However, if I have less time, I just say: Look, it's relativistic, gravitational and quantum, so it's likely that c, G and h play a role. Turn those into a length scale and there is the Planck scale.
If they do not believe this gives a good estimate I ask them to guess the size of an atom: Those are quantum objects, so h is likely to appear, the binding is electromagnetic, so e (in SI units in the combination e^2/4 pi epsilon_0) has to play a role and it comes out of the dynamics of electrons, so m, the electron mass, is likely to feature. Turn this into a length and you get the Bohr radius.
Of course, as all short arguments, this has a flaw: there is a dimensionless quantity around that could spoil dimensional arguments: alpha, the fine-structure constant. So you also need to say that the atom is non-relativistic, so c is not allowed to appear.
You could similarly ask for a scale that is independent of the electric charge, and there it is: Multiply the Bohr radius by alpha and you get the electron Compton wavelength h/mc.
You could as well ask for a classical scale which should be independent of h: Just multiply another power of alpha and you get the classical electron radius e^2/4 pi epsilon_0 m c^2. At the moment, however, I cannot think of a real physical problem where this is the characteristic scale (NB alpha is roughly 1/137, so each scale is two orders of magnitude smaller than the previous).
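In numbers (a quick back-of-the-envelope script with rounded constants, so do not trust the last digits; note that the chain Bohr radius -> times alpha -> times alpha works out exactly for the reduced Compton wavelength hbar/mc):

# Quick back-of-the-envelope check of the scales above (rounded SI constants,
# so do not trust the last digits).
import math

hbar    = 1.055e-34   # J s
c       = 2.998e8     # m / s
G       = 6.674e-11   # m^3 / (kg s^2)
m_e     = 9.109e-31   # kg
e2_4pe0 = 2.307e-28   # e^2 / (4 pi epsilon_0) in J m

planck_length    = math.sqrt(hbar * G / c**3)
bohr_radius      = hbar**2 / (m_e * e2_4pe0)
compton_length   = hbar / (m_e * c)               # (reduced) Compton wavelength
classical_radius = e2_4pe0 / (m_e * c**2)
alpha            = e2_4pe0 / (hbar * c)

print(planck_length)     # ~ 1.6e-35 m
print(bohr_radius)       # ~ 5.3e-11 m
print(compton_length)    # ~ 3.9e-13 m  = alpha * Bohr radius
print(classical_radius)  # ~ 2.8e-15 m  = alpha * (reduced) Compton wavelength
print(alpha)             # ~ 1/137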
Update: Searching Google for "classical electron radius" points to scienceworld and wikipedia, both calling it the "Compton radius". Still, there is a difference of an alpha between the Compton wavelength and the Compton radius.
Reading through the arxiv's old news items I became aware of hep-th/9203227, for which the abstract reads
\Paper: 9203227
From: [email protected] (J. B. Harvey)
Date: Wed 1 Apr 1992 00:25 CST 1992
A solvable string theory in four dimensions,
by J. Harvey, G. Moore, N. Seiberg, and A. Strominger, 30 pp
\We construct a new class of exactly solvable string theories by generalizing
the heterotic construction to connect a left-moving non-compact Lorentzian
coset algebra with a right-moving supersymmetric Euclidean coset algebra. These
theories have no spacetime supersymmetry, and a generalized set of anomaly
constraints allows only a model with four spacetime dimensions, low energy
gauge groups SU(3) and spontaneously broken SU(2)xU(1), and three families
of quarks and leptons. The model has a complex dilaton whose radial mode
is automatically eaten in a Higgs-like solution to the cosmological
constant problem, while its angular mode survives to solve the strong CP
problem at low energy. By adroit use of the theory of parabolic cylinder
functions, we calculate the mass spectrum of this model to all orders in
the string loop expansion. The results are within 5% of measured values,
with the discrepancy attributable to experimental error. We predict a top
quark mass of $176 \pm 5$ GeV, and no physical Higgs particle in the spectrum.
It's quite old and there are some technical problems downloading it.
Local pancake and axis of evil
I do not read the astro-ph archive on a daily basis (nor any astro-* or *-ph archive) but I use liferea to stay up to date with a number of blogs. This news aggregator shows a small window with the headline if a new entry appears in the blogs that I told it to monitor. This morning, it showed an entry from Physics Comments with the title "Local pancake defeats axis of evil". My first reaction was that this must be a hoax, there could not be a paper with that title.
But the paper is genuine. I read the four pages over lunch and it looks quite interesting: When you look at the WMAP power spectrum (or COBE for that matter) you realize that for very low l there is much less power than expected from the popular models. Actually, the plot starts with l=2 because l=0 is the 2.73K uniform background and l=1 is the dipole or vector that is attributed to the Doppler shift due to the motion of us (the sun) relative to the cosmic rest frame.
What I did not know is that the l=2 and l=3 modes have a preferred direction and that the two actually agree (although not with the dipole direction, they are perpendicular to it). This fact was realised by Copi, Huterer, Starkman, and Schwarz (as I am reminded). I am not entirely sure what this means on a technical level but it could be something like "when this direction is chosen as the z-direction, most power is concentrated in the m=0 component". This could be either due to a systematic error or a statistical coincidence, but Vale cites that the latter is unlikely at 99.9% confidence.
This axis encoded in the l=2 and l=3 modes has been termed "axis of evil" by Land and "varying speed of light and I write a book and offend everybody at my old university" Magueijo. In the new paper, Vale offers an explanation for this preferred direction:
His idea is that gravitational lensing can mix the modes, and what appears to be l=2 and l=3 is actually the dipole that is mixed into these higher l by the lensing. To first order, this effect is given by
A = T + grad(T).grad(Psi)
(Jacques is right, I need TeX formulas here!) where T is the true temperature field, A is the apparent temperature field, and Psi is a potential that summarizes the lensing. All these fields are functions of the angular coordinates on the celestial sphere.
He then goes on and uses some spherical mass distribution of twice the mass of the Great Attractor, 30 Mpc away from us, to work out Psi and eventually A. The point is that the l=1 mode of A is two orders of magnitude stronger than l=2 and l=3, so a small mixing could be sufficient.
What I would like to add here is how to obtain some analytical expressions: As always, we expand everything in spherical harmonics. Then
A_lm = T_lm + sum over l'm' and l"m" of T_l'm' Psi_l"m" integral( Y*_lm grad(Y_l'm').grad(Y_l"m") )
I am too lazy, but with the help of MathWorld's pages on spherical harmonics and spherical coordinates you should be able to work out the derivatives and the integral analytically. By choosing coordinates aligned with the dipole, you can assume that in the correction term only the l'=1, m'=0 term contributes.
Finally, the integral of three Y's is given by an expression in Wigner 3j-symbols, and those are non-zero only if the rules for addition of angular momentum hold. Everybody less lazy than myself should be able to work out which A_lm are affected by which modes of Psi_l"m", and it should be simple to invert this formula to find the modes of Psi if you assume all power in A comes from T_10. In particular, Psi_l"m" should only influence modes with l and m not too different from l" and m". By looking at the coefficients, maybe one is even able to see that only the dipole component of Psi has a strong influence, and this only on l=2 and l=3 and only for special values of m.
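For the lazy (like me), here is a little Python check of just the selection-rule part, using SymPy's wigner_3j. The gradient coupling only multiplies the plain triple-Y integral by an l-dependent factor (if I did the integration by parts right), so the selection rules are those of the ordinary Gaunt integral; this snippet is my own back-of-the-envelope addition, not anything from Vale's paper:

```python
# Which A_lm can receive a dipole (l'=1, m'=0) contribution via a lens mode Psi_l''m''?
# The triple-Y integral is proportional to wigner_3j(l,1,l'',-m,0,m'') * wigner_3j(l,1,l'',0,0,0);
# the gradient coupling multiplies it by (2 + l''(l''+1) - l(l+1))/2, which does not change
# the triangle, parity, or m-addition rules.
from sympy.physics.wigner import wigner_3j

def dipole_couples(l, m, l2, m2):
    w_m = wigner_3j(l, 1, l2, -m, 0, m2)   # enforces m = m'' (since m' = 0) and the triangle rule
    w_0 = wigner_3j(l, 1, l2, 0, 0, 0)     # enforces l + 1 + l'' even (parity)
    return w_m != 0 and w_0 != 0

for l in (2, 3):
    for m in range(-l, l + 1):
        psi_modes = [(l2, m2) for l2 in range(1, 5) for m2 in range(-l2, l2 + 1)
                     if dipole_couples(l, m, l2, m2)]
        print(f"A_{l},{m} gets dipole leakage from Psi modes:", psi_modes)
```

The output confirms the guess: A_2m only couples to Psi_1m and Psi_3m, and A_3m only to Psi_2m and Psi_4m, always with m"=m.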
This would then be an explanation of this axis of evil.
Still, the big problem not only remains but gets worse: The observed power in l=2 and l=3 is too small to fit the model, and it gets even smaller if one subtracts the leaked dipole.
Using the trackback mechanism at the arXiv, I found that at CosmoCoffee there is also a discussion of this paper going on. There, people point out that what counts is the motion of the lens relative to the background (Vale is replying to comments there as well).
It seems to me that this should be viewed as a two-stage process: First there is the motion of the lens, and this is what converts power in l=1 (due to the lens's motion) to l=2 and l=3. Then there is our motion, but that affects only l=1 and not l=2 and l=3 anymore. Is that right? In the end, it's our motion that turns l=0 into l=1.
My not so humble opinions on text books
Over at Cosmic Variance, Clifford has asked for opinions on your favourite text book. Joining in only as commentator number 85, I would like to repost my $.02 worth here:
Number one are by far the Feynman Lectures (Vol. I and II). From these I learned how physicists think.
When it comes to a GR text, it's clearly MTW. And yes, a tensor is a machine with slots (egg crates, etc.) and not something with indices that transforms in a particular way (as Weinberg wants to make you believe). [I have to admit, I haven't really looked at Sean's book, yet.]
Both of these are 100% pure fun to read but admittedly, neither of them can probably be used to accompany a lecture course. This is why many people who were busy with their courses never properly used them. Due to biographical oddities (three months of spare time between high school and uni for Feynman, one year of compulsory [German] community/civil service after two years of uni) I actually read these books from beginning to end. I doubt that many other people can claim this. But it's worth it!
In addition, our library had a German/English bilingual edition of the Feynman lectures. So besides physics I could also learn physicists' English (which is different from the literature English I learned in high school).
Some other books: Jackson makes a great mouse pad and I have used it to this end for years. Plus it contains everything you ever wanted to know about boundary value problems. But clearly it's not fun to read.
The writeup of Witten's lectures in the IAS physics lectures for mathematicians contains lots of interesting insights.
And there is (admittedly not a physics text) "Concrete Mathematics" by Graham, Knuth (the Knuth of TeX) and Patashnik. This is a math book for computer scientists. It's my standard reference for formulas containing binomials, for generating functions, sums, recurrence relations, and asymptotic expressions. And (probably thanks to Knuth) it's full of jokes and fun observations. And there are hundreds of problems of a variety of difficulties, rated from "warmups" to "research problems". Therefore it's also a source for Great Wakering.
Finally, there are the books by Landau and Lifshitz. Since my first mechanics course, I have had a strong disliking for them, probably based on my own poor judgement. When I first opened vol. 1, I was confused (admittedly I shouldn't have been) by the fact that they use brackets for the vector product rather than \times like everybody else. OK, OK, it's a Lie bracket, but still, it makes formulas ugly. And then there is the infamous appendix on what a great guy Landau has been.
Marco Zagermann, who was in my year, can still recite the highlights from this appendix: about the logarithmic scale for physicists and how Landau promoted himself on this scale later in his life, and how he only read the abstracts of the papers in the Physical Review and then either judged the papers as pathological or rederived the results of the paper just from the abstract for himself. And there are more pearls like this.
LogVyper and summer holiday
After posting my notes on Lee Smolin's paper on the coffee table, I left for two weeks of summer holidays which I spent in Berlin. The plan was to catch up with all the amenities of a major capital that you just don't get around to on ordinary weekends. We were quite successful with this goal and even spent two days on Usedom (an island in the Baltic sea), and due to the weather got up to date with the movies: Sin City (---), LA Crash (++), Melinda and Melinda (+), Collateral (open air, o), The Island (+), and something on DVD (--) which I have already forgotten.
Update: A. reminds me the DVD was Spanglish and I thought it was (o).
The other thing I did (besides trying to read Thomas Mann's Magic Mountain, where I only got to page 120) is that I got a new dive computer (a Suunto Vyper) and wrote a GNU/Linux program to download and print dive profiles recorded with it: LogVyper/. It's under the GNU General Public License. Have a look and please give feedback!
Bottom line on patents
Now that the EU parliament has stopped the legislation on software patents, it seems time to summarize what we have learned:
The whole problem arises because it is much easier to copy information than to produce it by other means. On the other hand, what's great about information is that you still have it if you give it to somebody else (this is the idea behind open source).
So, there are patents in the first place because you do not want to disadvantage companies that do expensive research and development relative to companies that just save these costs by copying the results of this R&D. The state provides patent facilities because R&D is in its interest.
The owner of the patent, on the other hand, should not use it to block progress and competition in the field. He should therefore sell licenses to the patent at prices that reflect the R&D costs. Otherwise patent law would promote large companies and monopolies, as these are more likely to be able to afford the costs of the administrative overhead of filing a patent.
Therefore, in an ideal world, the patent holder should be forced to sell licenses for a fair price that is at most some specific fraction of the realistic costs of the R&D that led to the patent (and not of the commercial value of the products derived from the patent). Furthermore, the fraction could decrease geometrically with the number of licenses sold so far, such that the 100th license to an idea is cheaper than the first and so on (the idea being that from license fees you could at most asymptotically gain a fixed multiple of your R&D investment).
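To make that last point concrete with completely made-up numbers: if each additional license is sold at a fixed fraction of the previous one's price, the total fee income stays below a fixed multiple of the R&D costs no matter how many licenses are sold:

```python
# Toy illustration of geometrically decreasing license fees (all numbers invented):
# license k costs price_1 * r**(k-1) with 0 < r < 1, so the total income never
# exceeds price_1 / (1 - r), i.e. it asymptotes to a fixed multiple of the R&D costs.
rd_costs = 1_000_000          # hypothetical R&D costs in EUR
multiple = 3                  # cap: at most 3x the R&D investment in total fees
r        = 0.98               # each license is 2% cheaper than the previous one
price_1  = multiple * rd_costs * (1 - r)    # first license: 60,000 EUR

for n in (1, 10, 100, 1000):
    total = price_1 * (1 - r**n) / (1 - r)  # geometric partial sum
    print(f"after {n:4d} licenses: total fees = {total:12,.0f} EUR, "
          f"price of license {n}: {price_1 * r**(n - 1):10,.2f} EUR")
```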
This system would still promote R&D while stopping companies from exploiting their patents. Furthermore it would prevent trivial patents as those require hardly any R&D and are therefore cheap (probably, you should not be able to patent an idea for which the R&D costs were not significantly higher than the administrative costs of obtaining the patent).
Unfortunately, in the real world it is hard to measure the costs of the R&D that is necessary to come up with a certain idea.
Geek girl
Ever worked out your geek code? Consider yourself a geek? Maybe you should reconsider, check out Jeri Ellsworth, especially the Stanford lecture!
PISA Math
Spiegel Online has example problems from the Programme for International Student Assessment (PISA), the international study comparing the problem-solving abilities of pupils in different countries, in which Germany featured in the lower ranks amongst other developing countries.
Today, the individual results for the German federal states have been published; Bavaria comes out first and the city states (Hamburg, Berlin, Bremen, curiously all places where I was at the uni for some time...) came out last.
However, this might not only be due to superior schools in conservative, rural federal states but also due to selection: If you are the child of a blue collar worker, your chances of attending high school are six times higher in the city states than in Bavaria. Plus, bad results in these tests can be traced to socially challenged areas with high unemployment and a high percentage of immigrants. And those just happen to be more common in the cities than in rural areas.
But what I really wanted to talk about is the first math problem (the link is in German). My translation is:
The picture shows the footprints of a walking man. The step length P corresponds to the distance between the rearmost points of two successive footprints. For men, the formula n/P=140 expresses the approximate relation between n and P, where
n = number of steps per minute
P = step length in meters
This is followed by two questions: one asks to solve for P in an example, the other asks for the walker's speed in m/s and km/h given P.
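Just to fix ideas, here is the second question worked out for a made-up step length (these are not the numbers from the actual test):

```python
# Walker's speed from the PISA formula n/P = 140, for an assumed step length.
P = 0.80                       # step length in metres (made-up value)
n = 140 * P                    # steps per minute, from n/P = 140
v_m_per_s = n * P / 60         # (steps/min * metres/step) / (60 s/min)
v_km_per_h = v_m_per_s * 3.6
print(f"n = {n:.0f} steps/min, v = {v_m_per_s:.2f} m/s = {v_km_per_h:.1f} km/h")
```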
OK, these questions can be solved with simple algebra. But what I am worried about is this: n/P = 140 looks really suspicious! Not that the units are lacking (so it's like an astronomer's formula where you are told that you have to insert luminosity in magnitudes and distances in parsec and velocities in quarter nautical miles per week), but if they had inserted units, they would see that 140 is actually 140 steps^2 per minute per metre. What is this? They are suggesting that people who make longer steps have a higher frequency of steps?!? C'mon, this is at least slightly against my intuition.
But this is a math rather than a natural sciences question, so as long as there are numbers, it's probably ok...
PS: The English version is on page 9 of this pdf.
Sailing on Radiation
Due to summer temperatures, I am not quite able to do proper work, so I waste my time in the blogosphere. Alasdair Allen is an astronomer whom I know from diving in the English Channel. In his blog, he reports on the recent fuss about Cosmos-1 not reaching its orbit.
Cosmos-1 was a satellite that was supposed to use huge mirror sails to catch solar radiation for propulsion. Al also mentions a paper claiming that this whole principle cannot work, together with a rebuttal.
In physics 101, we've all seen the light mill that demonstrates that the photons that bounce off the reflecting sides of the panels transfer their momentum to the wheel. So this shows that you can use radiation to move things.
Well, does it?
Gold argues that the second law of thermodynamics is in the way of using this effectively. So what's going on? His point is that once the radiation field and the mirrors are in thermal equilibrium, the mirror would emit photons to both sides and there is no net flux of momentum. On general grounds, you should not be able to extract mechanical energy from heat in a world where everything has the same temperature.
The reason that the light mill works is really that the mill is much colder than the radiation. So, it seems to me that the real question (if Gold is right, which I tend to think, but as I said above, it's hot and I cannot really convince myself that at equilibrium the emission and absorption of photons on both sides balance) is how long it takes for the sails to heat up. If you want to achieve a significant amount of acceleration they should be very light, which on the other hand means the absolute heat capacity is small.
At least the rebuttal, which is written by an engineer of the project, is so vague that I don't think he really understood Gold's argument. But it seems that some physics in the earlier stages of the flight was ill understood, as Cosmos-1 did not make it to orbit...
Back in Bremen and after finishing my tax declaration for 2004, great wakering provided me with an essay by Michael Peskin on the future of scientific publication. Most of it contains the widely accepted arguments about how the arXiv has revolutionized high energy physics, but one aspect was new to me: He proposes that the refereeing process has to be organized by professionals and that roughly 30 percent of the costs of an article in PRL come from this stage of the publishing process. He foresees that this service will always need to be paid for, but his business model sounds interesting: As page charges don't work, libraries should pay a sum (depending on the size of the institution but not on the number of papers) to these publishers, which then accept papers from authors affiliated with those institutions for refereeing.
This would still require a brave move to get this going, but it would have to come from the libraries. And libraries are well aware of the current crisis in the business (PRL is incredibly cheap (2950 US$) compared to NPB, which costs 15460 US$ per year for institutions).
Once we are in the process of reforming the publishing process, I think we should also adopt an idea that I learnt from Vijay Balasubramanian: If a paper gets accepted, the name of the referee should also be made public. This would still protect the referee that rejects a paper but would make the referee accountable and more responsible for accepting any nonsense.
Still more phenomenology
Internet connectivity is worse than ever, so Jan Plefka and I had to resort to an internet cafe over lunch to get online. So I will just give a brief report of what has happened since my last report.
First there was Gordon Kane, who urged everybody to think about how to extract physical data from the numbers that are going to come out of the LHC. He claimed that one should not expect (easily obtained) useful numbers on susy except the fact that it exists. In particular, it will be nearly impossible to deduce Lagrangian parameters (masses etc.) for the susy particles, as there are not enough independent observables at the LHC to completely determine those.
Still, he points out that it will be important to be trained to understand the message that our experimental friends tell us. To this end, there will be the LHC Olympics, where Monte Carlo data of the type that will come out of the experiment will be provided, with some interesting physics beyond the standard model hidden in it, and there will be a competition to figure out what's going on.
Today, Dvali was the first speaker. He presented his model that amounts to an IR modification of gravity (of mass-term type) that is just beyond current observational limits from solar system observations and that would allow for a fit of cosmological data without dark energy. One realization of that modification would be a 5D brane world scenario with a 5D Einstein-Hilbert action and a 4D EH action for the pull-back of the metric.
Finally, there was Paul Langacker, who explained why it is hard to get seesaw-type neutrinos from heterotic Z_3 orbifolds. As everybody knows, in the usual scenario neutrino masses arise from physics around some high energy (GUT, Planck?) scale. Therefore neutrino physics might be the most direct source of information on ultra high energy physics, and one should seriously try to obtain it from string constructions. According to Langacker, this has so far not been possible (intersecting brane worlds typically preserve lepton number and are thus incompatible with Majorana masses, and he showed that none of the models in the class he studied had a usable neutrino sector).
More Phenomenology
Now we are at day three of the string phenomenology conference and it gets better by the day: Yesterday, the overall theme was flux vacua and brane constructions. These classes of models have the great advantage over heterotic constructions, for example, that they are much more concrete (fewer spectral sequences involved), and thus a simple mind like myself has fewer problems understanding them.
Unfortunately, at the same rate as the talks become more interesting (at least to me; I have to admit that I do not get too excited when people present the 100th semi-realistic model that might even have fewer phenomenological shortcomings than the 99th that was presented at last year's conference), the internet connectivity gets worse and worse: In principle, there is a WLAN in the lecture hall and the lobby, and it is protected by a VPN. However, the signal strength is so low that the connection gets lost every other minute, resulting in the VPN client also losing its authentication. As a result, I now type this into my emacs and hope to later cut and paste it into the forms at blogger.com.
Today's session started with two presentations that I am sure many people are not completely convinced by, but they at least had great entertainment value: Mike Douglas reviewed his (and collaborators') counting of vacua, and Dimopoulos presented Split Supersymmetry.
Split Supersymmetry is the idea that the scale of susy breaking is much higher than the weak scale (and the hierarchy is to be explained by some other mechanism) but the fermionic superpartners still have masses around (or slightly above) 100 GeV. This preserves the MSSM's good properties for gauge unification and provides dark matter candidates, but removes all possible problems coming with relatively light scalars (CP, FCNC, proton decay). However, it might also lack good motivation (Update: I was told that keeping this amount of susy prevents the Higgs mass from becoming too large. This is consistent with upper bounds coming from loop corrections etc.). But as I learned, at the weak scale there are only four coupling constants that all come from tan(beta), so they should run and unify at the susy scale.
But the most spectacular prediction would be that the LHC would produce gluinos at a rate of about one per second, and as they decay through the heavy scalars they might well have a lifetime of several seconds. As they are colour octets, they either bind to q q-bar or to qqq and thus form R-mesons and R-baryons. These (at least if charged, which a significant fraction would be) would get stuck inside the detector (for example in the muon chambers) and decay later into jets that would be easy to observe and do not come from the interaction area of the detector. So, stay tuned for a few more years.
Talking of interesting accelerator physics beyond the standard model, Gianguido Dall'Agata urges me to spread a rumour that some US accelerator (he doesn't remember which) sees evidence for a Z' that is a sign of another SU(2) group (coupling to right-handed fermions?) that is broken at a much higher scale than the usual SU(2)-left. He doesn't remember any more details, but he promised to dig them up. So again, stay tuned.
Finally, I come to the favourite topic of at least one reader at Columbia, The Landscape(tm). Mike gave a review talk that evolved from a talk that he has already given a number of times, so there was not much news. I haven't really followed this topic over the last couple of months, so I was updated on a number of aspects, and one of them I find worth discussing. I have to admit it is not really new, but at least to me it got a new twist.
It is the question of which a priori assumptions you are willing to make. Obviously you want to exclude vacua with N=2 susy as they come with exact moduli spaces. That is, there is a continuum of such vacua, and these would dominate any finite number, however large it (or better: its exponent) might be. Once you accept that you have to make some assumption to exclude some "unphysical" vacua, you are free to exclude further: It is common in this business to assume four non-compact dimensions and to put an upper bound on the size of the compact ones (or a lower bound on KK masses) for empirical reasons. Furthermore, one could immediately exclude models that have, for example, unwanted ("exotic") chiral matter. To me (being no expert in these counting matters), intuition from intersecting branes and their T-duals, magnetized branes, suggests that this restriction would help to get rid of really many vacua, and in the end you might end up with a relatively small number of remaining ones.
Philosophically speaking, by accepting a priori assumptions (aka empirical observations) one gives up the idea of a theory of everything, a theory that predicts every observation you make. Be it the amount of susy, the number of generations, the mass of the electron (in Planck units), the spectrum of the CMB, the number of planets in the solar system, the colour of my car. But (as I have argued earlier) a hope for such a TOE would have been very optimistic anyway. This would be a theory that has only one single solution to its equations of motion (if that classical concept applies). Obviously, this is a much stricter requirement than to ask for a theory without parameters (a property I would expect from a more realistic TOE). All numerical parameters would actually be vevs of some scalar fields that are determined by the dynamics and might even be changing, or at least varying between different solutions.
So, we will have to make a priori assumptions. Does this render the theory unpredictive? Of course not! At least not if we can make more observations than the data we had to assume. For example, we could ask for all string vacua with standard model gauge group, four large dimensions, susy breaking at around 1 TeV, and maybe an electron mass of 511 keV and some weak coupling constant. Then maybe we end up with an ensemble of N vacua (hopefully a small number). Then we could go ahead (if we were really good calculators) and check which of these is realized, and from that moment on we would make predictions. So it would be a predictive theory, even if the number of vacua would be infinite if we dropped any of our a priori assumptions.
Still, for the obvious reasons, we would never be able to prove that we have the correct theory and that there could not be any other, but this is just because physics is an empirical science and not math.
I think, so far, it is hard to disagree with what I have said (although you might not share some of my hopes/assumptions). It becomes really controversial if one starts to draw statistical conclusions from the distribution of vacua, as in the end we only live in a single one. This becomes especially dangerous when combined with the a priori assumptions: These are of course most effective when they go against the statistics, as then they rule out a larger fraction of vacua. It is tempting to promote any statement which goes against the statistics into an a priori assumption and to celebrate any statement that is in line with the weight of the distribution. Try for yourself with the statement "SUSY is broken at a low scale". This all leaves aside the problem that so far nobody has had a divine message about the probability distribution between the 10^300 vacua and why it should be flat.
Dynamics of COVID-19 progression and the long-term influences of measures on pandemic outcomes
Yihong Lan1,
Li Yin2 &
Xiaoqin Wang3
Emerging Themes in Epidemiology volume 19, Article number: 10 (2022) Cite this article
The pandemic progression is a dynamic process, in which measures yield outcomes, and outcomes in turn influence subsequent measures and outcomes. Due to the dynamics of pandemic progression, it is challenging to analyse the long-term influence of an individual measure in the sequence on pandemic outcomes. To demonstrate the problem and find solutions, in this article, we study the first wave of the pandemic—probably the most dynamic period—in the Nordic countries and analyse the influences of the Swedish measures relative to the measures adopted by its neighbouring countries on COVID-19 mortality, general mortality, COVID-19 incidence, and unemployment. The design is a longitudinal observational study. The linear regressions based on the Poisson distribution or the binomial distribution are employed for the analysis. To show that analysis can be timely conducted, we use table data available during the first wave. We found that the early Swedish measure had a long-term and significant causal effect on public health outcomes and a certain degree of long-term mitigating causal effect on unemployment during the first wave, where the effect was measured by an increase of these outcomes under the Swedish measures relative to the measures adopted by the other Nordic countries. This information from the first wave has not been provided by available analyses but could have played an important role in combating the second wave. In conclusion, analysis based on table data may provide timely information about the dynamic progression of a pandemic and the long-term influence of an individual measure in the sequence on pandemic outcomes.
Since the World Health Organization (WHO) declared coronavirus disease 2019 (COVID-19) a pandemic on 11 March 2020, countries around the globe have adopted different strategies to combat the pandemic. The progression of a pandemic is a complex stochastic process, in which a sequence of measures are implemented and pandemic outcomes occur between measures. Here, the pandemic outcomes include COVID-19 incidence, COVID-19 related admission to hospital or intensive care, COVID-19 death, general death, or economic indicators such as unemployment. The sequence of measures may be, for instance, a sequence of vaccine doses or a sequence of administrative interventions.
The pandemic progression is dynamic in the sense that the pandemic outcomes are results of the earlier measures and reasons for the subsequent measures and outcomes. Due to the dynamics of pandemic progression, a challenge arises in analysing the long-term influence of an individual measure in the sequence on pandemic outcomes. Probably the most dynamic progression occurred during the first wave of the pandemic, which completed a cycle of rise, plateau, decline, and return to baseline for public health outcomes such as COVID-19 deaths. The Nordic countries (Sweden, Denmark, Norway, and Finland) followed nearly the same time line of the progression during the first wave between March 2020 and August 2020, so we focus on the Nordic countries.
During the first wave, Sweden was representative of one class of strategies, emphasizing the mitigation of transmission and taking stepwise mild measures [1,2,3]. On the other hand, the other Nordic countries, i.e., Denmark, Finland, and Norway, were representative of the common strategies, emphasizing the suppression of transmission and taking invasive measures [3,4,5,6]. In the initial period, public health outcomes in Sweden were far poorer than in the other Nordic countries but gradually improved in the later period, i.e., curves were flattened, leading to a sense of optimism. Without considering the dynamics of the pandemic progression and the long-term influence of the early measure, Sweden continued with the mild measures upon the arrival of the second wave, leading to a surge of poor public health outcomes. For instance, the COVID-19 mortality per 100,000 individuals in Sweden versus the other Nordic countries was 69.71 versus 13.40 between September 2020 and February 2021 (the second wave) [7,8,9,10,11,12].
There have been a large variety of analyses comparing the Nordic countries for the effectiveness of their strategies in combating COVID-19. One class of analyses is descriptive, which uses the daily or weekly count of public health and economic outcomes without adjustment for characteristic differences and updating pandemic situations [3,4,5,6]. Another class of analyses is statistical, which allows for adjustment of the characteristic differences and updating pandemic situations [13,14,15]. There are also mathematical analyses [16, 17] and political and cultural analyses [18,19,20]. However, few analyses involve the dynamics of pandemic progression and the long-term influences of individual measures in the sequence.
In this article, we demonstrate the dynamic progression of a pandemic and the long-term influence of an individual measure on pandemic outcomes. The pandemic outcomes include COVID-19 mortality, general mortality, COVID-19 incidence, and unemployment. The influence is described by the causal effect, which is defined in this article as an increase in the summary outcomes under different sequences of the Swedish and common measures. Furthermore, table data is used to demonstrate that analyses may provide timely information about the dynamic progression of a pandemic.
Causal effects of Swedish strategy relative to common strategy on public health outcomes
Dynamic progression of the pandemic
Here, we study the public health outcomes of COVID-19 mortality, general mortality, and COVID-19 incidence. We follow the population in the Nordic countries during weeks 1–35, 2020. Please note that week 1, 2020 corresponds to the dates from 30 December 2019 to 5 January 2020.
The initial period of the pandemic took place around weeks 10–18 in the Nordic countries. Weeks 10–35 completed a cycle of rise, plateau, decline, and return to baseline for the public health outcomes, and they are considered as the first wave of the pandemic [2, 3]. Because it is impossible to know when measures became effective, we divide the entire follow-up into four periods of approximately equal length: weeks 1–9, 10–18, 19–26, and 27–35. Let period \(t\) \((t=1, 2, 3)\) indicate the three periods during weeks 10–35: period \(1\) for weeks 10–18, period \(2\) for weeks 19–26, and period \(3\) for weeks 27–35. Please note that period 2 is one week shorter than periods 1 and 3. In the Supplementary Information, we conduct a sensitivity analysis to show that when alternatively dividing the entire follow-up into weeks 1–9, 10–17, 18–26, and 27–35, the result only differs slightly, and the conclusion is the same. To examine the sensitivity of our methodology to periodization, we divide the follow-up into periods of different lengths and obtain essentially the same result and conclusion (results not shown).
During weeks 1–9, the pandemic had not yet broken out, so no measure was taken, and there was only outcome \({y}_{0}\) for general mortality in population \({p}_{0}\). During period \(1\) (weeks 10–18), the exposure was \({z}_{1}=1\) for the Swedish measure or \({z}_{1}=0\) for the common measure and yielded outcome \({y}_{1}\) for population \({p}_{1}\). From here on, the common measures refer to those adopted by the other Nordic countries. During period \(2\) (weeks 19–26), the exposure was \({z}_{2}=1\) for the Swedish measure or \({z}_{2}=0\) for the common measure and yielded outcome \({y}_{2}\) for population \({p}_{2}\). During period \(3\) (weeks 27–35), the exposure was \({z}_{3}=1\) for the Swedish measure or \({z}_{3}=0\) for the common measure and yielded outcome \({y}_{3}\) for population \({p}_{3}\).
Outcome \({y}_{0}\) represents the initial health status and has an influence on outcomes \({y}_{1}\), \({y}_{2}\) and \({y}_{3}\). Thus, it is a stationary covariate and may confound the causal effects of exposures \({z}_{1}\), \({z}_{2}\) and \({z}_{3}\). Outcome \({y}_{1}\) represents the updating health status and pandemic situation during exposure \({z}_{1}\) and has an influence on outcomes \({y}_{2}\) and \({y}_{3}\). Thus, it is also a time-dependent covariate between exposures \({z}_{1}\) and \({z}_{2}\) and may confound the causal effects of exposures \({z}_{2}\) and \({z}_{3}\). Outcome \({y}_{2}\) represents the updating health status and pandemic situation during exposure \({z}_{2}\) and has an influence on outcome \({y}_{3}\). Thus, it is also a time-dependent covariate between exposures \({z}_{2}\) and \({z}_{3}\) and may confound the causal effect of exposure \({z}_{3}\).
Confounding adjustment and the assumption of no hidden confounding covariates
The Nordic countries are similar to one another in terms of economy, culture, and society. So, most of the stationary covariates, such as gender, education, and socioeconomic status, have similar distributions among these countries and thus do not confound the effects of exposures \({z}_{1}\), \({z}_{2}\) and \({z}_{3}\). As a result, there is no need to adjust for these covariates as is common practice in causal inference. Table 1 lists some characteristics of the populations in the Nordic countries. As seen in this table, the initial general mortality \({y}_{0}\) and population density \(x\) differ considerably in different regions of these countries and may confound the causal effects. Therefore, we divide Sweden into six regions: Stockholm, Skåne, Gothenburg, Halland, Västmanland, and the rest of Sweden. Because COVID-19 mortality is low in Denmark, Finland, and Norway, we do not divide these countries into small regions. For the COVID-19 incidence, we divide Sweden into only two regions (Stockholm and the rest of Sweden) due to the data quality for the number of tested people for weeks 10–22.
Table 1 Characteristics of study populations in regions of the Nordic countries before the breakout of COVID-19: (1) Stockholm, (2) Skåne, (3) Göteborg, (4) Halland, (5) Västmanland, (6) the rest of Sweden, (1–6) Sweden as a whole, (7) Denmark, (8) Norway, (9) Finland
There may exist other potential confounding covariates, such as immigration status. Because different definitions of immigration status are used in these countries, it is difficult to adjust for immigration status without individual-level data. However, such covariates are often highly associated with population density, and as an approximation, we consider only population density as the confounding covariate in addition to the time-dependent outcomes \({y}_{0}, { y}_{1},\) and \({y}_{2}\). A summary of population densities, exposures, outcomes (covariates) and populations is given in Table 2 together with the probability models for the outcomes.
Table 2 A summary of population densities, exposures, outcomes, and the populations during different periods of the first wave
To summarize the confounding situation in the pandemic progression, we have the assumption of no hidden confounding covariates: (a) conditional on population density \(x\) and outcome \({y}_{0}\), no other covariates confound the causal effect of an exposure sequence \({(z}_{1}, {z}_{2},{z}_{3})\); (b) conditional on population density \(x\) and outcome \({y}_{1}\), no other covariates confound the causal effect of an exposure sequence \({(z}_{2},{z}_{3})\); (c) conditional on population density \(x\) and outcome \({y}_{2}\), no other covariates confound the causal effect of exposure \({z}_{3}\). The assumption implies that to study the causal effects of exposures, we need to compare the outcomes of the exposures on the same level of population density and the most recent outcome prior to these exposures. In the Discussion section, we will discuss the limitation of our analysis linked to this assumption. In the Data sources section, we will describe the table data used for our analysis in detail.
Analytic strategy
We will estimate two types of causal effects of the Swedish measures relative to the common measures: sequential causal effects and long-term causal effects. The sequential causal effect compares Swedish sequence versus common sequence for a summary outcome, for instance, Swedish sequence \({(z}_{1}, {z}_{2},{z}_{3})=(1, 1, 1)\) versus common sequence \((0, 0, 0)\) for summary outcome \({y}_{1}+{y}_{2}+{y}_{3}.\) Both the exposure sequences and the summary outcomes are observed for these causal effects, so we can apply regression to estimate them.
The long-term causal effect compares, for instance, the mixed sequence \({(z}_{1}, {z}_{2},{z}_{3})=(1, 0, 0)\) to the common sequence \((0, 0, 0)\) for the summary outcome \({y}_{1}+{y}_{2}+{y}_{3}\). Because the mixed sequence cannot be observed, we cannot apply regression to estimate the long-term causal effect. Sequential causal inference, due to Robins [21], was developed to estimate long-term causal effects under unobserved sequences of exposures by using observed data. Notably, the new general formula (G-formula) reveals a rather intuitive observation that the causal effect of an exposure sequence must be the sum of contributions of individual exposures in the sequence [22]. The new G-formula allows us to estimate the long-term causal effect from the estimated sequential causal effect without introducing additional modeling assumptions. In the following subsections, we describe the analyses and the results in detail.
Sequential causal effects of the Swedish sequences relative to common sequences
We estimate the following three sequential causal effects of interest: (i) an increase in summary outcome \({y}_{1}+{y}_{2}+{y}_{3}\) during periods 1, 2, and 3 (weeks 10–35) under the Swedish sequence \({(z}_{1}, {z}_{2}, {z}_{3})=(1, 1, 1)\) relative to the common sequence \((0, 0, 0)\), (ii) an increase in summary outcome \({y}_{2}+{y}_{3}\) during periods 2 and 3 (weeks 19–35) under the Swedish sequence \(({z}_{2},{z}_{3})=(1, 1)\) relative to the common sequence \((0, 0)\), and (iii) an increase in outcome \({y}_{3}\) during period 3 (week 27–35) under the Swedish measure \({z}_{3}=1\) relative to the common measure \(0\). In the context of the pandemic, the exposure sequence takes either the Swedish sequence or the common sequence. The outcomes are observed under the exposure sequences in causal effects (i), (ii) and (iii), so we can use regression to estimate these causal effects [21, 22]. The results are summarized in Table 3. A detailed description of the probability models and regression models is given in the Method section below.
Table 3 Estimate, 95% CI, and p-value for sequential causal effects of the Swedish sequence relative to the common sequence on summary public health outcomes
As shown from causal effect (i) in Table 3, the Swedish strategy performed far worse than the common strategy throughout the complete follow-up (weeks 10–35) for all public health outcomes: it led, per 100,000 individuals, to 42.6 (95% Confidence Interval: 41.0–44.1) more COVID-19 deaths, 25.0 (18.7–30.7) more general deaths and 19,094.5 (18,916.6–19,212.3) more COVID-19 incidences. As shown from causal effects (ii) and (iii), the Swedish strategy improved its performance during weeks 19–35 and 27–35, particularly for general mortality: it led, per 100,000 individuals, to 20.0 (11.1–28.2) fewer general deaths during week 19–35 and 17.6 (12.6–22.5) fewer general deaths during week 27–35. The reason might be that the Swedish public health system regained its usual level of general medical care after the early pandemic period of weeks 10–18. In the Supplementary Information, we conduct a sensitivity analysis to show that the improvement was not due to population change caused by more general deaths during weeks 10–18.
Long-term causal effects of the Swedish measures relative to common measures
To reveal the critical role of the early measures in combating the pandemic, we then estimate two long-term causal effects (iv) and (v). Causal effect (iv) is an increase in summary outcome \({y}_{1}+{y}_{2}+{y}_{3}\) during periods 1, 2, and 3 (weeks 10–35) under the mixed sequence \({(z}_{1}, {z}_{2},{z}_{3})=(1, 0, 0)\) relative to the common sequence \(({0}, 0, 0)\), and it describes the long-term influence of the Swedish measure during period 1 on the summary outcome during periods 1, 2, and 3. Causal effect (v) is an increase in summary outcome \({y}_{2}+{y}_{3}\) during periods 2 and 3 (weeks 19–35) under the mixed sequence \(({z}_{2},{z}_{3})=(1, 0)\) relative to the common sequence \((0, 0)\), and it describes the long-term influence of the Swedish measure during period 2 on the summary outcome during periods 2 and 3.
Here, the outcomes are not observable because the population is never exposed to mixed sequence \({(z}_{1}, {z}_{2},{z}_{3})=(1, 0, 0)\) or \(({z}_{2},{z}_{3})=(1, 0)\), so we cannot use regression to estimate long-term causal effects (iv) and (v). However, by the new G-formula [22], sequential causal effect is a sum of contributions from individual exposures in the sequence, and therefore we obtain the equality that causal effect (ii) is equal to the sum of causal effects (v) and (iii). The equality is illustrated by the fact that the sequences in causal effects (ii), (v) and (iii) are \(({z}_{2},{z}_{3})=(1, 1)\), \(({z}_{2},{z}_{3})=(1, 0)\) and \({z}_{3}=1\). By using this equality, we obtain the estimate of causal effect (v) from the estimates of causal effects (ii) and (iii). Similarly, causal effect (i) is equal to the sum of causal effects (iv), (v), and (iii). We obtain the estimate of causal effect (iv) from the estimates of causal effects (i), (v) and (iii). A detailed description of this method is given in the Method section below. The estimates of causal effects (iv) and (v) are presented in Table 4.
Table 4 Estimate, 95% CI, and p-value for long-term causal effects of the Swedish measure relative to the common measure on public health outcome during different periods
Table 4 shows that the early Swedish measure had a long-term and significant influence on public health outcomes. As shown from causal effects (iv), the Swedish measure during the early period (weeks 10–18) led, per 100,000 individuals, to 25.1 (23.0–27.0) more COVID-19 deaths, 44.3 (34.5–54.2) more general deaths and 10,422.1 (8553.8–12,290.5) more COVID-19 incidences for the whole first wave (weeks 10–35). From causal effects (iv), (v) in Table 4, and (iii) in Table 3 together, we see a continual improvement in the Swedish measures relative to the common measures along weeks 10–18, 19–26, and 27–35.
Causal effects of the Swedish strategy relative to common strategy on unemployment
Here, we study the economic outcome of unemployment in an analogy to the public health outcomes. We divide the complete follow-up (quarters 1–3) into three periods: quarters 1, 2, and 3. During quarter 1, no measures were taken, and even if some measures had been taken, they would not have influenced unemployment in the current quarter, so there is only unemployment \({y}_{1}\) from labour force \({p}_{1}\) in quarter 1. During quarter 2, the exposure is \({z}_{2}=1\) for Swedish measure or \({z}_{2}=0\) for common measure, yielding unemployment \({y}_{2}\) in labour force \({p}_{2}\). During quarter 3, the exposure is \({z}_{3}=1\) for Swedish measure or \({z}_{3}=0\) for common measure, yielding unemployment \({y}_{3}\) in labour force \({p}_{3}\).
To adjust for confounding, we have the following assumption of no hidden confounding covariates: (a) conditional on population density \(x\) and outcome \({y}_{1}\), no other covariates confound the causal effect of exposure sequence \({(z}_{2},{z}_{3})\); (b) conditional on population density \(x\) and outcome \({y}_{2}\), no other covariates confound the causal effect of exposure \({z}_{3}\). With the assumption and the data, we will estimate the following three causal effects for unemployment: (i) an increase in summary unemployment \({y}_{2}+{y}_{3}\) during quarters 2–3 under the Swedish sequence \(({z}_{2},{z}_{3})=(1, 1)\) relative to the common sequence \((0, 0)\), (ii) an increase in unemployment \({y}_{3}\) during quarter 3 under the Swedish measure \({z}_{3}=1\) relative to the common measure \(0\), and (iii) an increase in summary unemployment \({y}_{2}+{y}_{3}\) during quarters 2–3 under the mixed sequence \(({z}_{2},{z}_{3})=(1, 0)\) relative to the common sequence \((0, 0)\). Causal effects (i) and (ii) are sequential. Causal effect (iii) is long-term. In the Method section, we describe the probability model and the regression model in detail. The estimates of causal effects (i), (ii), and (iii) are presented in Table 5.
Table 5 Estimate, 95% CI, and p-value for causal effects of the Swedish strategy relative to the common strategy on unemployment during different periods
As shown from causal effects (i) and (ii) in Table 5, the Swedish strategy performed worse than the common strategy during quarters 2–3 and quarter 3 for unemployment: it led, per 100,000 individuals, to 1177.0 (1088.8–1265.1) more unemployment during quarters 2–3 and 528.4 (480.2–576.5) more unemployment during quarter 3. As shown from causal effect (iii), the early measures during quarter 2 had a certain long-term mitigating influence on unemployment, yielding a mild increase of 648.6 (555.7–751.5) unemployment per 100,000 individuals during quarters 2–3.
Discussion
This article analyses the dynamic progression of the first wave in the Nordic countries and has two major findings. First, the early mild measure had a long-term and significant influence on public health outcomes. Second, the early mild measure led to a certain degree of long-term mitigating influence on unemployment. The article demonstrates that the long-term influence of an individual measure in the sequence can be sizable and significant and may play an important role in combating COVID-19 or a future pandemic.
The analysis in this article contributes to combating the pandemic in two aspects. First, to the best of our knowledge, the dynamics of pandemic progression has not been sufficiently studied. Our analysis demonstrates that the long-term influence of an individual measure in the sequence can be estimated in the framework of sequential causal inference [21, 22]. Second, the data used for our analysis is the same table data as used for descriptive analyses. This implies that our analysis can be conducted at the same time as descriptive analysis. We believe that, by the same method, one can analyse the dynamic progression of the pandemic under vaccination, where the exposure sequence can be a sequence of vaccine doses, the outcome can be COVID-19 incidence, admission to hospital or intensive care, or COVID-19 death, the study population can be one nation, and the data can be table data recording the frequencies of vaccine sequences and outcomes over time. The causal effect of an individual dose over a prolonged period has been one of the major issues in combating the current COVID-19 [23].
Though our analyses based on table data can be timely, they have several limitations in comparison to individual-level data. First, some of the covariates are individual-based, such as income and education, and it is impossible to assess their confounding influence on the causal effect without individual-level data. Second, it is difficult to conduct quality control of pandemic outcomes. In this article, we use COVID-19 incidences among tested people during different periods to study the transmission. Ideally, we might use admission to inpatient care and intensive care as the outcome. However, different countries had different policies for admission; for instance, Denmark had a much higher admission rate than Sweden, so it would be problematic to use admission as an outcome of the transmission. The second limitation exists for both table data and individual-level data.
Data sources
As recommended by the WHO, all four Nordic countries identify COVID-19 death as death for which a positive COVID-19 PCR test was recorded within 30 days. Unemployment is measured as the number of unemployed persons aged 15–74, and employment as the number of employed persons aged 15–74. These numbers are produced by the labour force surveys conducted in individual countries following the European Union Council Regulation. The labour force is the sum of employed and unemployed persons. Population density is measured as the number of inhabitants per square kilometre.
The Public Health Agency of Sweden and the National Board of Health and Welfare are two national agencies accountable to the Swedish government. The Public Health Agency has an overall responsibility for the control of communicable diseases, such as COVID-19. From its public webpage (https://www.folkhalsomyndigheten.se/the-public-health-agency-of-sweden/), we obtained the number of tested people and COVID-19 incidences in different regions of Sweden. The National Board of Health and Welfare has a general responsibility for social welfare and healthcare including knowledge support and statistics. From its public website (https://www.government.se/government-agencies/national-board-of-health-and-welfare--socialstyrelsen/), we obtained the COVID-19 mortality and general mortality. Statistics Sweden is a government agency that produces official statistics. From its public website (https://www.scb.se/en), we obtained the population size, population density, unemployment, and labour force.
The Danish Health Authority is the national agency for health care. Statistics Denmark is the national agency that produces official statistics. All table data relevant to our article were obtained from the public site of Statistics Denmark (https://www.dst.dk/en).
The Finnish Institute for Health and Welfare is the national agency for healthcare and welfare in Finland. From its public website (https://thl.fi/en/web/thlfi-en), we obtained the numbers of tested people and COVID-19 incidences and COVID-19 mortality. Statistics Finland is the national agency that produces official statistics. From its public website (https://www.stat.fi/index_en.html), we obtained general mortality, population size, population density, unemployment, and labour force.
The Norwegian Institute of Public Health is the national agency for public healthcare in Norway. From its public website (https://www.fhi.no/en/), we obtained the numbers of tested people and COVID-19 incidences, COVID-19 mortality, and general mortality. Statistics Norway is the national agency that produces official statistics. From its public website (https://www.ssb.no/en), we obtained population size, population density, unemployment, and labour force. In the Supplementary Information, we provide table data relevant to this article.
Method of estimating causal effects on public health outcomes
Here, we estimate the causal effect of the Swedish strategy relative to the common strategy on COVID-19 mortality, general mortality, and COVID-19 incidence. COVID-19 mortality and general mortality are measured as the number of deaths during a follow-up of person weeks in the population. They follow the Poisson distribution conditional on the history of previous exposures and covariates. COVID-19 incidence is measured as the number of cases among a group of tested persons. It follows the binomial distribution conditional on the history of previous exposures and covariates. The regression models are given in detail below.
Causal effect (i) is an increase in summary outcome \(s={y}_{1}+{y}_{2}+{y}_{3}\) during periods 1, 2, and 3 under the Swedish sequence \({(z}_{1}, {z}_{2},{z}_{3})=(1, 1, 1)\) relative to the common sequence \({(z}_{1}, {z}_{2},{z}_{3})=(0, 0, 0)\). Let \({r}_{0}={y}_{0}/{p}_{0}\), which is the mortality rate during weeks 1–9. Let \(w\) be the variable that describes the exposure sequence during periods 1, 2, and 3 such that \(w=1\) for the Swedish sequence \((1, 1, 1)\) or 0 for the common sequence \((0, 0, 0)\). The regression model for the expectation of the summary outcome \(s={y}_{1}+{y}_{2}+{y}_{3}\) is
$$E\left(s | x,{r}_{0}, w\right)=({p}_{1}+{p}_{2}+{p}_{3})\left(\alpha +\gamma x+{\delta r}_{0}+\beta w\right).$$
Here, the link function is the identity function; the covariates are density \(x\) and mortality rate \({r}_{0}={y}_{0}/{p}_{0}\) during weeks 1–9; the exposure is \(w\) (Swedish or common sequence); the amount \({p}_{t} (t=1, 2, 3)\) of person weeks during period \(t\) is fixed as a constant. We use a linear model with only the main effect of exposure \(w\) for the following reasons. First, the exact functional form for the nuisance variables \(x\) and \({r}_{0}\) is unknown, and a reasonable assumption is a linear form. Second, by sensitivity analysis, the effect modification of the main effect of exposure \(w\) by \(x\) and \({r}_{0}\) is small.
$$\text{causal\, effect (i)}=E\left(s | x,{r}_{0}, w=1\right)-E\left(s | x,{r}_{0 }, w=0\right)=\left({p}_{1}+{p}_{2}+{p}_{3}\right)\beta .$$
From the estimate of \(\beta\), we obtain the estimate of causal effect (i) under the assumption of no hidden confounding covariates.
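For illustration, a minimal sketch of fitting this identity-link Poisson model by maximum likelihood is given below. The data are synthetic placeholders with the same structure as Table 2 (nine regional units, person-weeks fixed as constants, rates expressed per 100,000 person-weeks) and are not the study data; a full analysis would use a standard generalized linear model routine.

```python
# Sketch: identity-link Poisson regression E(s) = ptot * (alpha + gamma*x + delta*r0 + beta*w),
# fitted by maximum likelihood on synthetic placeholder data (NOT the study data).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_units = 9                                      # e.g. six Swedish regions plus three countries
x    = rng.uniform(0.02, 5.0, n_units)           # density, in 1,000 inhabitants per km^2
r0   = rng.uniform(15.0, 25.0, n_units)          # baseline deaths per 100,000 person-weeks
w    = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0])     # 1 = Swedish sequence, 0 = common sequence
ptot = rng.uniform(50.0, 2500.0, n_units)        # person-weeks p1+p2+p3, in units of 100,000

true = np.array([2.0, 0.1, 0.2, 1.5])            # alpha, gamma, delta, beta (synthetic truth)
X = np.column_stack([np.ones(n_units), x, r0, w])
s = rng.poisson(ptot * (X @ true))               # observed summary outcome y1+y2+y3

def neg_loglik(theta):
    mu = ptot * (X @ theta)                      # Poisson mean under the identity link
    if np.any(mu <= 0):                          # keep the mean positive
        return np.inf
    return np.sum(mu - s * np.log(mu))           # negative log-likelihood up to a constant

start, *_ = np.linalg.lstsq(X, s / ptot, rcond=None)    # least-squares starting values
fit = minimize(neg_loglik, start, method="Nelder-Mead",
               options={"maxiter": 10000, "xatol": 1e-10, "fatol": 1e-10})
alpha, gamma, delta, beta = fit.x
print("estimated beta (excess rate per 100,000 person-weeks):", beta)
# Under the assumption of no hidden confounding covariates,
# causal effect (i) = (p1 + p2 + p3) * beta, as in the identity above.
```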
Causal effect (ii) is an increase in summary outcome \({y}_{2}+{y}_{3}\) during periods 2 and 3 under the Swedish sequence \(({z}_{2},{z}_{3})=(1, 1)\) relative to the common sequence \((0, 0)\). Using the same method as for causal effect (i), we obtain the regression model for the expectation of the summary outcome \({y}_{2}+{y}_{3}\) to estimate causal effect (ii). In the model, the link function is the identity function; the covariates are density \(x\) and rate \({r}_{1}={y}_{1}/{p}_{1}\) during period 1; the exposure is the variable that describes the exposure sequence during periods 2 and 3 (Swedish or common sequence); the amount \({p}_{t} (t=1, 2, 3)\) of person weeks during period \(t\) is fixed as a constant.
Causal effect (iii) is an increase in outcome \({y}_{3}\) during period 3 under the Swedish measure \({z}_{3}=1\) relative to the common measure \(0\). Similarly, we obtain the regression model for the expectation of outcome \({y}_{3}\) to estimate causal effect (iii). Here, the link function is the identity function; the covariates are density \(x\) and rate \({r}_{2}={y}_{2}/{p}_{2}\) during period 2; the exposure is \({z}_{3}\); the amount \({p}_{2}\) of person weeks during period \(2\) and \({p}_{3}\) during period \(3\) are fixed as constants.
Causal effect (iv) is an increase in summary outcome \({y}_{1}+{y}_{2}+{y}_{3}\) during periods 1, 2, and 3 under the mixed sequence \(({z}_{1}, {z}_{2},{z}_{3})=(1, 0, 0)\) relative to the common sequence \({(z}_{1}, {z}_{2},{z}_{3})=(0, 0, 0)\). Here, the exposures \({z}_{2}, {z}_{3}\) in the mixed sequence are set at 0, namely, common measures, so this causal effect describes the long-term influence of the Swedish measure \({z}_{1}=1\) during period 1 on the summary outcome throughout periods 1, 2, and 3. Causal effect (v) is an increase in the summary outcome \({y}_{2}+{y}_{3}\) during weeks 19–35 under the mixed sequence \(( {z}_{2},{z}_{3})=(1, 0)\) relative to the common sequence \(({z}_{2},{z}_{3})=(0, 0)\). It describes long-term influence of the Swedish measure \({z}_{2}=1\) during period 2 on the summary outcome throughout periods 2 and 3.
The population was never exposed to the mixed sequences in causal effects (iv) and (v), so their outcomes were not observed, and these causal effects cannot be estimated by regressions. On the other hand, by applying Theorems 1 and 2 of Wang and Yin [22], we obtain the equality that
$$\text{causal effect (ii) } =\, \text{causal effect (v) }+\text{causal effect (iii).}$$
The equality reveals a rather intuitive observation that sequential causal effect is a sum of the contributions from individual exposures in the sequence. Please note that the exposure sequences in causal effects (ii), (v) and (iii) are \(({z}_{2},{z}_{3})=(1, 1)\), \(({z}_{2},{z}_{3})=(1, 0)\) and \({z}_{3}=1\) respectively. Based on this equality, the estimate of causal effect (v) is obtained by using the obtained estimates of causal effects (ii) and (iii). Similarly, by applying Theorems 1 and 2 of Wang and Yin [22], we obtain the equality that
$$\text{causal effect (i) }=\text{causal effect (iv) }+\text{causal effect (v) + causal effect (iii).}$$
Please note that the exposure sequences in causal effects (i), (iv), (v) and (iii) are \(({z}_{1},{z}_{2},{z}_{3})=(1, 1, 1)\), \(({z}_{1},{z}_{2},{z}_{3})=(1, 0, 0)\), \(({z}_{2},{z}_{3})=(1, 0)\) and \({z}_{3}=1\), respectively. Based on this equality, the estimate of causal effect (iv) can be obtained by using the obtained estimates of causal effects (i), (v) and (iii).
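Rearranging these two equalities gives explicit expressions for the two effects that cannot be estimated directly:
$$\text{causal effect (v)}=\text{causal effect (ii)}-\text{causal effect (iii)},$$
$$\text{causal effect (iv)}=\text{causal effect (i)}-\text{causal effect (v)}-\text{causal effect (iii)}=\text{causal effect (i)}-\text{causal effect (ii)}.$$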
The confidence intervals and p-values of causal effects (i)–(v) are obtained by Monte Carlo simulation based on the probability models and the fitted regression models.
Method of estimating causal effects on unemployment
Here, we estimate the causal effect of the Swedish strategy relative to the common strategy adopted by the other Nordic countries on unemployment. Unemployment is measured as the number of unemployed persons in the labour force (the sum of employed and unemployed persons). Therefore, we assume that unemployment follows a binomial distribution. The regression models are described below.
Causal effect (i) is an increase in summary unemployment \({s=y}_{2}+{y}_{3}\) during quarters 2–3 under the Swedish sequence \(( {z}_{2},{z}_{3})=(1, 1)\) relative to the common sequence \(( {z}_{2},{z}_{3})=(0, 0)\). Let \({r}_{1}={y}_{1}/{p}_{1}\) be the unemployment rate during quarter 1. Denote the exposure by \(w\), which takes 1 for the Swedish sequence or 0 for the common sequence. Then, the regression model for the expectation of \({s=y}_{2}+{y}_{3}\) is
$$E\left(s | x,{r}_{1}, w\right)=\left({p}_{2}+{p}_{3}\right)\left(\alpha +\gamma x+\delta {r}_{1}+\beta w\right).$$
Under the assumption of no hidden confounding covariates, we have
$$\text{causal effect (i)}=E\left(s | x,{r}_{1}, w=1\right)-E\left(s | x,{r}_{1}, w=0\right)=\left({p}_{2}+{p}_{3}\right)\beta .$$
From the estimate of \(\beta\), we obtain an estimate of causal effect (i) under the assumption of no hidden confounding covariates. Similarly, we can estimate causal effect (ii), which is an increase in outcome \({y}_{3}\) under the Swedish measure \({z}_{3}=1\) relative to the common measure \({z}_{3}=0\) during quarter 3.
Causal effect (iii) is an increase in summary outcome \({y}_{2}+{y}_{3}\) under the mixed sequence \(( {z}_{2},{z}_{3})=(1, 0)\) relative to the common sequence \(( {z}_{2},{z}_{3})=(0, 0)\) during quarters 2–3. By applying Theorems 1 and 2 of Wang and Yin [22], we obtain the equality that
$$\text{causal effect (i) }=\text{causal effect (iii) }+\text{ causal effect (ii)}$$
Please note that the exposure sequences in causal effects (i), (iii) and (ii) are \(({z}_{2},{z}_{3})=(1, 1)\), \(({z}_{2},{z}_{3})=(1, 0)\) and \({z}_{3}=1\), respectively. With the equality, we have that causal effect (iii) is equal to
$$\text{causal effect (iii) = causal effect (i)}-\text{causal effect (ii)}$$
Therefore, we obtain the estimate of causal effect (iii) from those of causal effects (i) and (ii). The confidence intervals and p-values of causal effects (i), (ii), and (iii) are obtained using Monte Carlo simulation based on the probability models and the obtained regression models.
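A rough sketch of such a Monte Carlo procedure for the binomial unemployment model, assuming hypothetical inputs (the design matrix, labour-force sizes and counts below are placeholders): outcomes are resampled from the fitted probabilities, the model is refitted on each replicate, and percentiles of the replicated effect estimates give the confidence interval.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_beta(counts, labour, X):
    """Least-squares fit of the identity-link rate model; returns the exposure coefficient."""
    coef = np.linalg.lstsq(X, counts / labour, rcond=None)[0]
    return coef[-1]  # beta corresponds to the last column (exposure w)

# Hypothetical inputs (placeholders): columns of X are [1, x, r1, w].
X = np.array([[1.0, 25.0, 0.070, 1.0],
              [1.0, 18.0, 0.060, 0.0],
              [1.0, 14.0, 0.050, 0.0],
              [1.0, 16.0, 0.050, 0.0]])
labour = np.array([5.0e6, 2.8e6, 3.0e6, 2.7e6])
counts = np.array([4.5e5, 2.1e5, 2.2e5, 2.0e5])

beta_hat = fit_beta(counts, labour, X)
p_hat = counts / labour  # plug-in unemployment probabilities for this sketch

# Parametric Monte Carlo: resample binomial counts and refit the regression.
sims = np.array([fit_beta(rng.binomial(labour.astype(int), p_hat).astype(float), labour, X)
                 for _ in range(2000)])
lo, hi = np.percentile(sims, [2.5, 97.5])
print("beta =", beta_hat, "approx. 95% CI:", (lo, hi))
```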
The data and material are publicly available and can be accessed as described in the Method section. A data set relevant to the analysis is given in the Supplementary Information together with the code producing the result.
Kavaliunas A, et al. Swedish policy analysis for COVID-19. Health Policy Technol. 2020;9:598–612. https://doi.org/10.1016/j.hlpt.2020.08.009.
Ludvigsson JF. The first eight months of Sweden's COVID-19 strategy and the key actions and actors that were involved. Acta Padiatrica. 2020;109:2459–71. https://doi.org/10.1111/apa.15582.
Erica AA. Comparison of COVID-19 epidemiological indicators in Sweden, Norway, Denmark, and Finland. Scand J Public Health. 2021;49:69–78. https://doi.org/10.1177/1403494820980264.
Zheng Q. HIT-COVID, a global database tracking public health interventions to COVID-19. Sci Data. 2020;7:286. https://doi.org/10.1038/s41597-020-00610-2.
Haug N, et al. Ranking the effectiveness of worldwide COVID-19 government interventions. Nat Hum Behav. 2020;4:1303–12. https://doi.org/10.1038/s41562-020-01009-0.
Kontis V. Magnitude, demographics and dynamics of the effect of the first wave of the COVID-19 pandemic on all-cause mortality in 21 industrialized countries. Nat Med. 2020;26:1919–28.
Bjermer L, et al. "Flockimmunitet är en farlig och orealistisk coronastrategi". Dagens Nyheter, Stockholm. https://www.dn.se/debatt/flockimmunitet-ar-en-farlig-och-orealistisk-coronastrategi/. Accessed 15 May 2020.
BBC, Coronavirus: Sweden says WHO made 'total mistake' by including it in warning. https://www.bbc.com/news/world-europe-53190008. Accessed 26 June 2020.
Claeson M, Hanson S. The Swedish COVID-19 strategy revisited. Lancet. 2021;397:P1619. https://doi.org/10.1016/S0140-6736(21)00885-0.
Claeson M, Hanson S. COVID-19 and the Swedish enigma. Lancet. 2020;397:P259. https://doi.org/10.1016/S0140-6736(20)32750-1.
Vogel G. Sweden's gamble. Science. 2020;9:159–83.
Habib H. Covid-19: what Sweden taught Scandinavia for the second wave. BMJ. 2020;371: m4456. https://doi.org/10.1136/bmj.m4456.
Brauner JM. Inferring the effectiveness of government interventions against COVID-19. Science. 2021;371:1–8.
Soltesz K, et al. The effect of interventions on COVID-19. Nature. 2020;588:E26–8. https://doi.org/10.1038/s41586-020-3025-y.
Flaxman S, et al. Estimating the effects of non-pharmaceutical interventions on COVID-19 in Europe. Nature. 2020;584:257–61. https://doi.org/10.1038/s41586-020-2405-7.
Sayers, F. Swedish expert: Why lockdowns are the wrong policy—the post. UnHerd. 17 April. https://unherd.com/thepost/coming-up-epidemiologist-prof-johan-giesecke-shares-lessons-from-sweden/. Accessed 4 May 2020.
Britton T, Ball F, Trapman P. A mathematical model reveals the influence of population heterogeneity on herd immunity to SARS-CoV-2. Science. 2020;369:846–9.
Lindström M. The COVID-19 pandemic and the Swedish strategy: epidemiology and postmodernism. SSM Popul Health. 2020;11:100643.
Lindström M. The new totalitarians: The Swedish COVID-19 strategy and the implications of consensus culture and media policy for public health. SSM Popul Health. 2021;14: 100788. https://doi.org/10.1016/j.ssmph.2021.100788.
Irwin RE. Misinformation and de-contextualization: international media reporting on Sweden and COVID-19. Global Health. 2020;16:62. https://doi.org/10.1186/s12992-020-00588-x.
Hernan MA, Robins JM. Causal inference: what if. Boca Raton: Chapman & Hall/CRC; 2020.
Wang X, Yin L. New G-formula for the sequential causal effect and blip effect of treatment in sequential causal inference. Ann Stat. 2020;48:138–60.
Koelle K, et al. The changing epidemiology of SARS-CoV-2. Science. 2022;375:1116–21.
Yin L. Data and code for COVID-19 mortality, total mortality, COVID-19 incidence and unemployment in Scandinavian countries. 2021. Zenodo. https://doi.org/10.5281/zenodo.5136641.
The authors acknowledge partial financial support from the Swedish Research Council (Grant Number 2019-02913).
Open access funding provided by University of Gävle.
Suntar Research Institute, Singapore, Singapore
Yihong Lan
Karolinska Institutet, Solna, Sweden
Li Yin
University of Gävle, Gävle, Sweden
Xiaoqin Wang
The authors contributed equally. All authors read and approved the final manuscript.
Correspondence to Xiaoqin Wang.
Not involved.
Additional file 1.
1) Sensitivity analysis for the impact of an alternative follow-up split on the estimation. 2) Sensitivity analysis for the impact of population change on the estimation. 3) Data and code, available in Zenodo at http://doi.org/10.5281/zenodo.5136641 [24].
Lan, Y., Yin, L. & Wang, X. Dynamics of COVID-19 progression and the long-term influences of measures on pandemic outcomes. Emerg Themes Epidemiol 19, 10 (2022). https://doi.org/10.1186/s12982-022-00119-6
Effect of oxygen stoichiometry on the structure, optical and epsilon-near-zero properties of indium tin oxide films
Shilin Xian,1 Lixia Nie,1 Jun Qin,1 Tongtong Kang,1 ChaoYang Li,2 Jianliang Xie,1 Longjiang Deng,1 and Lei Bi1,*
1National Engineering Research Center of Electromagnetic Radiation Control Materials, University of Electronic Science and Technology of China, Chengdu 610054, China
2State Key Laboratory of Marine Resource Utilization in South China Sea, Hainan University, No. 58, Renmin Avenue, Haikou, Hainan Province, 570228, China
*Corresponding author: [email protected]
Lei Bi https://orcid.org/0000-0002-2698-2829
Shilin Xian, Lixia Nie, Jun Qin, Tongtong Kang, ChaoYang Li, Jianliang Xie, Longjiang Deng, and Lei Bi, "Effect of oxygen stoichiometry on the structure, optical and epsilon-near-zero properties of indium tin oxide films," Opt. Express 27, 28618-28628 (2019)
Original Manuscript: July 15, 2019
Revised Manuscript: August 22, 2019
Manuscript Accepted: September 3, 2019
Transparent conductive oxide (TCO) films showing epsilon-near-zero (ENZ) properties have attracted great research interest due to their unique property of electrically tunable permittivity. In this work, we report the effect of oxygen stoichiometry on the structure, optical and ENZ properties of indium tin oxide (ITO) films fabricated under different oxygen partial pressures. Using spectroscopic ellipsometry (SE) with fast data acquisition capabilities, we observed modulation of the material index and ENZ wavelength under electrostatic gating. Using a two-layer model based on the Thomas-Fermi screening model and the Drude model, the optical constants and Drude parameters of the ITO thin films are determined during the gating process. The maximum carrier modulation amplitude ΔN of the accumulation layer is found to vary significantly depending on the oxygen stoichiometry. Under an electric field gate bias of 2.5 MV/cm, the largest ENZ wavelength modulation, up to 27.9 nm at around 1550 nm, is observed in ITO thin films deposited at an oxygen partial pressure of ${P_{{O_2}}}$ = 10 Pa. Our work provides insights into the optical properties of ITO during the electrostatic gating process for electro-optic modulator (EOM) applications.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Fig. 1. (a) XRD pattern of ITO thin films deposited under different oxygen partial pressures. Within the figure, black, red, green and blue curves represent thin films deposited under oxygen partial pressures of 0.1 Pa, 1 Pa, 10 Pa and 30 Pa, respectively. (b) AFM image ($1 \times 1\;\mu {m^2}$) of ITO thin films deposited at 10 Pa.
Fig. 2. Refractive index n (solid line) and extinction coefficient ${\kappa }$ (dotted line) as a function of wavelength for ITO thin films fabricated under different oxygen partial pressures.
Fig. 3. (a) Schematic of the ITO/SiO2/Si thin-film stack structure. A DC bias of 0 V to 5 V was applied across the SiO2 layer during the ellipsometry characterizations. (b) Cross-sectional view of the two-layer model under applied bias. (c) Optical micrograph of the ITO film pattern under DC bias. The green circle indicates the location of the ellipsometer's incident light spot.
Fig. 4. Refractive index (left axis) and extinction coefficient (right axis) of ITO films deposited under ${P_{{O_2}}}$ of (a) 1 Pa, (c) 10 Pa, and (e) 30 Pa, with different applied bias in a wavelength range from 210 to 1690 nm. Also shown are zoom-in views of the corresponding refractive index and extinction coefficient for ITO deposited at ${P_{{O_2}}}$ of (b) 1 Pa, (d) 10 Pa and (f) 30 Pa, respectively.
Fig. 5. Tuning the ENZ wavelength of ITO thin films under different applied bias for ITO thin films deposited at: (a) 1 Pa. (b) 10 Pa and (c) 30 Pa.
(1) $\varepsilon(\omega)=\varepsilon_{\infty}-\dfrac{\omega_{P}^{2}}{\omega^{2}+i\Gamma\omega}+\dfrac{f_{1}\omega_{1}^{2}}{\omega_{1}^{2}-\omega^{2}+i\Gamma_{1}\omega}$
(2) $\omega_{P}^{2}=\dfrac{N_{0}e^{2}}{\varepsilon_{0}m^{*}}$
(3) $t_{TF}=\left(\dfrac{\varepsilon_{ITO}\,\varepsilon_{0}h^{2}}{4\pi^{2}m^{*}e^{2}}\right)^{1/2}\left(\dfrac{\pi^{4}}{3N_{0}}\right)^{1/6}$
(4) $n_{acc}=\dfrac{Q}{eA\,t_{TF}}$
Table 1. Comparison of Hall effect measurements and Drude-Lorentz model fitting parameters for ITO thin films fabricated under different oxygen partial pressures

${P_{{O_2}}}$ (Pa) | Hall: electron concentration (×10^20 cm^-3) | Hall: mobility (cm^2 V^-1 s^-1) | Drude-Lorentz: electron concentration (×10^20 cm^-3) | Drude-Lorentz: mobility (cm^2 V^-1 s^-1)
0.1 | 7.3 | 20.1 | 9.7 | 28.2
1 | 5.1 | 24.2 | 8.3 | 30.8
10 | 3.2 | 26.4 | 6.6 | 32.2
Table 2. The variation of the parameters obtained by the Drude model fitting under different oxygen pressures with an applied gate voltage

${P_{{O_2}}}$ | Gate voltage (V) | ${\varepsilon _\infty }$ | Carrier concentration (×10^20 cm^-3) | Damping rate (×10^14 rad/s)
1 Pa | 0 | 4.10 | 8.31 | 1.63
10 Pa | 0 | 4.10 | 6.62 | 1.56
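As a rough cross-check of the Table 2 parameters against the reported ENZ wavelength, the Drude part of Eq. (1) can be evaluated directly. The sketch below assumes an ITO electron effective mass of m* = 0.35 m_e, which is a typical literature value and is not stated here; with the 10 Pa, zero-bias parameters it returns an ENZ wavelength near 1550 nm, consistent with the abstract.

```python
import numpy as np

# Physical constants (SI units)
e, eps0, m_e, c = 1.602e-19, 8.854e-12, 9.109e-31, 2.998e8

# Drude parameters for the 10 Pa film at 0 V (Table 2); m* is an assumed value.
eps_inf = 4.10
N = 6.62e20 * 1e6      # carrier concentration, m^-3
gamma = 1.56e14        # damping rate, rad/s
m_eff = 0.35 * m_e     # assumed effective mass (not given in the paper)

wp2 = N * e**2 / (eps0 * m_eff)            # Eq. (2): squared plasma frequency
# Re[eps(w)] = eps_inf - wp^2 / (w^2 + gamma^2); ENZ where the real part crosses zero.
w_enz = np.sqrt(wp2 / eps_inf - gamma**2)
lam_enz = 2.0 * np.pi * c / w_enz
print(f"ENZ wavelength ~ {lam_enz * 1e9:.0f} nm")
```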
Optimized Calibration Method for Analog Parametric Temperature Sensors
Oleksandr Vovna | Ivan Laktionov* | Antonina Andrieieva | Eduard Petelin | Oleksandr Shtepa | Hanna Laktionova
Department of Electronic Engineering, Faculty of Computer-Integrated Technologies, Automatization, Electrical Engineering and Radioelectronics, SHEE 'Donetsk National Technical University' of the Ministry of Education and Science of Ukraine, UA85300, Shybankova sq., 2, Pokrovsk, Ukraine
Department of Occupational Safety, Mining Faculty, SHEE 'Donetsk National Technical University' of the Ministry of Education and Science of Ukraine, UA85300, Shybankova sq., 2, Pokrovsk, Ukraine
[email protected]
The aim of the article is to develop scientific and applied foundations for improving the accuracy of measurement systems for the temperature characteristics of technological processes by improving the calibration methods of analog parametric temperature sensors. The article presents and investigates an improved mathematical model of the thermistor conversion characteristic based on the Steinhart-Hart equation. The possibility of calibrating thermistors using two reference points in an operating temperature range from 0 to 100°C, with an interconnected choice of their values, is mathematically grounded and experimentally proved. The results of the studies show that the proposed method can reduce the approximation uncertainty by a factor of 3 compared with existing approaches. The presented results made it possible to synthesize a software component of information measurement systems to automate the calibration of parametric resistive sensors. The obtained results can serve as a scientific and practical basis for optimizing and adapting the metrological certification of resistive temperature sensors.
calibration model, thermistor conversion characteristic, measurement error, approximation function
Precise, continuous, non-destructive temperature monitoring of physical objects and processes using budget analog sensors is a knowledge-intensive and relevant scientific and applied task in many practical applications, since current requirements for the reliability, accuracy and efficiency of the functional and metrological provision of technological automation systems can be met only when the characteristics of the information measurement subsystems are regulated. Under measurement uncertainty and in the presence of destabilizing factors, this can be achieved by introducing improved methods and algorithms for calibrating sensors in measurement systems, followed by adaptive intelligent processing of the measurement information.
Temperature is one of the most frequently measured physical quantities. Due to the variety of processes in which it is necessary to obtain reliable measurement information and take into account temperature characteristics of objects, many different purpose-oriented sensors for measuring temperature have appeared. In addition to direct measurements, a number of derivative physical parameters can be indirectly determined from temperature monitoring results, as well as the destabilizing effect of temperature on other parameters and characteristics can be taken into account.
Also, relevance of the research in the subject field of digitalization and intellectualization of industrial enterprises through development and implementation of state-of-the-art computer-integrated technologies for monitoring and controlling technology process parameters is due to the high science-intensive procedure of optimizing dynamic thermal processes [1-3].
A significant number of studies are devoted to solving the urgent scientific and applied problem of improving methods and algorithms for increasing the accuracy of analog temperature sensors of technological processes in industrial enterprises. For example, some articles present the basic provisions for organizing calibration procedures for modern NTC thermistors [4, 5]. The main technical and metrological characteristics of serial thermistor models over a wide temperature range are also stated [5]. Other articles have theoretically substantiated and experimentally proved the possibility of using the Steinhart-Hart equation to approximate the conversion characteristics of analog resistive temperature sensors [6-9].
Results of the studies substantiated the possibility of using a two-parameter equation which describes the conversion characteristic of thermistors and is a simplified model of the Steinhart-Hart equation in various temperature ranges [10]. The possibility of using thermistors as primary measuring transducers of modern information measurement systems is mathematically substantiated and experimentally proved [11].
Other research papers assess the main metrological characteristics of temperature measuring instruments and substantiate possible directions for improving the accuracy of modern temperature sensors of physical objects through various hardware and software solutions [12-15]. The main approaches to organizing and implementing procedures for the metrological certification and verification of measuring instruments are described in [16].
As a result of the analysis and logical generalization of the above mentioned studies, it was found that most authors do not pay enough attention to the problems of substantiating the choice of optimal temperature ranges for calibrating sensors from the point of view of the criterion of minimizing the approximation error of experimental observation results. Thus, the research on the development of scientific and applied foundations of metrological provision of current computerized systems for monitoring temperature of technological processes necessitates the continuation of the research in this subject area.
The main purpose of the article is to develop scientific and applied foundations for improving accuracy of current information measurement systems of temperature parameters of technological processes by improving the calibration methods of analog parametric temperature sensors.
The object of the study is computerized methods of improving the efficiency and information content of the calibration procedures of current analog resistive temperature sensors.
The subject of study is electrical processes that occur in the measurement temperature channels.
The obtained research results can be used as a scientific and practical basis for optimization and adaptation of metrological certification procedures for resistive temperature sensors with their subsequent implementation in automation systems of technological processes.
Section 2 describes the used research methods and tools, Section 3 explains all scientific and practical findings, Section 4 explains conclusions from the present work and suggestions for future investigations.
Full-scale experimental studies were performed in specialized laboratories of the State Higher Educational Institution "Donetsk National Technical University" (Pokrovsk, Ukraine) using standardized laboratory materials and tools. During the laboratory tests, three series of observation results were obtained while keeping the methods, means and measurement conditions identical. The time interval for obtaining one series of experimental data was 6 hours, and the polling period of the measuring temperature channels was 2 seconds. The temperature range of the study is from 0 to +100°C at 10°C intervals. The metrological certification of the temperature meter uses an automated comparison with a calibration device (the temperature sensing device of the certified ZenithLab WH-1/2/4/6 water bath [17] with microprocessor temperature control in the range from 0 to +100°C and an absolute error of not more than 0.10°C). The block diagram of the proposed algorithm for the metrological certification of the implemented temperature measuring instrument based on the KY-013 analog temperature sensor is shown in Figure 1; the structural diagram of the laboratory setup is shown in Figure 2.
Figure 1. Block diagram of the metrological certification of the analog temperature sensor
The sensitive element of the KY-013 analog temperature sensor is the NTC thermistor. This model is compatible at the design and software levels with the Arduino UNO R3 serial microcontroller board and provides temperature measurements in the range from –55 to +125°С with an absolute error of not more than ±0.50°С [18]. The functional module for collecting and primary processing of experimental observation results is implemented on the Arduino UNO R3 board with expansion devices: DS1302 real-time clock and a micro-SD card. Discrete control of temperature conditions in the range from 0 to +100°С at 10°С intervals was carried out using the ZenithLab WH-1/2/4/6 water thermostat.
Since the KY-013 module is a parametric type sensor, therefore, to convert the resistance change (RT) into a proportional voltage signal (UOUT), the KY-013 thermistor is included in the voltage divider circuit, as shown in Figure 3. The control of power supply circuits is implemented using a highly stable laboratory voltage source (US) with a nominal value of 5.0 V.
When planning the experimental studies and implementing the laboratory tests of the developed microprocessor means for measuring temperature, specialized application packages were used. The functional purpose and sequence of using the software components are shown in Figure 4.
Figure 2. Block diagram of the laboratory setup
Figure 3. Electrical schematic diagram of the temperature meter based on the KY-013 sensor
Figure 4. Generalized chart of the software component design of the experimental research
3. Research Results
The analytical dependence of the output voltage changes in the measuring circuit (see Figure 3) was obtained based on Ohm's and Kirchhoff's laws:
$U_{\mathrm{OUT}}(T)=U_{\mathrm{S}} \cdot \frac{R(T)}{R1+R(T)}$ (1)
where, UOUT is the output voltage of the circuit as a function of temperature T; US is the stabilized reference voltage, the value of which is 5.0 V; R1 is the resistance of the limiting resistor, the value of which is 9850 Ohm; R is the resistance of the KY-013 semiconductor thermistor as a function of temperature.
To reduce the random error component of the voltage measurement result, the magnitude of which is proportional to the temperature change, the mathematical methods for processing the results of non-equal measurements were used [19-21]. The studies were carried out by repeated observations of the voltage at the control points, each of which corresponds to temperature: +5.4; +10.4; + 20.4; +29.9; +39.9; +49.7; +59.5; +70.2; +78.4; + 87.7; +97.3 ºС. Figure 5 shows a graph of changes in standard deviations with temperature changes at the control points.
Figure 5. Dependence of standard deviations on temperature changes
At least 100 observations were made at each of the control points. During the experimental studies, three series of observation results were obtained. After mathematical processing, weighted average results of voltage measurements (UOUT) at each of the temperature control points were obtained (see Figure 6).
Figure 6. Conversion characteristic of the measurement circuit with the KY-013 thermistor
The resistance change in the KY-013 thermistor with the change in the output voltage of the circuit (see Figure 3) was obtained on the basis of the analytical dependence (1):
$R(T)=R1 \cdot \frac{U_{\mathrm{OUT}}(T)}{U_{\mathrm{S}}-U_{\mathrm{OUT}}(T)}$ (2)
The analytical expression (2) is a nonlinear function of the voltage change, the magnitude of which is proportional to temperature. The Steinhart-Hart equation is used to mathematically describe the change in resistance of the semiconductor sensor as a result of temperature changes:
$\frac{1}{T}=\sum_{i=0}^{\infty} a_{i} \cdot \ln ^{i} R$ (3)
where, R is the resistance of the KY-013 semiconductor thermistor, Ohm; T is the absolute temperature, K; ai are the coefficients of the Steinhart-Hart equation (3), the values of which depend on the sensor parameters and the temperature range.
When using the Steinhart-Hart Eq. (3) in practical calculations, the higher-order terms a2·ln²R, a3·ln³R, etc. are neglected because their contribution to the calculation result is quite small [10, 11, 22].
To determine the values of the Steinhart-Hart Eq. (3), the reduced values of resistance R and temperature T to their reference values R0 and T0=273.15K were used:
$\frac{1}{T}-\frac{1}{T_{0}}=a_{1} \cdot\left(\ln R-\ln R_{0}\right)=a_{1} \cdot \ln \frac{R}{R_{0}}=\frac{1}{B} \cdot \ln \frac{R}{R_{0}}$
$\frac{1}{T}=\frac{1}{T_{0}}+\frac{1}{B} \cdot \ln \frac{R}{R_{0}}=\left(\frac{1}{T_{0}}-\frac{1}{B} \cdot \ln R_{0}\right)+\frac{1}{B} \cdot \ln R$ (4)
where, $a_{0}=\frac{1}{T_{0}}-\frac{1}{B} \cdot \ln R_{0}$; $a_{1}=\frac{1}{B}$; B are parameters of the simplified Steinhart-Hart Eq. (3), which take into account changes in R with T for the sensor:
$R(T)=R_{0} \cdot \exp \left(B \cdot \frac{T_{0}-T}{T_{0} \cdot T}\right)$ (5)
To obtain the values of the approximation parameters of Eq. (5) with a minimum error, the Levenberg-Marquardt method was used. The method is implemented in MathCAD using the function genfit. As a result, the values of the parameters $R_{0}=6714 \mathrm{Ohm}$ and $B=3173 \mathrm{K}$ were obtained, the use of which in Eq. (5) provides the minimum approximation error with the average value $\overline{\Delta R}=1.6$ Ohm and a standard deviation of $\overline{\Delta R}$ from $R$ of not more than $\sigma_{\overline{\Delta R}}=\pm 19$ Ohm in the temperature range from 0 to $+100^{\circ} \mathrm{C}$.
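The same least-squares fit can be reproduced with scipy instead of MathCAD's genfit; the following is a minimal sketch in which the resistance values are synthetic (generated from the reported parameters plus noise) and serve only to illustrate the fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

T0 = 273.15  # 0 degrees Celsius in kelvin

def steinhart_hart_R(t_c, R0, B):
    """Simplified Steinhart-Hart model, Eq. (5), with temperature t_c in degrees Celsius."""
    T = T0 + t_c
    return R0 * np.exp(B * (T0 - T) / (T0 * T))

# Control-point temperatures from the experiment; resistances here are synthetic placeholders.
t_c = np.array([5.4, 10.4, 20.4, 29.9, 39.9, 49.7, 59.5, 70.2, 78.4, 87.7, 97.3])
rng = np.random.default_rng(1)
R = steinhart_hart_R(t_c, 6714.0, 3173.0) * (1.0 + 0.003 * rng.standard_normal(t_c.size))

(R0_fit, B_fit), _ = curve_fit(steinhart_hart_R, t_c, R, p0=(7000.0, 3000.0))
print(f"R0 = {R0_fit:.0f} Ohm, B = {B_fit:.0f} K")  # close to the reported 6714 Ohm and 3173 K
```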
Calculation of the values of the measured temperature (Tcalc,°С) was performed in the Arduino Mega 2560 based on (4) using the obtained values of the parameters of the simplified Eq. (5):
$T_{\text{calc}}=\dfrac{1}{\dfrac{1}{T_{0}}+\dfrac{1}{B}\cdot \ln \dfrac{R1\cdot \dfrac{U_{\text{OUT}}}{U_{\text{S}}-U_{\text{OUT}}}}{R_{0}}}-T_{0}.$ (6)
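A minimal Python equivalent of the conversion programmed into the microcontroller, combining Eq. (2) and Eq. (6) with the constants reported above (R1 = 9850 Ohm, US = 5.0 V, R0 = 6714 Ohm, B = 3173 K); the example voltage is illustrative only.

```python
import math

R1, US = 9850.0, 5.0                 # divider resistor (Ohm) and supply voltage (V)
R0, B, T0 = 6714.0, 3173.0, 273.15   # fitted Steinhart-Hart parameters and 0 C in kelvin

def temperature_from_voltage(u_out):
    """Convert the divider output voltage (V) to temperature (C) via Eq. (2) and Eq. (6)."""
    r = R1 * u_out / (US - u_out)                      # Eq. (2): thermistor resistance
    t_abs = 1.0 / (1.0 / T0 + math.log(r / R0) / B)    # Eq. (4)/(6): absolute temperature
    return t_abs - T0

print(round(temperature_from_voltage(1.0), 1), "C")    # roughly room temperature
```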
During the research, the absolute temperature measurement error (ΔT, °C) of the KY-013 thermistor was determined in the range from 0 to +100°C; it is shown in Figure 7, where the experimental data are indicated with •.
Analysis of the obtained results (see Figure 7) shows that the simplified parametric Eq. (4) for the KY-013 analog semiconductor thermistor ensures an absolute temperature measurement error of not more than ±0.20°C in the range from 0 to +60°C. When the measurement range is extended from 0 to +100°C, the absolute measurement error does not exceed ±1.6°C.
Figure 7. Changes in the value ΔT with temperature in the range from 0 to +100°C for the KY-013 thermistor
Figure 8. The normalized Steinhart-Hart equation with ranges for selection of two reference standard temperatures
During the experiments, the following task arose: to simplify the thermistor calibration process by reducing the number of calibration temperature values used. The simplified Steinhart-Hart Eq. (5) has two parameters; therefore, to calculate their values, it is enough to use a system of two equations based on two calibration values of temperature and resistance. This minimum number of control values allows the parameters of Eq. (5) to be obtained with a minimum error. Consequently, the temperature ranges from which these calibration values are selected need to be justified.
To determine the B parameter of the Steinhart-Hart equation (5), it was normalized:
$r(T)=\frac{R(T)}{R_{0}}=\exp \left(B \cdot \frac{T_{0}-\left(T_{0}+T\right)}{T_{0} \cdot\left(T_{0}+T\right)}\right)=\exp \left(-B \cdot \frac{T}{T_{0} \cdot\left(T_{0}+T\right)}\right)$ (7)
where, r is the normalized Steinhart-Hart equation.
Based on the obtained normalized Eq. (7), temperature ranges with the calibration values were determined (see Figure 8): Tbegin is an initial temperature (the first reference standard), ºС; Tend1 and Tend2 is the range of values in which the value of the second reference standard temperature falls, ºС.
During the experimental studies, the values of the initial and final reference standard temperatures were established (see Figure 9). Using these standards ensures the minimum average value (from ±0.4°C to ±0.27°C) and standard deviation (from ±0.84°C to ±0.89°C) of the approximation error of the static thermistor conversion characteristic. Outside these ranges, the average approximation error increases two- to threefold, and the standard deviation increases by 20–30%.
Figure 9. Temperature ranges with a minimum approximation error
Figure 10. The ratio of the normalized Steinhart-Hart equations at the initial and final values of the second reference standard temperature
To determine the thermistor conversion characteristic based on the initial (Tbegin) and final (Tend) reference standard temperatures, it is proposed to use the ratio of the normalized Steinhart-Hart Eq. (7) (see Figure 10) at the indicated temperatures:
$\Delta r\left(T_{\text {begin }}, T_{\text {end }}\right)=\frac{r\left(T_{\text {end }}\right)}{r\left(T_{\text {begin }}\right)}=\frac{\exp \left(-B \cdot \frac{T_{\text {end }}}{T_{0} \cdot\left(T_{0}+T_{\text {end }}\right)}\right)}{\exp \left(-B \cdot \frac{T_{\text {begin }}}{T_{0} \cdot\left(T_{0}+T_{\text {begin }}\right)}\right)}$, (8)
where, Tend is the second reference standard temperature in the range from Tend1 to Tend2, ºС.
When processing the experimental data, the range of values of the function (8), from $\Delta r_{\text {end } 1}^{\text {exper }}$ to $\Delta r_{\text {end } 2}^{\text {exper }}$, at which the approximation error of the thermistor conversion characteristic is minimal, was established. Using the initial temperature reference value Tbegin=5.4°C and the final reference in the range from Tend1=29.9°C to Tend2=59.3°C, the value Δr (8) varies from $\Delta r_{\text {end } 1}^{\text {exper }}$=0.3984 to $\Delta r_{\text {end } 2}^{\text {exper }}$=0.1577. The results of the experimental studies to determine $\Delta r$ in the range from Tend1 to Tend2 in order to ensure the minimum approximation error are summarized in Table 1.
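The reported endpoints of Δr can be checked directly from Eq. (7) and Eq. (8) with the fitted B = 3173 K; a short sketch:

```python
import math

T0, B = 273.15, 3173.0

def r_norm(t_c):
    """Normalized Steinhart-Hart characteristic, Eq. (7), with t_c in degrees Celsius."""
    return math.exp(-B * t_c / (T0 * (T0 + t_c)))

def delta_r(t_begin, t_end):
    """Ratio of normalized characteristics, Eq. (8)."""
    return r_norm(t_end) / r_norm(t_begin)

print(round(delta_r(5.4, 29.9), 4))  # ~0.398, close to the reported 0.3984
print(round(delta_r(5.4, 59.3), 4))  # ~0.158, close to the reported 0.1577
```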
Table 1. The results of the experimental studies on determining $\Delta r$ in the range Tend1 to Tend2
Tbegin,ºС
$T_{\text {end } 1}^{\text {exper }}$,ºС
$\Delta r_{\text {end } 1}^{\text {exper }}$
Figure 11. Mathematical rationale of the value of the final value of the second reference temperature range
To enable automatic calibration of parametric resistive sensors, a mathematical model was developed that can serve as the operating principle of the software component of information measurement systems. The model is based on the choice of the final value of the second reference standard temperature range. An integral indicator is proposed as the selection criterion for this range; it takes into account the shape of the static conversion characteristic and the operating temperature range of the thermistor. The geometric meaning of this criterion is the area under the sensor conversion characteristic. It is assumed that 90% of the area of the static conversion characteristic of the thermistor is concentrated in the range from 0 to $T_{\text{end2}}^{\text{calc}}$ (see Figure 11). The total area of the conversion characteristic lies in the range from 0 to the maximum temperature value Tmax (see Figure 11) that is measured with the thermistor with a regulated error. Therefore, the ratio of the areas is used to determine $T_{\text{end2}}^{\text{calc}}$:
$\lambda_{\text {end } 2}\left(T_{\text {end } 2}^{\text {calc }}\right)=\frac{\int_{0}^{T_{\text {end 2}}^{\text {calc}}} r(T) d T}{\int_{0}^{T_{\text {max }}} r(T) d T}=0.9$, (9)
where, $T_{\text {end } 2}^{\text {calc }}$ is the calculated value of the final value of the second reference temperature range, ºС; Tmax is the maximum temperature value, which is measured with the thermistor with a regulated error, ºС.
As a result of the studies (see Figure 11), the final value of the second reference temperature range was established, the value of which is 63.2ºС at Tmax=100ºС for the KY-013 thermistor. The obtained result is consistent with the results of the experimental studies (see Table 1) with an error of not more than 7%, which confirms the adequacy of the proposed mathematical model.
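A numerical sketch of the criterion (9): the normalized characteristic (7) with B = 3173 K is integrated and the temperature below which 90% of its area over 0 to Tmax = 100 °C is concentrated is found by root search; it reproduces a value close to the reported 63.2 °C.

```python
import math
from scipy.integrate import quad
from scipy.optimize import brentq

T0, B, T_MAX = 273.15, 3173.0, 100.0

def r_norm(t_c):
    """Normalized Steinhart-Hart characteristic, Eq. (7)."""
    return math.exp(-B * t_c / (T0 * (T0 + t_c)))

total_area, _ = quad(r_norm, 0.0, T_MAX)

def criterion(t_end2):
    """Eq. (9): fraction of the area below t_end2, minus the 0.9 threshold."""
    area, _ = quad(r_norm, 0.0, t_end2)
    return area / total_area - 0.9

t_end2_calc = brentq(criterion, 1.0, T_MAX)
print(round(t_end2_calc, 1), "C")  # ~63 C
```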
To determine the value of the beginning of the second reference temperature range ($T_{\text{end1}}^{\text{calc}}$), it is proposed to use a complex indicator that takes into account both the temperature range and the shape of the normalized static conversion characteristic (7). The value of this indicator is calculated as the difference between the areas of the conversion characteristic in the temperature ranges from Tbegin to $T_{\text{end1}}^{\text{calc}}$ (S1) and from $T_{\text{end1}}^{\text{calc}}$ to $T_{\text{end2}}^{\text{calc}}$ (S2) (see Figure 8). As the upper limit $T_{\text{end1}}^{\text{calc}}$ increases, S1 increases and, accordingly, S2 decreases. To compare the mathematical modelling results for the choice of $T_{\text{end1}}^{\text{calc}}$, the proposed indicator is normalized by the total area of the static characteristic in the temperature range from Tbegin to $T_{\text{end2}}^{\text{calc}}$ (see Figure 8):
$\lambda_{\text {end1}}\left(T_{\text {begin }}, T_{\text {end1 }}^{\text {calc }}\right)=\frac{\int_{T_{\text {begin }}}^{T_{\text {end 1}}^{\text {calc }}} r(T) d T-\int_{T_{\text {end 1}}^{\text {calc }}}^{T_{\text {end 2}}^{\text {calc }}} r(T) d T} {\int_{T_{\text {begin }}}^{T_{\text {end 2}}^{\text {calc}}} r(T) d T}$, (10)
where, S1 and S2 are the areas of the normalized static conversion characteristics in the range from Tbegin to $T_{\text {end } 1}^{\text {calc }}$ and from $T_{\text {end } 1}^{\text {calc }}$ to $T_{\text {end } 2}^{\text {calc }}$, respectively.
Table 2. The results of the mathematical and experimental studies to determine the range of variation of the second reference temperature
$T_{\text {end } 1}^{\text {exper}}$,ºС
$T_{\text {end } 1}^{\text {calc }}$,ºС
To calculate the value of the beginning of the second reference temperature range $T_{\text{end1}}^{\text{calc}}$, the normalized complex indicator (10) is equated to the normalized static conversion characteristic of the thermistor (7):
$r(T)=\frac{\int_{T_{\text {begin }}}^{T_{\text {end 1}}^{\text {calc }}} r(T) d T-\int_{T_{\text {end 1}}^{\text {calc }}}^{T_{\text {end 2}}^{\text {calc }}} r(T) d T} {\int_{T_{\text {begin }}}^{T_{\text {end 2}}^{\text {calc}}} r(T) d T}$. (11)
A graphical solution of the Eq. (11) is shown in Figure 12. As a result, the values of the beginning of the second reference temperature range are obtained. The results of modelling and experimental studies to determine the values of the range of the second reference temperature to ensure the minimum approximation error are summarized in Table 2.
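The graphical solution of Eq. (11) can also be obtained numerically; a sketch for the example first reference Tbegin = 10.4 °C and the calculated T_end2 = 63.2 °C, which returns a value close to the reported 33.1 °C.

```python
import math
from scipy.integrate import quad
from scipy.optimize import brentq

T0, B = 273.15, 3173.0
T_BEGIN, T_END2 = 10.4, 63.2

def r_norm(t_c):
    """Normalized Steinhart-Hart characteristic, Eq. (7)."""
    return math.exp(-B * t_c / (T0 * (T0 + t_c)))

total, _ = quad(r_norm, T_BEGIN, T_END2)

def residual(t_end1):
    """Eq. (11): r(t_end1) minus the normalized area difference (S1 - S2) / (S1 + S2)."""
    s1, _ = quad(r_norm, T_BEGIN, t_end1)
    s2, _ = quad(r_norm, t_end1, T_END2)
    return r_norm(t_end1) - (s1 - s2) / total

t_end1_calc = brentq(residual, T_BEGIN + 0.1, T_END2 - 0.1)
print(round(t_end1_calc, 1), "C")  # ~33 C
```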
Figure 12. Graphical determination of the beginning of the second reference temperature range
Figure 13. The range of selection of the second reference temperature based on the first reference temperature
To compare the results of the experimental studies and the mathematical modelling (see Table 2), the relative error in determining Tend1 is calculated; its value does not exceed ±8%, which proves the adequacy of the proposed mathematical model.
The results of the experimental studies and mathematical modelling for choosing the second reference temperature range based on the first one are shown in Figure 13.
From the analysis of the research results, a linear dependence of the beginning of the second reference temperature range on changes in the first one is established:
$T_{\mathrm{end} 1}^{\mathrm{calc}}\left(T_{\mathrm{begin}}\right)=k_{\mathrm{T}} \cdot T_{\mathrm{begin}}+b_{\mathrm{T}}$, (12)
where, kT is coefficient of proportionality between the first and the beginning of the second reference temperature range, the value of which is 0.610; bT is constant component equal to 26.8ºС.
As an example of using the proposed approach for approximating the static conversion characteristic, the value of the first reference temperature Tbegin=10.4℃ at resistance Rbegin=4423.8 Ohm is chosen. The recommended value of the second reference temperature is in the range from $T_{\text {end } 1}^{\text {calc }}$ (Tbegin=10.4℃)=33.1℃, calculated by the formula (12), to $T_{\text {end } 2}^{\text {calc }}$=63.2℃. In this temperature range, the value of the second reference standard, for example, is Tend=39.9℃ at resistance Rend=1531.8 Ohm. As a result of solving the system of equations:
$\left\{\begin{array}{l}{\frac{1}{T_{0}+T_{\text {begin }}}=\left(\frac{1}{T_{0}}-\frac{1}{B} \cdot \ln R_{0}\right)+\frac{1}{B} \cdot \ln R_{\text {begin }}} \\ {\frac{1}{T_{0}+T_{\text {end }}}=\left(\frac{1}{T_{0}}-\frac{1}{B} \cdot \ln R_{0}\right)+\frac{1}{B} \cdot \ln R_{\text {end }}}\end{array}\right.$ (13)
The parameters of the Steinhart-Hart equation are determined: R0=6790 Ohm and B=3191K. Similar studies were conducted for the other two second temperature reference standards that fall outside the recommended range:
1) Tend=29.9℃ at Rend=2126.9 Ohm with the Steinhart-Hart equation parameters: R0=6823 Ohm and B=3227K;
2) Tend=70.2℃ at Rend=603.8 Ohm with the Steinhart-Hart equation parameters: R0=6837 Ohm and B=3242K.
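The system (13) has a closed-form solution for B and R0; a short sketch using the worked example above (Tbegin = 10.4 °C, Rbegin = 4423.8 Ohm; Tend = 39.9 °C, Rend = 1531.8 Ohm), which returns values close to the reported R0 = 6790 Ohm and B = 3191 K.

```python
import math

T0 = 273.15  # 0 degrees Celsius in kelvin

def two_point_calibration(t1_c, r1, t2_c, r2):
    """Solve the system (13) for the parameters B and R0 of Eq. (5)."""
    inv_t1, inv_t2 = 1.0 / (T0 + t1_c), 1.0 / (T0 + t2_c)
    B = math.log(r1 / r2) / (inv_t1 - inv_t2)
    R0 = r1 * math.exp(B * (1.0 / T0 - inv_t1))
    return R0, B

R0, B = two_point_calibration(10.4, 4423.8, 39.9, 1531.8)
print(round(R0), "Ohm,", round(B), "K")  # ~6790 Ohm, ~3191 K
```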
To compare the results obtained, the relative error of approximation of the static characteristics of thermistor conversion is calculated:
$\delta_{\text{R}}(T)=\dfrac{R_{\text{LM}}(T)-R_{\lambda}(T)}{R_{\text{LM}}(T)}\cdot 100\ \%$, (14)
where, $R_{\text{LM}}(T)$ and $R_{\lambda}(T)$ are the static characteristics of thermistor conversion with coefficients obtained by approximation with the Levenberg-Marquardt method using 10 temperature reference values (R0=6714 Ohm and B=3173K), and using the proposed normalized complex indicators (9) and (11), respectively.
Changes in the relative approximation error (14) of the Steinhart-Hart equation in the temperature range from 0 to 100°C for fixed values of the second reference standard temperature are shown in Figure 14.
Figure 14. Changes in the relative approximation error of the Steinhart-Hart equation depending on the value of the second reference standard temperature
Analysis of the obtained results (see Figure 14) proved that, when the proposed approach is used to determine the second reference standard temperature, the relative approximation error (Tend=39.9°C in Figure 14) varies from –1.1% to 0.6% over the measured temperature range from 0 to 100°C. When the temperatures Tend=29.9°C or Tend=70.2°C, which fall outside the recommended range (from $T_{\text{end1}}^{\text{calc}}$(Tbegin=10.4°C)=33.1°C to $T_{\text{end2}}^{\text{calc}}$=63.2°C), are used as the second reference standard, the relative approximation error varies from –1.6% to 3.6% (Tend=29.9°C in Figure 14) and from –1.8% to 4.8% (Tend=70.2°C in Figure 14) over the same range. It follows that choosing the second reference standard temperature within the recommended range from Tend1 to Tend2 reduces the approximation error of the two-point static conversion characteristic by at least a factor of 3. Determining the second reference standard temperature with the normalized complex indicators (9) and (11) makes it possible to calibrate thermistors with only two reference standard temperatures while maintaining the approximation accuracy of the Levenberg-Marquardt method.
The criteria for choosing two reference standard temperatures were developed and investigated to approximate the static thermistor conversion characteristic, which is described by the Steinhart-Hart equation. It is proposed to determine the value of the second reference temperature and resistance based on the value of the first reference temperature and the corresponding thermistor resistance.
During the mathematical modelling, the adequacy of which was proved by the results of the experimental studies, it was found that when the second reference standard temperature is chosen within the recommended range, the approximation error of the conversion characteristic does not exceed 1.1% compared with the approximation results of the Levenberg-Marquardt method at 10 reference points. When the second reference standard temperature is chosen outside the recommended range, the approximation error of the conversion characteristic increases by more than 3 times.
The theoretical and experimental studies proved the possibility of calibrating thermistors using two reference points whose values are chosen in an interconnected way. The presented research results allow developing a software component of information measurement systems to automate the calibration of parametric resistive sensors.
Promising areas of research to increase the efficiency of automatic computerized calibration procedures for analog parametric temperature sensors are: experimental testing of the implemented methods and means of measurement in real operating conditions in order to clarify laboratory-applied scientific results; optimization of structural-algorithmic organizations of computer-integrated information measurement systems based on analog resistive temperature sensors; evaluation of investment attractiveness of the implementation of the developed methods and means of metrological certification of temperature meters; substantiation of intelligent algorithms for processing experimental results of temperature monitoring using modern technologies of Internet of Things and Data Mining.
$a_{i}$
coefficient of the Steinhart-Hart equation
$b_{\mathrm{T}}$
constant component, °C
dimensionless parameter of the simplified Steinhart-Hart equation
$k_{\mathrm{T}}$
dimensionless proportionality coefficient between the first and the beginning of the range of the second reference standard temperature, the value of which is
resistance of a semiconductor thermistor, Ohm
parameter of the Steinhart-Hart equation, Ohm
resistance of a current-limiting resistor, Ohm
$R_{\mathrm{LM}}(T)$
static characteristic of thermistor conversion with coefficients obtained by approximation with the Levenberg-Marquardt method, Ohm
$R_{\lambda}(T)$
static characteristic of thermistor conversion with coefficients obtained using the proposed normalized complex indicators, Ohm
maximized Steinhart-Hart equation
area of the normalized static conversion characteristic
temperature of the analyzed medium, °C
$T_{\text {begin }}$
initial temperature (first reference standard), °C
$T_{\text {calc }}$
calculated temperature value, °C
$T_{\text {end } 1}$
minimum boundary of the value range in which the value of the second reference standard temperature falls, °C
$T_{\text {end } 2}$
maximum boundary of the value range in which the value of the second reference standard temperature falls, °C
$T_{\max }$
maximum temperature value, °C
$U_{\mathrm{OUT}}$
circuit output voltage, V
reference stabilized voltage, V
$\delta_{R}(T)$
relative error of approximation, %
$\Delta r$
dimensionless normalized value of the resistance function
$\overline{\Delta R}$
average error, Ohm
$\sigma_{\overline{\Delta R}}$
standard deviation, Ohm
sequence number of the approximation coefficient
One of the fundamental concepts in vector analysis and the theory of non-linear mappings.
The gradient of a scalar function $ f $ of a vector argument $ t = ( t ^ {1} \dots t ^ {n} ) $ from a Euclidean space $ E ^ {n} $ is the derivative of $ f $ with respect to the vector argument $ t $, i.e. the $ n $- dimensional vector with components $ \partial f / \partial t ^ {i} $, $ 1 \leq i \leq n $. The following notations exist for the gradient of $ f $ at $ t _ {0} $:
$$ \mathop{\rm grad} f ( t _ {0} ),\ \ \nabla f ( t _ {0} ),\ \ \frac{\partial f ( t _ {0} ) }{\partial t } ,\ \ f ^ { \prime } ( t _ {0} ) ,\ \ \left . \frac{\partial f }{\partial t } \right | _ {t _ {0} } . $$
The gradient is a covariant vector: the components of the gradient, computed in two different coordinate systems $ t = ( t ^ {1} \dots t ^ {n} ) $ and $ \tau = ( \tau ^ {1} \dots \tau ^ {n} ) $, are connected by the relations:
$$ \frac{\partial f }{\partial t ^ {i} } ( \tau ( t)) = \ \sum _ {j = 1 } ^ { n } \frac{\partial f ( \tau ) }{\partial \tau ^ {j} } \ \frac{\partial \tau ^ {j} }{\partial t ^ {i} } . $$
The vector $ f ^ { \prime } ( t _ {0} ) $, with its origin at $ t _ {0} $, points to the direction of fastest increase of $ f $, and is orthogonal to the level lines or surfaces of $ f $ passing through $ t _ {0} $.
The derivative of the function at $ t _ {0} $ in the direction of an arbitrary unit vector $ \mathbf N = ( N ^ {1} \dots N ^ {n} ) $ is equal to the projection of the gradient function onto this direction:
$$ \tag{1 } \frac{\partial f ( t _ {0} ) }{\partial \mathbf N } = \ ( f ^ { \prime } ( t _ {0} ), \mathbf N ) \equiv \ \sum _ {j = 1 } ^ { n } \frac{\partial f ( t _ {0} ) }{\partial t ^ {j} } N ^ {j} = | f ^ { \prime } ( t _ {0} ) | \cos \phi , $$
where $ \phi $ is the angle between $ \mathbf N $ and $ f ^ { \prime } ( t _ {0} ) $. The maximal directional derivative is attained if $ \phi = 0 $, i.e. in the direction of the gradient, and that maximum is equal to the length of the gradient.
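As a quick numerical illustration of formula (1), the directional derivative computed directly by a finite difference can be compared with the projection of a finite-difference gradient onto the unit direction; the test function $ f $, the point $ t_0 $ and the direction $ \mathbf N $ below are arbitrary choices made only for this sketch.

```python
# Minimal numerical check of formula (1): df/dN equals (grad f, N).
import numpy as np

def f(t):
    return t[0]**2 * np.sin(t[1]) + t[2]

def grad(f, t0, h=1e-6):
    """Central-difference approximation of grad f(t0)."""
    t0 = np.asarray(t0, dtype=float)
    g = np.zeros_like(t0)
    for i in range(t0.size):
        e = np.zeros_like(t0); e[i] = h
        g[i] = (f(t0 + e) - f(t0 - e)) / (2 * h)
    return g

t0 = np.array([1.0, 0.5, -2.0])
N = np.array([1.0, 2.0, 2.0]); N /= np.linalg.norm(N)      # unit direction

lhs = (f(t0 + 1e-6 * N) - f(t0 - 1e-6 * N)) / 2e-6         # d f / d N
rhs = grad(f, t0) @ N                                      # (grad f, N)
print(lhs, rhs)   # the two values agree to finite-difference accuracy
```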
The concept of a gradient is closely connected with the concept of the differential of a function. If $ f $ is differentiable at $ t _ {0} $, then, in a neighbourhood of that point,
$$ \tag{2 } f ( t) = f ( t _ {0} ) + ( f ^ { \prime } ( t _ {0} ),\ t - t _ {0} ) + o ( | t - t _ {0} | ), $$
i.e. $ df = ( f ^ { \prime } ( t _ {0} ), dt) $. The existence of the gradient of $ f $ at $ t _ {0} $ is not sufficient for formula (2) to be valid.
A point $ t _ {0} $ at which $ f ^ { \prime } ( t _ {0} ) = 0 $ is called a stationary (critical or extremal) point of $ f $. An example of such a point is a local extremal point of $ f $, and the system $ \partial f ( t _ {0} ) / \partial t ^ {i} = 0 $, $ 1 \leq i \leq n $, is employed to find an extremal point $ t _ {0} $.
The following formulas can be used to compute the value of the gradient:
$$ \mathop{\rm grad} ( \lambda f ) = \ \lambda \mathop{\rm grad} f,\ \ \lambda = \textrm{ const } , $$
$$ \mathop{\rm grad} ( f + g) = \mathop{\rm grad} f + \mathop{\rm grad} g, $$
$$ \mathop{\rm grad} ( fg) = g \mathop{\rm grad} f + f \mathop{\rm grad} g, $$
$$ \mathop{\rm grad} \left ( { \frac{f}{g} } \right ) = \frac{1}{g ^ {2} } ( g \mathop{\rm grad} f - f \mathop{\rm grad} g). $$
The gradient $ f ^ { \prime } ( t _ {0} ) $ is the derivative at $ t _ {0} $ with respect to volume of the vector function given by
$$ \Phi ( E) = \ \int\limits _ {t \in \partial E } f ( t) \mathbf M ds, $$
where $ E $ is a domain with boundary $ \partial E $, $ t _ {0} \in E $, $ ds $ is the area element of $ \partial E $, and $ \mathbf M $ is the unit vector of the outward normal to $ \partial E $. In other words,
$$ f ^ { \prime } ( t _ {0} ) = \ \lim\limits \frac{\Phi ( E) }{ \mathop{\rm vol} E } \ \textrm{ as } \ \ E \rightarrow t _ {0} . $$
Formulas (1), (2) and the properties of the gradient listed above indicate that the concept of a gradient is invariant with respect to the choice of a coordinate system.
In a curvilinear coordinate system $ x = ( x ^ {1} \dots x ^ {n} ) $, in which the square of the linear element is
$$ ds ^ {2} = \ \sum _ {i, j = 1 } ^ { n } g _ {ij} ( x) dx ^ {i} dx ^ {j} , $$
the components of the gradient of $ f $ with respect to the unit vectors tangent to coordinate lines at $ x $ are
$$ \sum _ {j = 1 } ^ { n } g ^ {ij} ( x) \frac{\partial f }{\partial x ^ {j} } ,\ \ 1 \leq i \leq n, $$
where the matrix $ \| g ^ {ij} \| $ is the inverse of the matrix $ \| g _ {ij} \| $.
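The formula above can be illustrated in 2-D polar coordinates $ (r, \phi ) $, where the metric is $ \mathop{\rm diag} ( 1, r^{2} ) $; the sketch below simply raises the index of the covariant gradient with the inverse metric, for an arbitrarily chosen test function $ f ( r, \phi ) = r^{2} \cos \phi $.

```python
# Contravariant gradient components g^{ij} df/dx^j in 2-D polar coordinates.
import numpy as np

def contravariant_grad(df_dx, g):
    """Raise the index of the covariant gradient with the inverse metric."""
    return np.linalg.inv(g) @ df_dx

r, phi = 2.0, 0.3
df_dx = np.array([2*r*np.cos(phi), -r**2*np.sin(phi)])   # (df/dr, df/dphi)
g = np.diag([1.0, r**2])                                 # ds^2 = dr^2 + r^2 dphi^2
print(contravariant_grad(df_dx, g))                      # (2r cos(phi), -sin(phi))
```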
The concept of a gradient for more general vector functions of a vector argument is introduced by means of equation (2). Thus, the gradient is a linear operator the effect of which on the increment $ t - t _ {0} $ of the argument is to yield the principal linear part of the increment $ f( t) - f( t _ {0} ) $ of the vector function $ f $. E.g., if $ f = ( f ^ { 1 } \dots f ^ { m } ) $ is an $ m $- dimensional vector function of the argument $ t = ( t ^ {1} \dots t ^ {n} ) $, then its gradient at a point $ t _ {0} $ is the Jacobi matrix $ J = J ( t _ {0} ) $ with components $ ( \partial f ^ { i } / \partial t ^ {j} ) ( t _ {0} ) $, $ 1 \leq i \leq m $, $ 1 \leq j \leq n $, and
$$ f ( t) = f ( t _ {0} ) + J ( t - t _ {0} ) + o ( t - t _ {0} ), $$
where $ o ( t - t _ {0} ) $ is an $ m $- dimensional vector of length $ o ( | t - t _ {0} | ) $. The matrix $ J $ is defined by the limit transition
$$ \tag{3 } \lim\limits _ {\rho \rightarrow 0 } \ \frac{f ( t _ {0} + \rho \tau ) - f ( t _ {0} ) } \rho = J \tau , $$
for any fixed $ n $- dimensional vector $ \tau $.
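A small numerical sketch of definition (3): for a fixed direction $ \tau $, the difference quotient approaches $ J \tau $, with $ J $ approximated here by a central-difference Jacobi matrix; the map $ f $ (with $ m = 2 $, $ n = 3 $) is an arbitrary example chosen only for illustration.

```python
# Definition (3) in practice: (f(t0 + rho*tau) - f(t0))/rho -> J @ tau.
import numpy as np

def f(t):
    return np.array([t[0] * t[1], np.exp(t[2]) - t[0]])

def jacobian(f, t0, h=1e-6):
    """Central-difference Jacobi matrix with entries d f^i / d t^j."""
    t0 = np.asarray(t0, dtype=float)
    cols = []
    for j in range(t0.size):
        e = np.zeros_like(t0); e[j] = h
        cols.append((f(t0 + e) - f(t0 - e)) / (2 * h))
    return np.column_stack(cols)

t0 = np.array([1.0, -0.5, 0.2])
tau = np.array([0.3, 1.0, -0.7])
J = jacobian(f, t0)
rho = 1e-6
print((f(t0 + rho * tau) - f(t0)) / rho)   # approaches J @ tau as rho -> 0
print(J @ tau)
```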
In an infinite-dimensional Hilbert space definition (3) is equivalent to the definition of differentiability according to Fréchet, the gradient then being identical with the Fréchet derivative.
If the values of $ f $ lie in an infinite-dimensional vector space, various types of limit transitions in (3) are possible (see, for example, Gâteaux derivative).
In the theory of tensor fields on a domain of an $ n $- dimensional affine space with a connection, the gradient serves to describe the principal linear part of increment of the tensor components under parallel displacement corresponding to the connection. The gradient of a tensor field
$$ f ( t) = \ \{ { f _ {j _ {1} \dots j _ {q} } ^ { i _ {1} \dots i _ {p} } ( t) } : { 1 \leq i _ \alpha , j _ \beta \leq n } \} $$
of type $ ( p, q) $ is the tensor of type $ ( p, q + 1 ) $ with components
$$ \{ { \nabla _ {k} f _ {j _ {1} \dots j _ {q} } ^ { i _ {1} \dots i _ {p} } ( t) } : { 1 \leq k, i _ \alpha , j _ \beta \leq n } \} , $$
where $ \nabla _ {k} $ is the operator of absolute (covariant) differentiation (cf. Covariant differentiation).
The concept of a gradient is widely employed in many problems in mathematics, mechanics and physics. Many physical fields can be regarded as gradient fields (cf. Potential field).
Submissions scipost_202109_00013v2
Interleaved Resonance Decays and Electroweak Radiation in the Vincia Parton Shower
by Helen Brooks, Peter Skands, Rob Verheyen
As Contributors: Rob Verheyen
Preprint link: scipost_202109_00013v2
Submitted by: Verheyen, Rob
High-Energy Physics - Phenomenology
Approaches: Computational, Phenomenological
We propose a framework for high-energy interactions in which resonance decays and electroweak branching processes are interleaved with the QCD evolution in a single common sequence of decreasing resolution scales. The interleaved treatment of resonance decays allows for a new treatment of finite-width effects in parton showers. At scales above their offshellness (i.e., typically Q > Γ), resonances participate explicitly as incoming and outgoing states in branching processes, while they are effectively "integrated out" of the description at lower scales. We implement this formalism, together with a full set of antenna functions for branching processes involving electroweak (W/Z/H) bosons in the Vincia shower module in Pythia 8.3, and study some of the consequences
Editor-in-charge assigned
We are grateful to the reviewers for their careful reading of the manuscript and their detailed comments. We reply to them here point-by-point. They are also added as replies to the previous version in pdf format.
Referee 1:
We agree that the enumerated list describing the interleaved algorithm could be improved. We have significantly expanded it and have tied the steps more explicitly to the illustration in figure 1 (now figure 2). To keep the list itself relatively concise, some of the content (such as the scale definitions under point 2) were moved up into the body text in the introduction to section 2, which also allowed to comment more extensively on those choices, cf our response to point 3 of referee 2.
We have expanded the discussion on the resolution measure, and the relevance of the parameter R.
In short, the exact form of the resolution measure is not of importance as long as it correctly separates the singular regions of phase space associated with each shower. For internal consistency, we have chosen to stay as close as possible to the more limited implementation in Pythia's default showers.
We have added two figures showing the computation time penalties for enabling interleaved resonance decays (fig 3) and the EW shower (fig 15). Regarding the point of validation of the individual EW splitting kernels, these have been validated to give identical collinear limits against the branching kernels of 2002.09248, which were calculated separately. We have added a note of this in the text. Furthermore, the tests shown in, for instance, figure 10 and 11 (now figs 12 and 13) serve as an indirect validation, as the EW shower would not be able to produce the correct spectra without the correct splitting kernels. This is also reflected in the text.
A check against a theoretical prediction is shown in figure 10 (now figure 12). We had struggled to format this figure, and accept that its message was not conveyed clearly. We have now revised it by emphasising the exact ME curve (which serves as the theory baseline) with a thick black line. We also dashed the lines that represent the individual components of the full Vincia result. Finally, to reduce the number of curves, we also dropped the Pythia result, since that is anyway unrelated to the validation of the Vincia treatment as such. We hope that the figure now shows much more clearly that, if one does not enable the overlap veto, the sum of the QCD and the EW paths overcounts the exact matrix element by a large amount (left pane). On the other hand, with the veto enabled, the shower lines up quite well with the exact matrix element (right pane).
We have tried to clarify this further in the text.
This should have been the jet radius R of the anti-kT algorithm. We have adjusted the figure accordingly.
Referee 2:
1. We agree that both references should be included. arXiv:2108.10817 was released on the same day as this manuscript, but arXiv:1403.4788 should have been included in the first place, which we apologize for. We have added both.
The point about running widths leading to potential gauge violations is of course completely correct. We would like to keep our remarks about it relatively brief, since this aspect is not by itself a main point of our paper. In the introduction on p4, we have therefore sought to change the relevant paragraph while trying not to lengthen it, as follows:
...as well as options for allowing partial widths (and hence relative branching fractions) to vary with $Q^2$. The latter allows, for example, to account for kinematic thresholds and effects of running couplings across a reasonable range around the pole, but cannot be pushed too far, especially in the electroweak sector where masses and couplings are not independent of each other.
One further point to clarify is that the masses of resonances produced by the EW shower are also distributed according to Breit-Wigner distributions with running widths. As is already explained in Appendix C, the gauge violations that lead to dangerous high-energy behaviour are taken care of in the same way as in the calculation of the EW antenna functions. Furthermore, Figure 2 (4 in the new version) shows that at high invariant mass, for the high-energy resonances that the EW shower tends to produce, the shower dominates over the Breit-Wigner distribution, meaning that the exact treatment of running width effects are of little consequence in that region. We have verified that this is true for all SM resonances. A note of this has been added to the resonance matching section.
In Vincia, the default cutoff scale for final-state evolution is 0.75 GeV, so the given definitions of Q_res are typically above the shower cutoff, for top, Z, and W resonances in the SM (which all have widths larger than 1 GeV). This means that, when one of the dynamical scale choices is selected, most of these resonances will still decay before the cutoff is reached. (Although the Q_res distributions are peaked at zero, the integrated probabilities below the shower cutoff are still less than 50%). We added a new figure 1 to highlight this, which shows the surviving fraction of t, Z, and W resonances as functions of the shower scale, for the three different dynamic-scale options. The region of typical shower cutoff values (0.5 - 1 GeV) is also illustrated.
The reasoning for using a measure of offshellness is given in the introductory paragraph to section 2, specifically strong ordering of propagator virtualities, which dictates which diagrams are enhanced and which are suppressed. The default choice, eq.(4), explicitly represents the denominator structure of eq.(3); the alternatives are mainly provided to make it possible to study the sensitivity to variations on this choice. We have added the following two paragraphs to the beginning of section 2 to elaborate on this:
The desire to connect with the strong-ordering criterion in the rest of the perturbative evolution, as the principle that should dictate the leading amplitude structures, leads us to prefer a dynamical scale choice for resonance decays, whereby resonances that are highly off shell will persist over shorter intervals in the evolution than ones that are almost on shell. We note that this has the consequence that the on-shell tail will be resolvable by soft photons or gluons, albeit suppressed by the survival fraction. To illustrate this, fig. 1 shows the survival fractions (denoted $\Delta_R$) as functions of evolution scale, for $t$, $Z$, and $W$ resonances, for three different options for dynamical scale choices, all of which are roughly motivated by the propagator structure:

$$ \mathrm{i)}\;\; Q_\mathrm{RES}^2 = (m - m_0)^2~, \qquad \mathrm{ii)}\;\; Q_\mathrm{RES}^2 \stackrel{\text{default}}{\equiv} \left(\frac{m^2 - m_0^2}{m_0}\right)^2 > 0~, \qquad \mathrm{iii)}\;\; Q_\mathrm{RES}^2 \equiv |m^2 - m_0^2|~, $$

where $m_0$ is the pole mass and $m$ its BW-distributed counterpart. Near resonance, options i) and ii), illustrated in the left and middle panes of fig. 1, are functionally almost equivalent, differing mainly just by an overall factor 2, while for option iii), illustrated in the rightmost pane, $m = m_0 \pm \Gamma/2$ translates to $Q^2 \sim m_0 \Gamma$, so that option is primarily intended to give an upper bound on the effect that interleaving could have. Alternatively, our model also allows for using a fixed scale, $Q_\mathrm{RES} \equiv \Gamma$, irrespective of offshellness. In that case, the resonance will not be resolved at all by any photons or gluons with scales $Q < \Gamma$. We regard this as a good starting point for the width dependence but have not selected it as our default since the fixed-scale choice by itself does not automatically extend strong ordering to the resonance propagators; this can only be achieved by allowing the choice to be dynamical. Our default choice, option ii) above, is constructed to have a median scale of $\left< Q_\mathrm{RES} \right> = \Gamma$, while simultaneously respecting strong ordering event by event. This implies that soft quanta will be able to resolve the resonance with a suppressed magnitude $\propto \Delta_R$, which acts as a form factor.

We comment on the $Q_\mathrm{RES} < Q_\mathrm{cut}$ issue under point 4.

4. We hope for the referee's understanding that it is not the intent of this paper, which already presents a significant body of work, to also develop possible non-perturbative aspects of the tail of long-lived (coloured) resonances. In this work, that tail is left to be treated just as it would have been in the conventional non-interleaved framework. For the record, in principle, our view is that yes, top hadrons could be formed from (part of) that tail, but we would probably not simply identify $Q_\mathrm{cut}$ with the formation of fully formed top hadrons. Rather, one still has to consider a range of scales, between $\Lambda_\mathrm{QCD}$ and $Q_\mathrm{cut}$, with $\Lambda_\mathrm{QCD}$ a more appropriate formation time for actual hadronic states. In that interval one would be dealing with top quarks that have started to build up a confining field, but which decay before it is fully formed. This would be a fun project but not one that we believe we could complete without significantly delaying the publication of the work we have already done. Rather than speculate, we believe we do as much as we can, for now, by pointing out that this aspect in principle exists, but leave it for future studies to consider it more carefully.

5. IF branchings are indeed not included in the current implementation. The reason is that, contrary to QCD, no natural choice for recoiler selection as a result of colour ordering exists. One option to select recoilers is indeed presented in sec. 3.3, and a similar procedure was outlined in arXiv:1611.00788. For now, as we make no attempt to correctly describe coherent EW gauge boson emissions, and the number of included antennae is already very large, we made the choice to limit the shower to FF and II branchings, which is sufficient to include the relevant singular limits. We have added an explanation of this to the manuscript.

6. We agree that this concept is relevant for the construction of parton shower histories in the context of merging. In fact, as described in arXiv:2003.00702, Vincia's QCD shower is now sectorized, which means that only a single shower history path is associated with any particular phase-space point. The overlap veto procedure has a similar function, in that it sectorizes the QCD and EW showers. The EW shower is currently not sectorized by itself, but if it were, the overlap veto would ensure that only a single shower history, through either the QCD or the EW shower, is associated with a phase-space point. We have added a note of this to the manuscript.

7. The shaded bands are the statistical uncertainties. We added this to the caption of figure 3 (now figure 5), hoping that this provides enough clarity for the other figures.

8. Correct, we have fixed this and appreciate the level of detail with which the reviewer has inspected the manuscript.

9. We thoroughly agree that an analysis of the logarithmic accuracy of the EW shower would be very interesting and highly desirable. The EW sector presents some unique features, such as the mentioned spin-dependent subleading logarithms, for which it would be very interesting to find out how the current spin treatment performs. However, the recent work of the PanScales collaboration, the Deductor shower, and the work of, for instance, arXiv:2003.06400 have shown that the analysis of the logarithmic accuracy of a parton shower is not a straightforward task. For instance, a numerical analysis requires running the shower in extreme limits, for which Pythia and Vincia are not currently equipped. We consider such an analysis to be out of the scope of the current paper, and hope to be able to return to this topic in the form of a dedicated study in the future. We have added a brief paragraph at the end of the conclusion describing possible paths for future research.

10. The main difference is that the QED shower does not treat particle helicities, while this is required for the EW shower. While not a major obstacle, the inclusion of spin-dependent antenna functions in the coherent QED shower is not entirely straightforward, for instance due to the possibility of spin flips of massive particles due to photon emissions. We added a brief note of this to the manuscript.
Listed in the author comments.
Resubmission scipost_202109_00013v2 on 12 January 2022
Submission scipost_202109_00013v1 on 10 September 2021
Author Reply by Dr Verheyen on 2022-01-12
Mathematical modelling of mantle convection at a high Rayleigh number with variable viscosity and viscous dissipation
Sumaiya B. Islam1,
Suraiya A. Shefa1 &
Tania S. Khaleque ORCID: orcid.org/0000-0002-6010-24501
Journal of the Egyptian Mathematical Society volume 30, Article number: 5 (2022) Cite this article
In this paper, the classical Rayleigh–Bénard convection model is considered and solved numerically for extremely large viscosity variations (i.e., up to \(10^{30}\)) across the mantle at a high Rayleigh number. The Arrhenius form of viscosity is defined as a cut-off viscosity function. The effects of viscosity variation and viscous dissipation on convection with temperature-dependent viscosity and also temperature- and pressure-dependent viscosity are shown through the figures of temperature profiles and streamline contours. The values of Nusselt number and root mean square velocity indicate that the convection becomes significantly weak as viscosity variation and viscous dissipation are increased at a fixed pressure dependence parameter.
Convection in mantle is responsible for most of the physical and chemical phenomena happening on the surface and in the interior of the Earth, and it is caused by the heat transfer from the interior to the Earth's surface. Even though there are some debates, it is quite well established that convection in the mantle is the driving mechanism for plate tectonics, seafloor spreading, volcanic eruptions, earthquakes, etc. [1]. However, the mechanism of mantle convection is still an unsolved mystery since the rheology of mantle rocks is extremely complicated [2,3,4]. Temperature, pressure, stress, radiogenic elements, creep, and many other factors influence the mantle's behavior on a large scale. One of its significant but complex characteristics is its viscosity, which is dependent mainly on temperature, pressure, and stress [5]. In earlier studies of mantle convection, scientists assumed constant viscosity (e.g. [6, 7]) but later, among many others Moresi and Solomatov [8, 9], studied the temperature-dependent viscosity case numerically and concluded that the formation of an immobile lithosphere on terrestrial planets like Mars and Venus seems to be a natural result of temperature-dependent viscosity. However, studies with purely temperature-dependent viscosity cannot portray the true convection pattern of the Earth's mantle. As a result, convection with temperature and pressure-dependent viscosity is becoming more important, and some notable works in this area have recently been published [10,11,12,13,14]. Christensen [10] showed that additional pressure dependence of viscosity strongly influences the flow regimes. In a 2D axi-symmetrical model, Shahraki and Schmeling [15] examined the simultaneous effect of pressure and temperature-dependent rheology on convection and geoid above the plumes, and Fowler et al. [16] studied the asymptotic structure of mantle convection at high viscosity contrast.
According to King et al. [17], when pressure increases through the mantle, there is a corresponding increase in density due to self-compression. In a vigorously convecting mantle, the rate of viscous dissipation, the irreversible process that converts mechanical energy into heat, is non-negligible and contributes to the heat energy of the fluid, resulting in adiabatic temperature and density gradients that reduce the vigour of convection. Conrad and Hager [18] proposed that viscous dissipation and the force resisting plate motion may have significant effects on convection and the thermal evolution history of the Earth's mantle. Leng and Zhong [19] concluded that the dissipation occurring in a subduction zone is 10–20% of the total dissipation for cases with only temperature-dependent viscosity, whereas Morgan et al. [20] stated that when slabs subduct, about 86% of the gravitational energy of the whole mantle flow is transformed into heat by viscous dissipation. According to Balachandar et al. [21], numerical simulations of 3D convection with temperature-dependent viscosity and viscous heating at realistic Rayleigh numbers for Earth's mantle reveal that, in the strongly time-dependent regime, very intense localized heating takes place along the top portion of descending cold sheets and also at locations where ascending plume heads impinge at the surface. They also found that the horizontally averaged viscous dissipation is concentrated at the top of the convecting layer and has a magnitude comparable to that of radioactive heating. King et al. [17] worked on a benchmark for 2-D Cartesian compressible convection in the Earth's mantle where they used steady-state constant and temperature-dependent viscosity cases as well as time-dependent constant viscosity cases. In their work, the Rayleigh numbers are near \(10^6\) and the dissipation numbers are between 0 and 2, and they conclude that the most unstable wavelengths of compressible convection are smaller than those of incompressible convection. As research on mantle convection grows, the importance of studying viscous dissipation is also increasing, since it has been suggested that the bending of long and highly viscous plates at subduction zones dissipates most of the energy that drives mantle convection [22]. Some notable recent numerical studies of convection and the effects of variable viscosity and viscous dissipation have been carried out by Ushachew et al. [23], Megahed [24], Ferdows et al. [25], Ahmed et al. [26], and Fetecau et al. [27].
Although mantle convection is a 3D problem, many 2D codes have been developed to gain an understanding of the fundamental mechanism and to minimize the computational cost and complexity. As the Earth's mantle is affected by many complexities, much of its basic understanding has been built through research on simple Rayleigh–Bénard convection [2]. Over the years, the Rayleigh–Bénard convection has become a benchmark problem in computational geophysics as a paradigm for convection in the Earth's mantle. Although Rayleigh–Bénard convection with viscosity variation is a well-known topic in mantle convection studies, very high viscosity variation (up to \(10^{30}\)) has not been widely covered. To the best of our knowledge, mantle convection with strongly variable viscosity that is temperature dependent, or both temperature and pressure dependent, with the inclusion of viscous dissipation has not been studied so far. The governing equations in two-dimensional form ensure the conservation of mass, momentum, and energy, together with a thermodynamic equation of state. In this study, incompressible mantle convection is considered in which the mantle viscosity depends strongly on both temperature and pressure, and viscous dissipation is also included. The convection is investigated at a high Rayleigh number with high viscosity variations across the mantle.
In "Methods" section the full governing equations for mantle convection and the appropriate boundary conditions for classical Rayleigh–Bénard convection in a 2D square cell are described. The equations are non-dimensionalized and the dimensionless parameters are identified. Though the variable viscosity is defined in an Arrhenius form, a modified form of viscosity is used to improve the efficiency of numerical computation. The computational method for simulation is also described, and the code is verified using some benchmark values. Then the governing model is solved numerically in a unit aspect-ratio cell for extremely large viscosity variations, and steady solutions for temperature and streamlines are obtained. The numerical and graphical results of the computation are described in "Result and discussion" section. Finally, in "Conclusion" section some concluding remarks on the results are given.
Governing equations
Classical Rayleigh–Bénard convection in a two-dimensional unit aspect-ratio cell with free-slip boundary conditions is considered. The temperature difference between the horizontal boundaries is fixed. The convective cell is assumed to be a section of a periodic structure in the associated infinite horizontal layer. Adopting Cartesian coordinates (x, z) with a horizontal x-axis and a vertical z-axis, the Boussinesq approximation is assumed, under which density variations are retained only in the buoyancy term of the momentum equation, so that mass conservation takes the form of the incompressibility condition [16]. The inertia terms in the Navier–Stokes equations are neglected as well (taking the limit of an infinite Prandtl number). According to Solomatov [28], the integral viscous dissipation within the layer is balanced by the integral mechanical work done by thermal convection per unit time, and if the viscosity contrast is large, dissipation in the cold boundary layer becomes comparable with the dissipation in the internal region. Thus, in order to balance the energy equation, the extended Boussinesq approximation is used. Here, "extended Boussinesq approximation" means that, apart from the driving buoyancy forces, the fluid is treated as incompressible everywhere, while the non-Boussinesq effects of the adiabatic gradient and frictional heating are introduced into the energy equation [29]. The governing equations express the conservation of mass, momentum, and energy, together with a suitable thermodynamic equation of state. The Navier–Stokes equations, which describe the motion in component form, are [30]
$$\begin{aligned} \begin{aligned} \frac{\partial {u}}{\partial {x}}+ \frac{\partial {w}}{\partial {z}}&= \,0,\\ \frac{\partial {P}}{\partial {x}}&= \frac{\partial {\tau _1}}{\partial {x}}+\frac{\partial {\tau _3}}{\partial {z}},\\ \frac{\partial {P}}{\partial {z}}&= \,\frac{\partial {\tau _3}}{\partial {x}}-\frac{\partial {\tau _1}}{\partial {z}}-\rho g,\\ \tau _{1}&= \,2\eta \frac{\partial {u}}{\partial {x}},\\ \tau _{3}&= \,\eta \left( \frac{\partial {u}}{\partial {z}}+\frac{\partial {w}}{\partial {x}}\right) ,\\ \rho&= \rho _{0}\left[ 1-\alpha (T-T_{b})\right] , \end{aligned} \end{aligned}$$
The energy equation is
$$\begin{aligned} \frac{\partial {T}}{\partial {t}}+{\varvec{{u}}}. \nabla T-\frac{\alpha T}{\rho C_{\mathrm{p}}}\left( \frac{\partial {P}}{\partial {t}}+{\varvec{{u}}}. \nabla P \right) = \kappa \left( \nabla ^2 T\right) + \frac{\tau ^2}{2\eta \rho C_{\mathrm{p}}}. \end{aligned}$$
Here, P is the pressure, \(\varvec{\tau }\) is the deviatoric stress tensor, t is time, \(\rho\) is the density, \({\varvec{u}}= (u,0, w)\) is the fluid velocity, where u and w are velocity components in the x- and z-directions, g is the assumed constant gravitational acceleration acting downwards (the variation of g across the mantle is quite small that it is taken as constant), \(\tau _1\) and \(\tau _3\) are the longitudinal and shear components of the deviatoric stress tensor, respectively, \(\eta\) is the viscosity, \(T_{\mathrm{b}}\) is the basal temperature, \(\rho _0\) is the basal density, \(\kappa\) is the thermal diffusivity, T is the absolute temperature, \(C_{\mathrm{p}}\) is the specific heat at constant pressure, and \(\alpha\) is the thermal expansion coefficient.
The deviatoric stress tensor, \(\varvec{\tau }\) can be expressed as
$$\begin{aligned} \varvec{\tau }=\tau _1^2+ \tau _3^2 \end{aligned}$$
where \(\tau _1\) and \(\tau _3\) are the longitudinal and shear components of the deviatoric stress tensor, respectively.
The Arrhenius form of viscosity function is
$$\begin{aligned} \eta =\frac{1}{2A\left( \tau _1^2+\tau _3^2\right) ^{\frac{n-1}{2}}} {\mathrm{exp}} \left[ \frac{E+pV}{RT}\right] , \end{aligned}$$
where A is the rate factor, n is the flow index, E is the activation energy, V is the activation volume, and R is the universal gas constant [5].
A unit aspect-ratio cell with a free-slip boundary condition is considered. The temperatures at the bottom and top boundaries are taken as constant, and thermal insulation is assumed on the side walls. The boundary conditions are
$$\begin{aligned} & w = 0,\quad \tau _{3} = 0,\quad T = T_{\text{b}} \quad {\text{on}}\quad z = 0, \\ & w = 0,\quad \tau _{3} = 0,\quad T = T_{\text{s}} \quad {\text{on}}\quad z = d, \\ & u = 0,\quad \tau _{3} = 0,\quad \frac{{\partial T}}{{\partial x}} = 0\quad {\text{on}}\quad x = 0,d. \\ \end{aligned}$$
where d is the depth of the convection cell, \(T_{\mathrm{b}}\) and \(T_{\mathrm{s}}\) are the basal and top temperatures, respectively (Fig. 1).
Schematic diagram of a basally heated non-dimensional unit aspect-ratio cell in mantle
Throughout this work, Newtonian rheology is considered with \(n = 1\) in the viscosity relation and internal heating is neglected. To see the effects of variable viscosity (both temperature-dependent and temperature-and pressure-dependent viscosity) and viscous dissipation on convection, these assumptions are made to make the model less complicated.
Non-dimensionalization and simplification
In order to non-dimensionalize the model, the variables are set as [7, 30]
$$\begin{aligned} \begin{aligned} {\varvec{u}}=\frac{\kappa }{d}\varvec{u^*}, \quad (x,z)=d(x^*,z^*),\quad \varvec{\tau }=\frac{\eta _0\kappa }{d^2}\varvec{\tau ^*},\quad \eta =\frac{e^{(1+\mu )/\varepsilon }}{2A}\eta ^*=\eta _0\eta ^*, \\ P=\rho _0 g d(1-z^*)+\frac{\eta _0\kappa }{d^2}P^*,\quad \rho =\rho _0\rho ^*,\quad t=\frac{d^2}{\kappa }t^*, \quad T=T_{\mathrm{b}}T^* \end{aligned} \end{aligned}$$
Using these in equations from (1) to (3) and dropping the asterisk decorations, the dimensionless equations becomes
$$\begin{aligned} & \frac{{\partial u}}{{\partial x}} + \frac{{\partial w}}{{\partial z}} = 0, \\ & \frac{{\partial P}}{{\partial x}} = \frac{{\partial \tau _{1} }}{{\partial x}} + \frac{{\partial \tau _{3} }}{{\partial z}}, \\ & \frac{{\partial P}}{{\partial z}} = {\mkern 1mu} \frac{{\partial \tau _{3} }}{{\partial x}} - \frac{{\partial \tau _{1} }}{{\partial z}} - {\text{Ra}}(1 - T), \\ & \tau _{1} = {\mkern 1mu} 2\eta \frac{{\partial u}}{{\partial x}}, \\ & \tau _{3} = {\mkern 1mu} \eta \left( {\frac{{\partial u}}{{\partial z}} + \frac{{\partial w}}{{\partial x}}} \right), \\ \end{aligned}$$
$$\begin{aligned}&\frac{\partial {T}}{\partial {t}}+{\varvec{{u}}} \cdot \nabla T-DT\frac{{\bar{B}}}{{\mathrm{Ra}}}\frac{\partial {P}}{\partial {t}}+DTw-DT\frac{{\bar{B}}}{{\mathrm{Ra}}}u \cdot \nabla P = \nabla ^2 T+\frac{D}{{\mathrm{Ra}}} \frac{\tau ^2}{2\eta }, \end{aligned}$$
while the dimensionless version of constitutive relation (3) reads
$$\begin{aligned} \eta ={\mathrm{exp}}\left[ \frac{(1-T)(1+\mu )-\mu z+\mu {\bar{B}}p/{\mathrm{Ra}}}{\varepsilon T}\right] , \end{aligned}$$
in which the dimensionless parameters are,
$$\begin{aligned} & {\text{Dissipation number}},D = {\mkern 1mu} \frac{{\alpha gd}}{{C_{{\text{p}}} }}, \\ & {\text{Viscous temperature parameter}},\varepsilon = {\mkern 1mu} \frac{{RT_{{\text{b}}} }}{E}, \\ & {\text{Viscous pressure number}},\mu = {\mkern 1mu} \frac{{\rho _{0} g{\text{d}}V}}{E}, \\ & {\text{Boussinesq number}},\bar{B} = {\mkern 1mu} \alpha T_{{\text{b}}} , \\ & {\text{Rayleigh number}},{\text{Ra}} = {\mkern 1mu} \frac{{\alpha \rho _{0} gT_{{\text{b}}} d^{3} }}{{\eta _{0} \kappa }}. \\ \end{aligned}$$
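As a rough illustration of how these groups are formed, the snippet below evaluates them for mantle-like physical values; the numbers are assumptions chosen only for illustration and are not taken from Table 1.

```python
# Forming the dimensionless groups D, epsilon, mu, B-bar, Ra (illustrative values).
alpha, g, d, Cp = 3e-5, 10.0, 2.9e6, 1.2e3              # 1/K, m/s^2, m, J/(kg K)
R_gas, Tb, E_act, V_act = 8.314, 3000.0, 4.0e5, 5e-6    # J/(mol K), K, J/mol, m^3/mol
rho0, kappa, eta0 = 4.0e3, 1e-6, 1e21                   # kg/m^3, m^2/s, Pa s

D    = alpha * g * d / Cp                                # dissipation number
eps  = R_gas * Tb / E_act                                # viscous temperature parameter
mu   = rho0 * g * d * V_act / E_act                      # viscous pressure number
Bbar = alpha * Tb                                        # Boussinesq number
Ra   = alpha * rho0 * g * Tb * d**3 / (eta0 * kappa)     # Rayleigh number
print(D, eps, mu, Bbar, Ra)
```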
Since this model was developed for the mantle, the typical values of the parameters are given in Table 1, and it is found that for \({\mathrm{Ra}}>> 1\), \({\bar{B}}/{\mathrm{Ra}}\) can be easily ignored. Therefore, the dimensionless energy equation (7) becomes
$$\begin{aligned} \frac{\partial {T}}{\partial {t}}+{\varvec{u}} \cdot \nabla T+DTw = \nabla ^2 T+\frac{D}{{\mathrm{Ra}}} \frac{\tau ^2}{2\eta }, \end{aligned}$$
and viscosity relation (8) becomes
$$\begin{aligned} \eta ={\mathrm{exp}}\left[ \frac{1-T+\mu (1-z-T)}{\varepsilon T}\right] . \end{aligned}$$
Eq. (11) is known as the full form of the Arrhenius viscosity function.
Table 1 Typical parameter values for mantle convection with variable viscosity
The dimensionless boundary conditions (4) become
$$\begin{aligned} & w = 0,\quad \tau _{3} = 0,\quad T = 1\quad {\text{on}}\quad z = 0, \\ & w = 0,\quad \tau _{3} = 0,\quad T = \frac{{T_{{\text{s}}} }}{{T_{{\text{b}}} }} = \theta _{0} \quad {\text{on}}\quad z = 1, \\ & u = 0,\quad \tau _{3} = 0,\quad \frac{{\partial T}}{{\partial x}} = 0\quad {\text{on}}\quad x = 0,1. \\ \end{aligned}$$
The dimensionless model consists of governing Eqs. (6), (10), viscosity relation (11) and boundary conditions (12).
Low temperature cut-off viscosity
To investigate the convection with extremely high viscosity contrasts in the mantle layer, a low temperature cut-off viscosity function is used. This cut-off viscosity relation helps reduce the computational stiffness while retaining the sensitivity of the viscosity to the changes in temperature and pressure across the mantle. It is a well-established fact that in strongly temperature-dependent viscous convection, most of the viscosity variation occurs in a stagnant lid in which the velocity is essentially zero. Based on this fact, the sub-lid convection field is calculated accurately (but not the stress field) by cutting off the dimensionless viscosity at a sufficiently high value that the lid thickness, which essentially only depends on the interaction of the lid temperature with the underlying convection flow, is unaffected.
The low temperature cut-off viscosity function has the following form
$$\begin{aligned} \eta = {\left\{ \begin{array}{ll} {\mathrm{exp}}[Q/\varepsilon ] &{} \frac{Q}{\varepsilon }\le {\text {log}} 10^r,\\ 10^r &{} \frac{Q}{\varepsilon } > {\text {log}} 10^r, \end{array}\right. } \end{aligned}$$
$$\begin{aligned} Q= \frac{1-T+\mu (1-z-T)}{T}, \end{aligned}$$
and the cut-off viscosity value \(10^r\) is to be chosen appropriately; in the numerical experiments, \(r = 6\) is chosen. A similar type of Arrhenius law with an imposed cut-off viscosity was applied by Huang et al. [31], Huang and Zhong [32], King [33] and Khaleque et al. [13]. A comparison between the full-form viscosity function and the cut-off viscosity function is given in the "Comparison with benchmark values and validation" section.
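A minimal sketch of the cut-off viscosity (13)–(14) in dimensionless variables is given below; the parameter names and the sample values of \(\varepsilon\), \(\mu\) and the evaluation points are assumptions made only for illustration.

```python
# Cut-off Arrhenius viscosity, Eqs. (13)-(14), in dimensionless variables.
import numpy as np

def cutoff_viscosity(T, z, eps, mu, r=6):
    """Viscosity exp(Q/eps), capped at 10**r as in Eq. (13)."""
    Q = (1.0 - T + mu * (1.0 - z - T)) / T              # Eq. (14)
    expo = np.minimum(Q / eps, r * np.log(10.0))        # cap the exponent -> cap at 10**r
    return np.exp(expo)

# viscosity in the cold lid (T near theta0, z = 1) vs. at the hot base (T = 1, z = 0)
print(cutoff_viscosity(np.array([0.15, 1.0]), np.array([1.0, 0.0]), eps=0.1, mu=0.5))
```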
Three diagnostic quantities will be used to characterize the convection: the viscosity contrast, the Nusselt number, and the root mean square velocity.
The viscosity contrast \(\Delta \eta\) is the ratio between the surface and basal values of the viscosity, defined as
$$\begin{aligned} \Delta \eta = {\mathrm{exp}}\left( \frac{1-\theta _0-\mu \theta _0}{\varepsilon \theta _0}\right) , \end{aligned}$$
where \(\theta _0 =\frac{T_{\mathrm{s}}}{T_{\mathrm{b}}}\).
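A small helper based on Eq. (15) is sketched below: it evaluates the contrast for given \(\varepsilon\), \(\mu\) and \(\theta_0\), and inverts the relation to choose \(\varepsilon\) for a target contrast at fixed \(\mu\); using the inverse relation this way to set up a parameter scan is an assumption made here for illustration.

```python
# Viscosity contrast, Eq. (15), and the epsilon that realizes a target contrast.
import numpy as np

def viscosity_contrast(eps, mu, theta0=0.1):
    """Delta_eta = exp((1 - theta0 - mu*theta0) / (eps*theta0))."""
    return np.exp((1.0 - theta0 - mu * theta0) / (eps * theta0))

def eps_for_contrast(target, mu, theta0=0.1):
    """Invert Eq. (15) for the temperature-dependence parameter epsilon."""
    return (1.0 - theta0 - mu * theta0) / (theta0 * np.log(target))

eps = eps_for_contrast(1e30, mu=0.5)
print(eps, viscosity_contrast(eps, mu=0.5))   # recovers a contrast of ~1e30
```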
The Nusselt number Nu is the ratio of the average surface heat flow from the convective solution to the heat flow due to conduction. It is calculated in the present case of a square cell by the dimensionless relation
$$\begin{aligned} {\mathrm{Nu}} = -\frac{1}{(1-\theta _0)}\int _{0}^{1}\frac{\partial {T}}{\partial {z}}(x,1){\mathrm{d}}x. \end{aligned}$$
Nu is equal to unity for conduction and exceeds unity as soon as convection starts.
The vigour of the circulating flow is characterised by the non-dimensional RMS (root mean square) velocity. Here RMS velocity is defined by
$$\begin{aligned} V_{{\mathrm{rms}}} = \left[ \int _{0}^{1}\int _{0}^{1}(u^2+w^2){\mathrm{d}}x{\mathrm{d}}z\right] ^{1/2}, \end{aligned}$$
where u is the horizontal component of velocity and w is the vertical component of velocity.
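The diagnostics (16) and (17) can be evaluated by simple quadrature on any discrete solution; the sketch below uses synthetic fields (a conductive temperature profile and a single divergence-free cell flow) purely to exercise the formulas, and the grid layout T[ix, iz] on a uniform unit-square mesh is an assumption.

```python
# Evaluating Nu (Eq. 16) and V_rms (Eq. 17) from discrete fields by quadrature.
import numpy as np

def trapz(y, x, axis=-1):
    """Trapezoidal rule along one axis (kept local to avoid NumPy version issues)."""
    ys = np.moveaxis(y, axis, -1)
    dx = np.diff(x)
    return np.sum(0.5 * (ys[..., 1:] + ys[..., :-1]) * dx, axis=-1)

def nusselt(T, x, z, theta0):
    """Nu = -1/(1-theta0) * integral over x of dT/dz at the top surface z = 1."""
    dTdz = np.gradient(T, z, axis=1)
    return -trapz(dTdz[:, -1], x) / (1.0 - theta0)

def v_rms(u, w, x, z):
    """Root mean square of the velocity magnitude over the unit cell."""
    speed2 = u**2 + w**2
    return np.sqrt(trapz(trapz(speed2, z, axis=1), x))

# synthetic single-cell test fields (assumed, only to exercise the functions)
x = np.linspace(0.0, 1.0, 201)
z = np.linspace(0.0, 1.0, 201)
X, Z = np.meshgrid(x, z, indexing="ij")
T = 1.0 - 0.9 * Z                                   # conductive profile, theta0 = 0.1
u = np.sin(np.pi * X) * np.cos(np.pi * Z)           # divergence-free cell flow
w = -np.cos(np.pi * X) * np.sin(np.pi * Z)
print(nusselt(T, x, z, 0.1))    # -> 1 for pure conduction
print(v_rms(u, w, x, z))        # -> sqrt(1/2) ~ 0.71 for this test flow
```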
Computational method
In order to solve the dimensionless governing Eqs. (6), (10), (11) with boundary conditions (12) a finite element method based PDE solver 'COMSOL Multiphysics 5.3' is used. The modules for creeping flow, heat transfer in fluids, and Poisson's equation are chosen based on the physics of the model. Free triangular meshing with some refinement near the boundaries of \(200\times 200\) and COMSOL's "extra fine" setting results in a complete mesh of a total of 18,000 elements. As the basis functions or shape functions, Lagrangian P2–P1 elements for creeping flow are selected, which means the shape functions for the velocity field and pressure are Lagrangian quadratic polynomials and Lagrangian linear polynomials, respectively. Similarly, Lagrangian quadratic elements for both temperature in the heat equation and the stream function in Poisson's equation are chosen. For Lagrange elements, the values of all the variables at the nodes are called degrees of freedom (dof) and in this case, our specific discretization finally produces 153,816 degrees of freedom (\(N_{{\mathrm{dof}}}\)). The following convergence criterion is applied for all cases:
$$\begin{aligned} \left( \frac{1}{N_{{\mathrm{dof}}}}\sum _{i=1}^{N_{{\mathrm{dof}}}}|E_i|^2\right) ^\frac{1}{2}<\varepsilon \end{aligned}$$
where \(E_i\) is the estimated error and \(\varepsilon\) = \(10^{-6}\). Further details of the method can be found in Zimmerman [34].
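For completeness, the convergence test above can be expressed as a stand-alone helper; the tolerance is named tol here to avoid clashing with the viscous temperature parameter \(\varepsilon\), and the uniform error array is only a usage example.

```python
# RMS convergence criterion over all degrees of freedom.
import numpy as np

def converged(estimated_errors, tol=1e-6):
    """Return True when (1/N_dof * sum |E_i|^2)^(1/2) < tol."""
    rms = np.sqrt(np.mean(np.abs(np.asarray(estimated_errors))**2))
    return rms < tol

print(converged(np.full(153816, 1e-8)))   # True for uniformly small estimated errors
```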
Comparison with benchmark values and validation
The values of the Nusselt number Nu and the root mean square velocity \(V_{{\mathrm{rms}}}\) are compared with the benchmark values from Blankenbach et al. [35]\(^a\) and Koglin Jr et al. [36]\(^b\) in Table 2 for the constant-viscosity case. Their values were computed for Ra up to \(10^6\) and \(10^7\), respectively. From Table 2, it is evident that the agreement is very good.
Table 2 Comparison of computed Nusselt number Nu and RMS velocity \(V_{{\mathrm{rms}}}\) with benchmark values from Blankenbach et al. [35]\(^{\mathrm{a}}\) and Koglin Jr et al. [36]\(^{\mathrm{b}}\)
Table 3 Comparison of Nusselt number, Nu of full-form viscosity function (11) and cut-off viscosity function (13) for \(\mu = 0.0\) and \(\mu = 0.5\) at \({\mathrm{Ra}} = 10^7\) and \(\theta _0\) = 0.1
The computation is then repeated with variable viscosity and a high viscosity contrast across the mantle layer. The Nusselt numbers compared in Table 3 are obtained using the full-form viscosity function (11) and the cut-off viscosity function (13) for \(\mu = 0.5\) and \(\mu = 0.0\). It should be noted that \(\mu = 0.0\) indicates temperature-dependent viscosity, whereas \(\mu \ne 0\) implies that the viscosity depends on both temperature and pressure. Table 3 shows that the Nusselt numbers obtained with the full-form viscosity function and with the cut-off viscosity function are very close, which validates the use of the cut-off viscosity function for numerical computation.
Result and discussion
After validating the model, the governing Eqs. (6), (10) and (13) with boundary conditions (12) are solved. Throughout the computation, the constants \(\theta _0 = 0.1\) and \({\mathrm{Ra}} = 10^{7}\) are used, and the values of the Nusselt number, Nu, and root mean square velocity, \(V_{{\mathrm{rms}}}\) for different dissipation numbers, D, pressure dependent parameter \(\mu\), and temperature dependent parameter \(\varepsilon\) are calculated. By varying \(\mu\) and \(\varepsilon\), different viscosity contrast is obtained across the mantle layer. The numerical computations with \(D = 0.3\) and \(D = 0.6\) at \({\mathrm{Ra}} = 10^7\) when \(\mu =0.0\), \(\mu =0.5\) and \(\mu =1.0\) are performed, and the calculated Nusselt number and the RMS velocity values for high viscosity contrasts from \(10^{10}\) to \(10^{30}\) are shown in Tables 4 and 5.
Table 4 Nusselt number Nu computed for \(\mu =0.0\), \(\mu =0.5\), \(\mu =1.0\) with different viscous dissipation number D at \({\mathrm{Ra}} = 10^7\) and \(\theta _0\) = 0.1
Table 5 RMS velocity \(V_{{\mathrm{rms}}}\) computed for \(\mu =0.0\), \(\mu =0.5\), \(\mu =1.0\) with different viscous dissipation number D at \({\mathrm{Ra}} = 10^7\) and \(\theta _0\) = 0.1
Tables 4 and 5 show that for each fixed value of \(\mu\) and D, Nu and \(V_{{\mathrm{rms}}}\) decrease as the viscosity contrast increases (i.e., the temperature dependence parameter decreases) across the mantle. It confirms that at the higher viscosity variation, convection becomes weaker, which can also be seen clearly in the thermal distribution Figs. 2 and 3. Nu and \(V_{{\mathrm{rms}}}\) values also decrease as D increases for every particular value of \(\mu\).
It is also observed that, at a fixed viscosity contrast, both the Nu and \(V_{{\mathrm{rms}}}\) values increase as the pressure dependence parameter \(\mu\) is increased, for the fixed dissipation numbers \(D=0.3\) and \(D=0.6\). The reason is that even though \(\mu\) is increased, \(\varepsilon\) must be decreased to maintain the fixed viscosity contrast. However, for \(D=0.0\), the trend is less smooth at higher viscosity variations. Comparing the \(V_{{\mathrm{rms}}}\) values between \(D=0.0\) and \(D=0.3\) at \(\mu =1.0\), it can be seen that at high viscosity contrasts the \(V_{{\mathrm{rms}}}\) values for \(D=0.3\) are larger than those for \(D=0.0\), contrary to the general trend.
The thermal distribution and stream function contours for \(\mu\) = 0.0, \(\mu =0.5\) and \(\mu\) = 1.0 are presented in Figs. 2, 3 and 4.
Thermal distributions of a convection at different viscosity variations and at different pressure numbers for a fixed viscous dissipation number \(D=0.3\) with \(\theta _0 = 0.1\) and \({\mathrm{Ra}} = 10^7\)
In Figs. 2 and 3 the thermal distributions of the unit aspect-ratio convection cell are presented for different viscosity contrasts at D = 0.3 and D = 0.6, respectively. In panels 2a, b and 3a, b, the viscosity depends only on temperature (i.e. \(\mu =0.0\)), while in panels 2c–f and 3c–f the viscosity depends on both temperature and pressure (i.e. \(\mu \ne 0.0\)). In each temperature plot, the blue regions correspond to cooler temperatures whereas the red regions correspond to higher temperatures.
For \(\mu =0.0\), \(\mu =0.5\) and \(\mu =1.0\), Figs. 2 and 3 show that as the viscosity contrast \(\Delta \eta\) increases, the thickness of the cold thermal boundary layer at the top of the cell increases. In the lower mantle, near the core of the Earth, the boundary is hot since the temperature is very high, and this temperature continues to increase as the viscosity contrast gets larger. The interior temperature decreases significantly once the pressure dependence parameter is included. The convection cell is quite different when the viscosity is both temperature and pressure dependent rather than only temperature dependent. Compared to \(\mu =0.5\), the significance of pressure can be seen clearly for \(\mu =1.0\) in both Figs. 2 and 3.
The stream function contours where stream function \(\Psi (x,z)\) defined as
$$\begin{aligned} u=-\Psi _z,\qquad w=\Psi _x, \end{aligned}$$
are presented in Fig. 4 for \(D=0.3\). As the streamlines represent fluid flow, the absence of a streamline confirms that fluid in that region is immobile. In other words, this immobile region represents the stagnant lid. With increasing viscosity contrast and viscous dissipation, the changes in the convection pattern are very clear. It is observed that the cold thermal boundary layer thickness increases with viscosity contrast. But for a fixed dissipation number, the cold thermal boundary thickness is reduced with the inclusion of the pressure-dependent parameter \(\mu\). Clearly, the lid thickness decreases as the pressure dependence parameter is increased at a fixed viscosity variation. However, the lid thickness increases when viscosity variation is increased at a fixed pressure dependence parameter \(\mu\) and dissipation number D. The Tables 4 and 5 clearly indicate that the heat transfer rate and the root mean square velocity decrease, and Figs. 2, 3 and 4 show that the immobile lid thickness increases as the viscosity contrast at a fixed pressure dependent parameter is increased. The decrease in Nu and \(V_{{\mathrm{rms}}}\) values, as well as the increase in the thickness of the cold thermal boundary layer, imply that the convection becomes significantly weaker.
Stream function contours of a convection at different viscosity variations and at different pressure numbers for a fixed viscous dissipation number \(D=0.3\) with \(\theta _0 = 0.1\) and \({\mathrm{Ra}} = 10^7\)
Isothermal contours of a temperature dependent viscosity convection at different viscosity variations and viscous dissipation number with \(\theta _0 = 0.1\) and \({\mathrm{Ra}} = 10^7\)
a Isothermal contour and b distribution of \(\log _{10}\eta\) for \(\mu =1.0\) at \(\Delta \eta =10^{30}\) and viscous dissipation \(D=0.3\) with \(\theta _0 = 0.1\) and \({\mathrm{Ra}} = 10^7\)
Horizontally averaged temperature vs depth profiles at viscosity contrasts \(\Delta \eta =10^{15}\) and \(\Delta \eta =10^{30}\) for convection with \(\mu =0.0\), \(\mu =0.5\) and \(\mu =1.0\) at \(\theta _0 = 0.1\) and \({\mathrm{Ra}} = 10^7\) with viscous dissipation \(D=0.3\) and \(D=0.6\)
A visualization of the isothermal contours in Fig. 5 shows that the hot thermal boundary layer is very thin compared to the cold thermal boundary layer. This figure presents the isothermal contours of a convection cell with temperature-dependent viscosity at different viscosity contrasts (i.e. \(\Delta \eta =10^{15}\) and \(\Delta \eta =10^{30}\)) for viscous dissipation numbers \(D=0.3\) and \(D=0.5\). Although the overall convection pattern (i.e., the isothermal contours) does not change significantly, the contours are not identical; they are clearly affected by the different viscous dissipation numbers at different viscosity contrasts.
Isothermal contours (Fig. 6a) and viscosity distribution (Fig. 6b) for \(\mu =1.0\) at \(\Delta \eta =10^{30}\) and viscous dissipation \(D=0.3\) are shown in Fig. 6. The viscosity variation from top to bottom is shown in Fig. 6b, and the resulting color ranges from the lowest value (blue) to \(10^6\) (brown). Clearly, the cut-off viscosity function simply ignores the high value of the lid viscosity and considers it as a constant there. Figure 6b shows a low viscosity region in the upper mantle and a relatively high viscosity region in the lower mantle just above the bottom boundary layer. This implies that the interior is not isoviscous.
Horizontally averaged temperature vs depth profiles for viscous dissipation of \(D=0.3\) and \(D=0.6\) are presented in Fig. 7. These figures show how the horizontally averaged temperature varies with depth at different viscous dissipation numbers and at different viscosity variations, and how it changes between temperature-dependent viscosity and temperature- and pressure-dependent viscosity. The rapid change in temperature near the cold upper boundary and the hot lower boundary explains the strong temperature gradients in those regions. The plots also indicate that the core of the mantle, i.e. the interior, is not isothermal for either the temperature-dependent viscosity case or the temperature- and pressure-dependent viscosity case. The interior of the convection cell undergoes a larger jump in temperature when the dissipation effect is stronger (\(D=0.6\)). The figures show that the interior temperature increases with the increase of viscosity contrast across the mantle layer for \(\mu = 0.0\) and \(\mu = 0.5\) at \(D = 0.3\) and \(D = 0.6\). A similar situation occurs for \(\mu = 1.0\) at \(D = 0.6\), but when \(D = 0.3\), the temperature decreases at high viscosity contrast (i.e. at \(\Delta \eta = 10^{30}\)).
The principal aim of this work has been the study of a basally heated convection model, relevant to the Earth's mantle, with a strongly temperature- and pressure-dependent viscous fluid in the presence of viscous dissipation. The classical Rayleigh–Bénard convection model was solved using a low-temperature cut-off viscosity function to avoid the stiffness of the computation. The aim was to consider viscosity that depends only on temperature as well as viscosity that depends on both temperature and pressure, and a comparison is presented through figures and tables.
According to Jarvis and Mckenzie [37], the dissipation number is between 0.25 and 0.8, whereas Leng and Zhong [19] estimate D to be 0.5 to 0.7. Ricard [38] found that its value is about 1.0 near the surface and decreases to about 0.2 near the CMB. From Table 1, \(D \approx 0.6\) has been found. Thus, the effect of various viscous dissipation numbers for mantle-like convection with \({\mathrm{Ra}} = 10^7\) is examined. The different values of the viscous dissipation number show the changes in the heat transfer rate Nu and the root mean square velocity \(V_{{\mathrm{rms}}}\). It is shown that the fluid is neither isothermal nor isoviscous in the presence of viscous dissipation, both when viscosity is temperature-dependent and when it is temperature- and pressure-dependent. The viscosity distribution at high viscosity contrast for \(\mu =1.0\) also showed that the fluid is not isoviscous.
Analysis of the results predicts that if the dissipation number is increased, the lid thickness will increase further and the convection rate will decrease notably. However, it is also clear that the inclusion of viscous dissipation does not affect the convection pattern in any drastic way. The convection becomes weaker as the viscosity contrast becomes larger and the viscous dissipation number is increased. However, the variations in Nu and \(V_{{\mathrm{rms}}}\) increase as \(\mu\) goes from 0 to 0.5, but the trend is different when \(\mu\) goes from 0.5 to 1.0. Thus, strong pressure dependence in viscosity affects the convection in a different way. For both the temperature-dependent viscosity case and the temperature- and pressure-dependent viscosity case, the horizontally averaged temperature increases with viscosity contrast in the interior, but the trend is opposite in the top boundary layer, i.e., the stagnant lid. In this study we investigated convection with high viscosity contrast because, for typical parameter values, the viscosity contrast for the Earth's mantle is estimated to be \(10^{50}\) or more. Without extreme parameter values, it is quite impossible to obtain a proper asymptotic structure of mantle convection for the Earth and other planets. Thus, it is believed that this study will have a significant impact on the study of thermal convection in the Earth's mantle and other planets where viscosity is strongly variable and the variation spans many orders of magnitude.
The data is generated through a mathematical model solved by numerical simulation.
Runcorn, S.: Mechanism of plate tectonics: mantle convection currents, plumes, gravity sliding or expansion? Tectonophysics 63, 297–307 (1980)
Bercovici, D.: Treatise on Geophysics, Mantle Dynamics, vol. 7. Elsevier, Amsterdam (2010)
Karato, S.: Rheology of the Earth's mantle: a historical review. Gondwana Res. 18, 17–45 (2010)
Rose, I.R., Korenaga, J.: Mantle rheology and the scaling of bending dissipation in plate tectonics. J. Geophys. Res. Solid Earth 116 (2011). https://doi.org/10.1029/2010JB008004
Schubert, G., Turcotte, D.L., Olson, P.: Mantle Convection in the Earth and Planets. Cambridge University Press, Cambridge (2001)
Turcotte, D.L., Oxburgh, E.R.: Finite amplitude convective cells and continental drift. J. Fluid Mech. 28, 29–42 (1967). https://doi.org/10.1017/S0022112067001880
Jarvis, G.T., Peltier, W.R.: Mantle convection as a boundary layer phenomenon. Geophys. J. Int. 68, 389–427 (1982). https://doi.org/10.1111/j.1365-246X.1982.tb04907.x
Moresi, L.N., Solomatov, V.: Numerical investigation of 2d convection with extremely large viscosity variations. Phys. Fluids 7, 2154–2162 (1995). https://doi.org/10.1063/1.868465
Solomatov, V., Moresi, L.N.: Three regimes of mantle convection with non-Newtonian viscosity and stagnant lid convection on the terrestrial planets. Geophys. Res. Lett. 24, 1907–1910 (1997). https://doi.org/10.1029/97GL01682
Christensen, U.: Convection with pressure- and temperature-dependent non-Newtonian rheology. Geophys. J. Int. 77, 343–384 (1984). https://doi.org/10.1111/j.1365-246X.1984.tb01939.x
Doin, M.P., Fleitout, L., Christensen, U.: Mantle convection and stability of depleted and undepleted continental lithosphere. J. Geophys. Res. Solid Earth 77, 2771–2787 (1997)
Dumoulin, C., Doin, M.P., Fleitout, L.: Heat transport in stagnant lid convection with temperature- and pressure-dependent Newtonian or non-Newtonian rheology. J. Geophys. Res. Solid Earth 104, 12759–12777 (1999)
Khaleque, T., Fowler, A., Howell, P., Vynnycky, M.: Numerical studies of thermal convection with temperature- and pressure-dependent viscosity at extreme viscosity contrasts. Phys. Fluids 27, 076603 (2015)
Maurice, M., Tosi, N., Samuel, H., Plesa, A.C., Hüttig, C., Breuer, D.: Onset of solid-state mantle convection and mixing during magma ocean solidification. J. Geophys. Res. Planets 122, 577–598 (2017). https://doi.org/10.1002/2016JE005250
Shahraki, M., Schmeling, H.: Plume-induced geoid anomalies from 2D axi-symmetric temperature- and pressure-dependent mantle convection models. J. Geodyn. 50–60, 193–206 (2012)
Fowler, A.C., Howell, P.D., Khaleque, T.S.: Convection of a fluid with strongly temperature and pressure dependent viscosity. Geophys. Astrophys. Fluid Dyn. 110, 130–165 (2016). https://doi.org/10.1080/03091929.2016.1146264
King, S.D., Lee, C., Van Keken, P.E., Leng, W., Zhong, S., Tan, E., Tosi, N., Kameyama, M.C.: A community benchmark for 2-D Cartesian compressible convection in the Earth's mantle. Geophys. J. Int. 180, 73–87 (2010)
Conrad, C.P., Hager, B.H.: The thermal evolution of an earth with strong subduction zones. Geophys. Res. Lett. 26, 3041–3044 (1999)
Leng, W., Zhong, S.: Constraints on viscous dissipation of plate bending from compressible mantle convection. Earth Planet Sci. Lett. 297, 154–164 (2010)
Morgan, J.P., Rüpke, L.H., White, W.M.: The current energetics of earth's interior: a gravitational energy perspective. Front. Earth Sci. 4, 46 (2016)
Balachandar, S., Yuen, D., Reuteler, D., Lauer, G.: Viscous dissipation in three-dimensional convection with temperature-dependent viscosity. Science 267, 1150–1153 (1995)
Conrad, C., Hager, B.: Effects of plate bending and fault strength at subduction zones on plate dynamics. J. Geophys. Res. 104, 17551–17571 (1999)
Ushachew, E.G., Sharma, M.K., Makinde, O.D.: Numerical study of MHD heat convection of nanofluid in an open enclosure with internal heated objects and sinusoidal heated bottom. Comput. Therm. Sci. 13(5), 1–16 (2021)
Megahed, A.M.: Williamson fluid flow due to a nonlinearly stretching sheet with viscous dissipation and thermal radiation. J. Egypt Math. Soc. 27, 12 (2019). https://doi.org/10.1186/s42787-019-0016-y
Ferdows, M., Murtaza, M.G., Shamshuddin, M.: Effect of internal heat generation on free convective power-law variable temperature past a vertical plate considering exponential variable viscosity and thermal conductivity. J. Egypt Math. Soc. 27, 56 (2019). https://doi.org/10.1186/s42787-019-0062-5
Ahmed, Z., Nadeem, S., Saleem, S., Ellahi, R.: Numerical study of unsteady flow and heat transfer CNT-based MHD nanofluid with variable viscosity over a permeable shrinking surface. Int. J. Numer. Method H 29, 4607–4623 (2019)
Fetecau, C., Vieru, D., Abbas, T., Ellahi, R.: Analytical solutions of upper convected Maxwell fluid with exponential dependence of viscosity under the influence of pressure. Mathematics 9(4), 334 (2021)
Solomatov, V.S.: Scaling of temperature and stress dependent viscosity convection. Phys. Fluids 7, 266–274 (1995). https://doi.org/10.1063/1.868624
Christensen, U.R., Yuen, D.A.: Layered convection induced by phase transitions. J. Geophys. Res. Solid Earth 90, 10291–10300 (1985)
Fowler, A.: Mathematical Geoscience, vol. 36. Springer, Berlin (2011)
Huang, J., Zhong, S., van Hunen, J.: Controls on sublithospheric small-scale convection. J. Geophys. Res. Solid Earth 108 (2003). https://doi.org/10.1029/2003JB002456
Huang, J., Zhong, S.: Sublithospheric small-scale convection and its implications for the residual topography at old ocean basins and the plate model. J. Geophys. Res. Solid Earth. 110 (2005). https://doi.org/10.1029/2004JB003153
King, S.D.: On topography and geoid from 2-d stagnant lid convection calculations. Geochem. Geophys. Geosyst. 10 (2009) https://doi.org/10.1029/2008GC002250
Zimmerman, W.B.: Multiphysics Modeling with Finite Element Methods. World Scientific Publishing Company, Singapore (2006)
Blankenbach, B., Busse, F., Christensen, U., Cserepes, L., Gunkel, D., Hansen, U., Harder, H., Jarvis, G., Koch, M., Marquart, G., et al.: A benchmark comparison for mantle convection codes. Geophys. J. Int. 98, 23–38 (1989). https://doi.org/10.1111/j.1365-246X.1989.tb05511.x
Koglin Jr, D.E., Ghias, S.R., King, S.D., Jarvis, G.T., Lowman, J.P.: Mantle convection with reversing mobile plates: a benchmark study. Geochem. Geophys. Geosyst. 6 (2005). https://doi.org/10.1029/2005GC000924
Jarvis, G.T., Mckenzie, D.P.: Convection in a compressible fluid with infinite Prandtl number. J. Fluid Mech. 96, 515–583 (1980)
Ricard, Y.: Physics of mantle convection. In: Bercovici, D. (ed.) Treatise on Geophysics. 7, 31–87 (2009)
The author TS Khaleque's research was partially supported by the University of Dhaka, University Grants Commission, Bangladesh.
Department of Applied Mathematics, University of Dhaka, Dhaka, 1000, Bangladesh
Sumaiya B. Islam, Suraiya A. Shefa & Tania S. Khaleque
SBI and TSK derived the mathematical model and designed the first draft of the manuscript. SBI and SAS carried out the numerical simulations. SBI and TSK provided the literature review and final drafting. This work was carried out in collaboration among all authors. All authors read and approved the final manuscript.
Correspondence to Tania S. Khaleque.
Islam, S.B., Shefa, S.A. & Khaleque, T.S. Mathematical modelling of mantle convection at a high Rayleigh number with variable viscosity and viscous dissipation. J Egypt Math Soc 30, 5 (2022). https://doi.org/10.1186/s42787-022-00139-w
Received: 17 June 2021
Variable viscosity
Viscous dissipation
Rayleigh–Bénard convection
Viscosity variation
76D05
How does the Simplex method handle test ratios with zeros?
I've been running into an issue choosing a pivot when there are constraints with an RHS of zero. It appears that sometimes you should include zero test ratios when searching for the minimum test ratio, and sometimes you shouldn't. What is the hard and fast rule to handle zero test ratios?
For a simple demo, say you want to maximize $y$ subject to $x + y \le 1$ and $y \le x$. An x/y graph of the solution space shows a triangle with the top at $x = \frac{1}{2}, y = \frac{1}{2}$. To maximize $y$ we should end up at this point.
The first table: $$0: \begin{bmatrix} & x & y & s_1 & s_2 & = \\ s_1 & 1 & 1 & 1 & 0 & 1 \\ s_2 & -1 & 1 & 0 & 1 & 0 \\ & 0 & -1 & 0 & 0 & 0 \end{bmatrix} $$
The only possible entering variable is $y$ with a negative cost of -1. Then to select the pivot row, we find the two ratios. Row 1 $1 / 1 = 1$, row 2 $0 / 1 = 0$. Here is the problem. As I understood, in order to choose a pivot row, the ratio test must be positive. If we follow that rule, it leaves only one option of leaving variable, $s_1$. So let's try following the rule: $$1: \begin{bmatrix} & x & y & s_1 & s_2 & = \\ y & 1 & 1 & 1 & 0 & 1 \\ s_2 & -2 & 0 & -1 & 1 & -1 \\ & 1 & 0 & 1 & 0 & 1 \end{bmatrix} $$
This breaks another rule: $s_2$ shouldn't be negative, right? Second, there is no negative in the objective row, so we're done. But we're at $x = 0, y = 1$ which isn't even a solution, because $s_2$ is an illegal value. Let's try picking row 2 instead, the one with the test ratio of 0: $$1': \begin{bmatrix} & x & y & s_1 & s_2 & = \\ s_1 & 2 & 0 & 1 & -1 & 1 \\ y & -1 & 1 & 0 & 1 & 0 \\ & -1 & 0 & 0 & 1 & 0 \end{bmatrix} $$
Now $x$ must enter. We are supposed to only pivot on positive test ratios. We didn't follow that rule last time, but we'll arbitrarily follow it now and have $s_1$ leave: $$2: \begin{bmatrix} & x & y & s_1 & s_2 & = \\ x & 1 & 0 & \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} \\ y & 0 & 1 & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ & 0 & 0 & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \end{bmatrix} $$
And this is the answer. Out of curiosity, since we arbitrarily broke the 'positive' rule for pivot $1'$ and arbitrarily followed it for pivot $2$, let's try being consistent and always including zero when searching for the minimum test ratio. That means for pivot $2$ we should have chosen $y$ as the leaving variable since its test ratio ($0 / -1 = 0$) is smaller than the top row's ($1 / 2 = \frac{1}{2}$). $$2': \begin{bmatrix} & x & y & s_1 & s_2 & = \\ s_1 & 0 & 2 & 1 & 1 & 1 \\ x & 1 & -1 & 0 & -1 & 0 \\ & 0 & -1 & 0 & 0 & 0 \end{bmatrix} $$
Row 2 is always going to have a zero test ratio, apparently. So if we're going to consistently include zeros, we have to choose it again. $y$ enters, $x$ leaves. $$3: \begin{bmatrix} & x & y & s_1 & s_2 & = \\ s_1 & 2 & 0 & 1 & -1 & 1 \\ y & -1 & 1 & 0 & 1 & 0 \\ & -1 & 0 & 0 & 1 & 0 \end{bmatrix} $$ Which is identical to pivot $1'$, and we will cycle without ever arriving at the optimum.
To recap: how does one know exactly when and when not to include zeros when searching for the minimum test ratio, since excluding all zeros leads to invalid pivots and including all zeros leads to cycling? Or did I somehow start with a matrix that isn't in standard form?
linear-programming simplex
jnm2
Indebted to zifyoip for clearing up a fundamental misunderstanding for me:
The rule is not that the test ratio must be positive, but that the pivot entry must be positive. You can't pivot on a nonpositive entry of the tableau.
There are several ways to prevent cycling. One is to break ties in the test ratios randomly—this will eventually (almost surely) break cycles. Another is to randomly perturb the entries of the initial tableau slightly (add or subtract small random quantities from all of them), or to add symbolic powers of ε to the entries (ε acts as an infinitesimal: ε > 0, but ε < x for all positive real numbers x; and ε ≫ ε2 ≫ ε3 ≫ ...; this requires some symbolic manipulation to keep track of powers of ε through the pivots). There are also deterministic rules for avoiding cycling, such as Bland's rule.
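To make this concrete, here is a small self-contained sketch (not part of the original post) of a tableau simplex that applies exactly these two rules — pivot only on a strictly positive entry, and break ties by smallest index (Bland's rule) — run on the example problem from the question.

```python
import numpy as np

# Tableau simplex with Bland's anti-cycling rule, applied to the question's
# example: maximize y subject to x + y <= 1 and -x + y <= 0, with x, y >= 0.
# Key points: a pivot ENTRY must be strictly positive (zero test ratios are
# allowed), and ties are broken by the smallest variable index.

def simplex_bland(T, basis):
    """T: tableau with the objective (reduced-cost) row last and the RHS in the
    last column. basis: column index of the basic variable of each constraint row."""
    m = T.shape[0] - 1                                  # number of constraint rows
    while True:
        cost = T[-1, :-1]
        entering = next((j for j in range(len(cost)) if cost[j] < -1e-12), None)
        if entering is None:                            # no negative reduced cost: optimal
            return T, basis
        # Ratio test over rows whose pivot entry is strictly positive.
        candidates = [(T[i, -1] / T[i, entering], basis[i], i)
                      for i in range(m) if T[i, entering] > 1e-12]
        if not candidates:
            raise RuntimeError("problem is unbounded")
        _, _, row = min(candidates)                     # min ratio, ties by min index
        T[row] /= T[row, entering]                      # pivot
        for i in range(T.shape[0]):
            if i != row:
                T[i] -= T[i, entering] * T[row]
        basis[row] = entering

names = ["x", "y", "s1", "s2"]
T = np.array([[ 1.0, 1.0, 1.0, 0.0, 1.0],    #  x + y + s1      = 1
              [-1.0, 1.0, 0.0, 1.0, 0.0],    # -x + y      + s2 = 0
              [ 0.0,-1.0, 0.0, 0.0, 0.0]])   # objective row (-c for a max problem)
T, basis = simplex_bland(T, [2, 3])          # start with s1, s2 basic
print("optimal value:", T[-1, -1])                          # 0.5
print({names[j]: T[i, -1] for i, j in enumerate(basis)})    # {'x': 0.5, 'y': 0.5}
```

The degenerate pivot on the zero-ratio row is taken first here, and the method still terminates at $x = y = \frac{1}{2}$: including zero test ratios is fine as long as the pivot entry itself is positive and ties are broken consistently.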
Time-stable SO(n) matrix synthesis algorithm
Consider an equation $S(t)b(t) = a$, where $a, b(t) \in S^{n-1}$ are given and the vector $b(t)$ is continuous, i.e. its endpoint traces a continuous curve on the unit sphere. The task is to find a continuous solution $S(t) \in SO(n)$ (numerically). Consider a time net $t_0 < ... < t_n$.
In the 2D case we can compute the angle of $b(t)$ on the unit circle, and then $S(t)$ will be the rotation matrix by this angle (inverted), corrected by the angle of $a$.
In the 3D case we can use the Euler theorem and correct the matrix $S(t_k)$ at each step of our iterative method by multiplying it by the rotation matrix $S'$ that sends $b(t_k)$ to $b(t_{k+1})$.
In the general case, assume that $a=(1,0,...,0)^{T}$. At the first step I tried to construct an orthonormal basis $\{b(t_0),e_2,...,e_n\}$ using the Gram-Schmidt process and to let $S(t_0) = [b(t_0),e_2,...,e_n]^{-1}$. Then at each step of the method I constructed a new basis using the Gram-Schmidt process with the previous basis as the initial condition. It works, but it is rather time-consuming.
My question is how to obtain such $S(t)$ in the general case more quickly?
linear-algebra matrix
Appliqué
$\begingroup$ Could you describe what are the spaces $S^{n-1}$ and $SO(n)$? $\endgroup$ – Paul♦ Mar 21 '12 at 14:00
$\begingroup$ $SO(n)$ is the special orthogonal group of dimension $n$ (I'm assuming over the reals), which is the set of matrices $Q \in \mathbb{R}^{n \times n}$ such that $QQ^{T} = Q^{T}Q = I$, and $\det(Q) = 1$. $S^{n-1}$ is the $(n-1)$-dimensional unit sphere. $\endgroup$ – Geoff Oxberry Mar 21 '12 at 14:32
$\begingroup$ I'm curious about the description of method 3. When you say that you construct $S(t_{k+1})$ from $S(t_{k})$, are you applying Gram-Schmidt to $[b(t_{k+1}), e_{2}, \ldots, e_{n}]^{-1}$ again? What do you mean by "[using] the previous basis as initial condition"? $\endgroup$ – Geoff Oxberry Mar 21 '12 at 14:36
$\begingroup$ What's preventing you from using the infinitesimal generator of rotation in the general case? $\endgroup$ – Deathbreath Mar 21 '12 at 15:23
$\begingroup$ @GeoffOxberry, I mean that I apply the Gram-Schmidt process to $b(t_{k+1}), e_2(t_k),...,e_n(t_{k})$ and I receive the basis $b(t_{k+1}), e_2(t_{k+1}),...,e_n(t_{k+1})$. $\endgroup$ – Appliqué Mar 21 '12 at 15:53
Let $A(t_{k}) = [b(t_{k}), e_{2}, \ldots, e_{n}]$. If $b(t_{k+1}) = b(t_{k})$, nothing changes, so without loss of generality, assume that $b(t_{k+1}) - b(t_{k}) \neq 0$, and also assume that $A(t_{k})$ and $A(t_{k+1})$ are both invertible.
You can get $A(t_{k+1})^{-1}$ from $A(t_{k})^{-1}$ by noting that $A(t_{k+1})$ is a rank-one update of $A(t_{k})$, and using the Sherman-Morrison formula, with $u = b(t_{k+1}) - b(t_{k})$, and $v = e_{1}$; this rank-one update would be cheaper than naïvely inverting $A(t_{k+1})$ using an LU decomposition.
Then, since $A(t_{k+1})^{-1}$ is a rank-one update of $A(t_{k})^{-1}$, you can update your QR factorization (at least, that's how I would do Gram-Schmidt numerically) also using a rank-one updating scheme. One algorithm for accomplishing a rank-one update of a QR factorization can be found in Section 12.5.1 of the third edition of Matrix Computations by Golub and van Loan.
Both of these updates should reduce the complexity of $\mathcal{O}(n^{3})$ operations to $\mathcal{O}(n^{2})$ operations; you can even calculate $A(t_{0})^{-1}$ and its QR factorization in this fashion because $A(t_{0})$ is a rank-one update of the identity matrix.
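For illustration, here is a minimal sketch of the Sherman-Morrison step suggested above, with an illustrative random $b(t_k)$ and a nearby $b(t_{k+1})$; the dimension and vectors are arbitrary test data, and the notation follows the answer.

```python
import numpy as np

# Sherman-Morrison rank-one update of A(t_k)^{-1} into A(t_{k+1})^{-1},
# where A(t_{k+1}) = A(t_k) + u v^T with u = b(t_{k+1}) - b(t_k) and v = e_1.
rng = np.random.default_rng(0)
n = 6

def unit(v):
    return v / np.linalg.norm(v)

b_k  = unit(rng.standard_normal(n))                    # b(t_k)
b_k1 = unit(b_k + 0.05 * rng.standard_normal(n))       # b(t_{k+1}), a nearby point

# A(t_k) = [b(t_k), e_2, ..., e_n]: replace the first column of the identity.
A_k = np.eye(n)
A_k[:, 0] = b_k
A_k_inv = np.linalg.inv(A_k)                           # in practice, kept from the last step

u = b_k1 - b_k
Ainv_u = A_k_inv @ u
vT_Ainv = A_k_inv[0, :]                                # v^T A^{-1} is just the first row (v = e_1)
denom = 1.0 + vT_Ainv @ u
A_k1_inv = A_k_inv - np.outer(Ainv_u, vT_Ainv) / denom  # O(n^2) update

# Check against a direct O(n^3) inversion.
A_k1 = A_k.copy(); A_k1[:, 0] = b_k1
print(np.allclose(A_k1_inv, np.linalg.inv(A_k1)))       # True
```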
Geoff Oxberry
$\begingroup$ Hm, $S(t_{k}) = [b(t_{k}),e_1(t_{k}),...,e_n(t_k)]^{T}$. Then I have to see only for update of decomposition $[b(t_{k+1}),e_1(t_{k}),...,e_n(t_k)] = QR$ and take $S(t_{k+1}) = Q^{T}$. Is it true? $\endgroup$ – Appliqué Mar 21 '12 at 17:47
$\begingroup$ That's another way you could do it. The first vector in your basis should just be scaled in a QR decomposition, so once you find the first QR decomposition (i.e., the identity matrix), you can repeatedly rank-one update it. $\endgroup$ – Geoff Oxberry Mar 21 '12 at 18:11
$\begingroup$ So if we receive $S(t_{k+1})$ using orthoghonalisation of $[b(t_{k+1}),e_2(t_k),...,e_n(t_k)]$ by one hand and updating a QR decomposition by the other we will get two different resulting matrices? Why the last will sent $b(t_{k+1})$ to $[1,0,...,0]^{T}$ and will be close to $S(t_k)$? I think that update algorithm doesn't garantee this $\endgroup$ – Appliqué Mar 21 '12 at 18:29
$\begingroup$ I don't know that the update algorithm will guarantee that the first column of the updated QR decomposition will be a scaled version of $b(t_{k+1})$, but the first update to the QR decomposition would be correct. $\endgroup$ – Geoff Oxberry Mar 21 '12 at 18:36
$\begingroup$ I tried the updating in MATLAB; it won't guarantee that the first column of the updated QR decomposition is a scaled version of $b(t_{k+1})$. $\endgroup$ – Geoff Oxberry Mar 21 '12 at 19:04
Rewrite your equation $$ b(t)=B(t)a $$
$B(t+\Delta t)=B^\prime B(t)$ where $$B^\prime=\left(\begin{matrix} b/\|b\| & \dot b/\|\dot b\| \end{matrix}\right)\left(\begin{matrix} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end{matrix}\right)\left(\begin{matrix} b/\|b\| & \dot b/\|\dot b\| \end{matrix}\right)^\ast +\\ I-\left(\begin{matrix} b/\|b\| & \dot b/\|\dot b\| \end{matrix}\right)\left(\begin{matrix} b/\|b\|& \dot b/\|\dot b\| \end{matrix}\right)^\ast$$ and $\theta$ such that $B^\prime$ takes $b(t)$ to $b(t+\Delta t)$. Now $S(t)=B(t)^\ast$.
B was constructed such that your coordinate frame is defined by $b$ and $\dot b$. Since all other directions are unaffected, we project onto the complement of $\mbox{span }\{b,\dot b\}$ (second term on the right) and then rotate within $b$ and $\dot b$ (first term on right).
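As a sketch of this construction, the snippet below builds the rotation that maps $b(t_k)$ to $b(t_{k+1})$ while fixing the orthogonal complement of their span (it rotates in the plane of the two endpoints rather than of $b$ and $\dot b$); the dimension and test vectors are illustrative. One would then accumulate $S(t_{k+1}) = S(t_k)R^T$, which preserves $S(t)b(t) = a$.

```python
import numpy as np

def plane_rotation(p, q, eps=1e-12):
    """Rotation matrix R in SO(n) with R @ p = q, acting only in span{p, q}
    (p and q assumed to be unit vectors)."""
    c = float(np.clip(p @ q, -1.0, 1.0))        # cos(theta)
    w = q - c * p                                # component of q orthogonal to p
    s = np.linalg.norm(w)                        # sin(theta)
    if s < eps:                                  # p, q parallel (the antiparallel
        return np.eye(len(p))                    # case would need special handling)
    w /= s
    # Identity outside span{p, w}; a 2x2 rotation by theta inside it:
    # R = I + (c-1)(p p^T + w w^T) + s (w p^T - p w^T).
    return (np.eye(len(p))
            + (c - 1.0) * (np.outer(p, p) + np.outer(w, w))
            + s * (np.outer(w, p) - np.outer(p, w)))

# Tiny check in n = 5 dimensions with illustrative vectors.
rng = np.random.default_rng(1)
p = rng.standard_normal(5); p /= np.linalg.norm(p)
q = rng.standard_normal(5); q /= np.linalg.norm(q)
R = plane_rotation(p, q)
print(np.allclose(R @ p, q),
      np.allclose(R @ R.T, np.eye(5)),
      np.isclose(np.linalg.det(R), 1.0))        # True True True
```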
You may also want to consider that your problem can be understood as
$$ S\dot b +\dot S b = 0$$ such that $$ S\in SO(n).$$
Deathbreath
$\begingroup$ It's important to note that the differential condition is necessary, but not sufficient. $\endgroup$ – Geoff Oxberry Mar 21 '12 at 19:41
$\begingroup$ @GeoffOxberry: Yes, I've omitted the initial conditions as well, and of course any rotation on the complement of span $\{b,\dot b\}$ would still be a solution. Obviously $B^\prime$ is not unqiue. It would work without the projector as well. I just prefer having invariant spaces over null spaces :-) $\endgroup$ – Deathbreath Mar 21 '12 at 19:55
communications physics
Computing cliques and cavities in networks
Dinghua Shi1,
Zhifeng Chen2,
Xiang Sun2,
Qinghua Chen2,
Chuang Ma ORCID: orcid.org/0000-0002-5141-07343,
Yang Lou4 &
Guanrong Chen ORCID: orcid.org/0000-0003-1381-74185
Communications Physics volume 4, Article number: 249 (2021) Cite this article
Complex networks contain complete subgraphs such as nodes, edges, triangles, etc., referred to as simplices and cliques of different orders. Notably, cavities consisting of higher-order cliques play an important role in brain functions. Since searching for maximum cliques is an NP-complete problem, we use k-core decomposition to determine the computability of a given network. For a computable network, we design a search method with an implementable algorithm for finding cliques of different orders, obtaining also the Euler characteristic number. Then, we compute the Betti numbers by using the ranks of boundary matrices of adjacent cliques. Furthermore, we design an optimized algorithm for finding cavities of different orders. Finally, we apply the algorithm to the neuronal network of C. elegans with data from one typical dataset, and find all of its cliques and some cavities of different orders, providing a basis for further mathematical analysis and computation of its structure and function.
A network has three basic sub-structures: chain, star, and cycle. Chains are closely related to the concept of average distance, whereas a small average distance and a large clustering coefficient together signify a small-world network1, where the clustering coefficient is determined by the number of triangles, which are special cycles. Stars follow some heterogeneous degree distribution, under which the growth of node numbers and a preferential attachment mechanism together distinguish a scale-free network2 from a random network3. Cycles not only contain triangles but also have deeper and more complicated connotations. The cycle structure brings redundant paths into the network connectivity, creating feedback effects and higher-order interactions in network dynamics.
In ref. 4, we introduced the notion of totally homogeneous networks in studying optimal network synchronization, which are networks with the same node degree, same girth (length of the smallest cycle passing the node in concern), and same path-sum (sum of all distances from other nodes to the node). We showed4 that totally homogeneous networks are the ones that most easily self-synchronize among all networks of the same size. Recently, we identified5 the special roles of two invariants of the network topology expressed by the numbers of cliques and cavities, the Euler characteristic number (alternating sum of the numbers of cliques of different orders), and the Betti number (number of cavities of different orders). In fact, higher-order cliques and smallest cavities are basic components of totally homogeneous networks.
More precisely, higher-order cycles of a connected undirected network include cliques and cavities. A clique is a fully connected sub-network, e.g., a node is a 0-clique, an edge is a 1-clique, a triangle is a 2-clique, and a complete graph of four nodes is a 3-clique, and so on, where the numbers indicate the orders. Also, a triangle is the smallest first-order cycle, which consists of three edges. The number of 1-cycles with different lengths (number of edges) is huge in a large-scale network. Similarly, a 3-clique is the smallest 2-cycle, and a 2-cycle contains some triangles. A chain is a broken cycle, where an edge is the shortest 1-chain, a triangle and two triangles adjacent by one edge are 2-chains, and so on, whereas a cycle is a closed chain. In the same manner, all these concepts can be extended to higher-order ones.
It is more challenging to study network cycles than node degrees; therefore, new mathematical concepts and tools are needed6,7, such as simplicial complexes, boundary operators, cyclic operations, and equivalent cycles, to classify various cycles and select their representatives for effective analysis and computation.
In higher-order topologies, the addition of two k-cycles, c and d, is defined5 by set operations as c + d = (c ∪ d) − (c ∩ d). They are said to be equivalent, if c + d is the boundary of a (k + 1)-chain5. All equivalent cycles constitute an equivalent class. The cavity is a cycle with the shortest length in each independent cycle-equivalent class (see ref. 5 for more details).
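As a small illustration (not from the paper), 1-cycles can be represented as sets of edges, and the addition above is then exactly the symmetric difference of the two edge sets:

```python
# Cycle addition c + d = (c ∪ d) − (c ∩ d), with 1-cycles represented as sets
# of sorted edge tuples; this is the symmetric difference of the edge sets.

def edge_set(nodes):
    """Edges of the closed cycle visiting `nodes` in order."""
    return {tuple(sorted((nodes[i], nodes[(i + 1) % len(nodes)])))
            for i in range(len(nodes))}

# Two triangles sharing the edge (2, 3).
c = edge_set([1, 2, 3])
d = edge_set([2, 3, 4])

# Their sum is the quadrilateral 1-2-4-3: the shared edge cancels.
print(sorted(c ^ d))   # [(1, 2), (1, 3), (2, 4), (3, 4)]
```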
Cycles, cliques, and cavities are found to play important roles in complex systems such as biological systems especially the brain. In the studies of the brain, computational neuroscience has a special focus on cyclic structures in neuronal networks. It was found8 that cycles generate neural loops in the brain, which not only can transmit information all over the brain but also have an important feedback function. It was suggested8 that such cyclic structures provide a foundation for the brain functions of memories and controls. Unlike cliques, which are placed at some particular locations (e.g., cerebral cortexes), cavities are distributed almost everywhere in the brain, connecting many different regions together. It is pointed out9 that in both biological and artificial neural networks, one can find huge numbers of cliques and cavities which, despite being large and complex, have not been explored before. Of particular importance is that cavities play an indispensable role in brain functioning. All these findings indicate an encouraging and promising direction in brain science research. However, it remains unclear as to how all such neuronal cliques and cavities are connected and organized. This calls for a further endeavor into understanding the relationship between the complexity of higher-order topologies and the complexity of intrinsic neural functions in the brain. To do so, however, it is necessary to find most, if not all, cliques and especially cavities of different orders from the neuronal network.
Artificial intelligence, on the other hand, relies on artificial neural networks inspired by the brain neuronal network10, including recurrent neural networks, convolutional neural networks, Hopfield neural networks, etc. Now, given the recent discovery of higher-order cliques and cavities in the brain, the question is how to further develop artificial intelligence to an even higher level by utilizing all the new knowledge about brain topology. Notably, an effective neuronal network construction was recently proposed by a research team from the Massachusetts Institute of Technology, inspired by the real structure of the neuronal network of the Caenorhabditis elegans11.
It is important to understand how the brain stores information, learns new knowledge, and reacts to external stimuli. It is also essential to understand how the brain adaptively creates topological connections and computing patterns. All these tasks depend on in-depth studies of the brain neuronal network. Recently, the Brain Initiative project of USA (https://en.wikipedia.org/wiki/BRAIN_Initiative, https://braininitiative.nih.gov/), the Human Brain project of EU (https://en.wikipedia.org/wiki/Human_Brain_Project, https://www.humanbrainproject.eu/en/) and the China Brain project (https://en.wikipedia.org/wiki/China_Brain_Project) have been established to take such big challenges.
Many renowned mathematicians had contributed a lot of fundamental work to related subjects, such as Euler characteristic number, Betti number, groups of Abel and Galois, higher-order Laplacian matrices, as well as Euler-Poincaré formula and homology. This also demonstrates the importance of studying cliques and cavities for the further development of network science. In addition, the advance from pairwise interactions to higher-order interactions in complex system dynamics requires the knowledge of higher-order cliques and cavities of networks12. The numbers of zero eigenvalues of higher-order Hodge-Laplacian matrices are equal to the corresponding Betti numbers, while their associate eigenvectors are closely related to higher-order cavities13.
Motivated by all the above observations, this paper studies the fundamental issue of computability of a complex network, based on which the investigation continues to find higher-order cliques and their Euler characteristic number, as well as all the Betti numbers and higher-order cavities. The proposed approach starts from k-core decomposition14 and, through finding cliques of different orders, performs a sequence of computations on the ranks of the corresponding boundary matrices, to obtain all the Betti numbers. To that end, an optimized algorithm is developed for finding higher-order cavities. Finally, the optimized algorithm is applied to computing the neuronal network of C. elegans from a typical data set, identifying its cliques and cavities of different orders.
For computable undirected networks, the proposed approach is able to find all higher-order cliques, thereby obtaining the Euler characteristic number and all Betti numbers as well as some cavities of different orders. These can provide global information for understanding and analyzing the relationships between topologies and functions of various complex networks such as the brain neuronal network.
Computable networks
For undirected networks, the cliques and their numbers of different orders in a network are both important sub-networks and parameters for analysis and computation. A simple example is shown in Fig. 1, from which it is easy to find all cliques and their numbers of different orders.
Fig. 1: A sample network.
This network has 14 nodes, 26 edges, 13 triangles, and 1 tetrahedron, where the numbers are node indexes (see ref. 5).
For a given general large-scale complex network, however, finding all cliques of different orders is never an easy task. In fact, even just searching for a maximum clique (namely, a clique with the largest possible number of nodes) in a large network is a computationally NP-complete problem15. It is well known that when finding all cliques of a large-scale undirected network, especially when the network is dense, the number of cliques is huge and increases exponentially as the network size becomes larger. For example, in the real USair, Jazz, and Yeast networks16, if the number of cliques is limited to no more than \(10^7\) so as to be computable on a personal computer, the orders of the cliques are found to be only 9, 6, and 4, respectively, as summarized in Table 1. For these three real-world examples, it becomes impossible to compute the numbers of their higher-order cliques.
Table 1 Three real networks: their sizes (number of nodes |N| and number of edges |E|), maximum coreness kmax, the maximum number of cliques cmax, number of k-cliques mk, and the maximum order k of cliques with their numbers mk < \(10^7\).
It is noticed that, even for large and dense networks, k-core decomposition can be used to efficiently determine their cells (layers), where the kth cell has all nodes with degrees at least k, and the kernel of the network has the largest coreness value and is very dense. Therefore, the largest coreness value kmax can be used to estimate the order of a maximum clique. For this reason, k-core decomposition is used to determine whether a given network is computable or not, subject to the available limited computing resources. If the computing resources allow the number of cliques of the first several lowest orders to be no more than \(10^7\), then the maximum coreness value should not be larger than 30. In this paper, this coreness value threshold is set to kmax = 25, as detailed in Supplementary Note 1.
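As an illustration of this computability screen, the following sketch uses NetworkX to compute the coreness of every node and the maximum coreness kmax of a network read from an edge list; the file name is an assumption, and the threshold is the kmax = 25 value used in this paper.

```python
import networkx as nx

# Read an undirected network from an edge list (illustrative file name).
G = nx.read_edgelist("network_edges.txt", nodetype=int)
G.remove_edges_from(nx.selfloop_edges(G))      # core_number requires no self-loops

coreness = nx.core_number(G)                   # node -> coreness (k-core index)
kmax = max(coreness.values())

print(f"|N| = {G.number_of_nodes()}, |E| = {G.number_of_edges()}, kmax = {kmax}")
if kmax <= 25:
    print("within the threshold: full clique enumeration is considered feasible")
else:
    print("kmax too large: exhaustive clique/cavity computation may be intractable")
```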
Clique-searching method
The Bron–Kerbosch algorithm17 is a popular scheme for finding all cliques of an undirected graph, whereas the Hasse-diagram algorithm9 is useful for finding all cliques of a directed network. For computable networks, we propose a method with an algorithm, named the common-neighbors scheme, which can find all cliques of different orders and the associate Euler characteristic number.
In a network, the average degree is denoted by 〈k〉 and the number of edges by |E|. The computational complexity of the proposed algorithm is estimated to be O(|E|〈k〉) for finding all 2-cliques.
For illustration, the sample network shown in Fig. 1 is used for clique searching, with a procedure in six steps, as follows:
For each node, all of its neighbors whose index numbers are larger than that of the node are listed:
Node 1 {2, 3, 4, 5}, Node 2 {3, 4, 5}, Node 3 {4, 6, 8}, Node 4 {Ø}, Node 5 {9}, Node 6 {7, 14}, Node 7 {8}, Node 8 {Ø}, Node 9 {10, 11, 12, 13}, Node 10 {11, 13, 14}, Node 11 {12, 14}, Node 12 {13, 14}, Node 13 {14}, Node 14 {Ø}.
Then, the number of nodes in 0-clique is computed, yielding m0 = 14.
From the above list, edges are generated in increasing order of node indexes:
(1, 2), (1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5), (3, 4), (3, 6), (3, 8), (5, 9), (6, 7), (6, 14), (7, 8), (9, 10), (9, 11), (9, 12), (9, 13), (10, 11), (10, 13), (10, 14), (11, 12), (11, 14), (12, 13), (12, 14), (13, 14).
Then, the number of edges in 1-clique is computed, yielding: m1 = 26.
For every edge, if its two end-nodes have a common neighbor whose index number is bigger than the index numbers of the two end-nodes, then all such neighbors are listed. For example:
edge (1, 2) has common neighbors {3, 4, 5}, edge (1, 3) has {4}, edge (2, 3) has {4}, edge (9, 10) has {11, 13}, edge (9, 11) has {12}, edge (9, 12) has {13}, edge (10, 11) has {14}, edge (10, 13) has {14}, edge (11, 12) has {14}, edge (12, 13) has {14}.
However, edge (1, 4) and edges (1, 5), (3, 4), (3, 6), (3, 8), (5, 9), (6, 7), (6, 14), (7, 8), (9, 13), (10, 14), (11, 14), (12, 14), (13, 14) do not have any common neighbor. Thus, the following triangles are obtained:
(1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 3, 4), (2, 3, 4), (9, 10, 11), (9, 10, 13), (9, 11, 12), (9, 12, 13), (10, 11, 14), (10, 13, 14), (11, 12, 14), (12, 13, 14).
Then, the number of triangles in 2-cliques is computed, yielding: m2 = 13.
For each triangle, if its three nodes have a common neighbor whose index number is bigger than the index numbers of three nodes, then all such neighbors are listed.
Here, only the triangle (1, 2, 3) has a common neighbor, {4}, yielding 1 tetrahedron, (1, 2, 3, 4).
Then, the number of tetrahedrons in 3-cliques is computed, yielding: m3 = 1.
The above procedure is continued until it does not yield any more higher-order clique.
Finally, the Euler characteristic number is computed, as follows5:
$$\chi = m_0 - m_1 + m_2 - m_3 = 14 - 26 + 13 - 1 = 0.$$
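The six steps above translate directly into a short program. The following sketch implements the common-neighbors scheme on the sample network of Fig. 1 (edge list taken from step 2) and reproduces m0 = 14, m1 = 26, m2 = 13, m3 = 1 and χ = 0.

```python
from collections import defaultdict

# Edge list of the sample network in Fig. 1 (step 2 above).
edges = [(1,2),(1,3),(1,4),(1,5),(2,3),(2,4),(2,5),(3,4),(3,6),(3,8),(5,9),
         (6,7),(6,14),(7,8),(9,10),(9,11),(9,12),(9,13),(10,11),(10,13),
         (10,14),(11,12),(11,14),(12,13),(12,14),(13,14)]

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v); adj[v].add(u)
nodes = sorted(adj)

# cliques[k] holds all k-cliques as sorted vertex tuples.
cliques = [[(v,) for v in nodes]]
while cliques[-1]:
    nxt = []
    for c in cliques[-1]:
        # Common neighbors of all vertices of c, restricted to indices > max(c),
        # so that every clique is generated exactly once.
        common = set.intersection(*(adj[v] for v in c))
        nxt.extend(c + (w,) for w in sorted(common) if w > c[-1])
    cliques.append(nxt)
cliques.pop()                                    # drop the final empty level

m = [len(level) for level in cliques]
euler = sum((-1) ** k * mk for k, mk in enumerate(m))
print("clique counts m_k:", m)                   # [14, 26, 13, 1]
print("Euler characteristic:", euler)            # 0
```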
Computing Betti numbers
The above-obtained cliques of various orders can be used to generate boundary matrices Bk, k = 1, 2, …. Here, B1 is the node-edge matrix, in which an element is 1 if the node is on the corresponding edge; otherwise, it is 0. Similarly, B2 is the edge-face matrix, in which an element is 1 if the edge is on the corresponding face (triangle); otherwise, it is 0. All higher-order boundary matrices Bk are generated in the same way. It is straightforward to compute the rank rk of matrices Bk for k = 1, 2, …, using linear row-column operations in the binary field, following the binary operation rules, namely 1 + 1 = 0, 1 + 0 = 1, 0 + 1 = 1, 0 + 0 = 0. Then, the Betti numbers5 can be obtained by using formulas βk = mk−rk−rk+1, for k = 1, 2, ….
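As a self-contained sketch of this boundary-matrix route, the snippet below builds Bk from clique lists, computes the ranks rk by Gaussian elimination over the binary field, and applies βk = mk − rk − rk+1, using the left-hand sub-network of Fig. 1 (nodes 1–8) as input; it recovers r1 = 7 and r2 = 4 as stated below, and the Betti numbers then follow from the formula.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over the binary field F_2 (Gauss-Jordan elimination)."""
    A = np.array(M, dtype=np.uint8) % 2
    rows, cols = A.shape
    rank = 0
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]       # move the pivot row up
        for r in range(rows):                     # clear the column elsewhere
            if r != rank and A[r, col]:
                A[r] ^= A[rank]
        rank += 1
    return rank

def boundary_matrix(lower, upper):
    """B_k: rows indexed by (k-1)-cliques, columns by k-cliques."""
    index = {c: i for i, c in enumerate(lower)}
    B = np.zeros((len(lower), len(upper)), dtype=np.uint8)
    for j, c in enumerate(upper):
        for drop in range(len(c)):                # faces obtained by dropping one vertex
            B[index[c[:drop] + c[drop + 1:]], j] = 1
    return B

# Cliques of the left-hand sub-network of Fig. 1 (nodes 1-8).
cliques = [
    [(v,) for v in range(1, 9)],
    [(1,2),(1,3),(1,4),(1,5),(2,3),(2,4),(2,5),(3,4),(3,6),(3,8),(6,7),(7,8)],
    [(1,2,3),(1,2,4),(1,2,5),(1,3,4),(2,3,4)],
    [(1,2,3,4)],
]

m = [len(level) for level in cliques]
r = [0] + [gf2_rank(boundary_matrix(cliques[k-1], cliques[k]))
           for k in range(1, len(cliques))] + [0]
betti = [m[k] - r[k] - r[k+1] for k in range(len(cliques))]
print("ranks r_k:", r[1:-1])      # [7, 4, 1]
print("Betti numbers:", betti)    # [1, 1, 0, 0]
```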
One can also calculate the numbers of zero eigenvalues of higher-order Hodge-Laplacian matrices, so as to find the Betti numbers. To do so, some algebraic topology rules are needed to form oriented cliques13.
As an example, the left-hand sub-network shown in Fig. 1 is discussed, which has Nodes 1–8. The node-edge boundary matrix B1 of rank r1 = 7 is formed as follows, where the first row is linearly dependent on the other rows:
$$B_1 = \begin{array}{c|cccccccccccc}
 & (1,2) & (1,3) & (1,4) & (1,5) & (2,3) & (2,4) & (2,5) & (3,4) & (3,6) & (3,8) & (6,7) & (7,8)\\
\hline
1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
2 & {\bf 1} & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\
3 & 0 & {\bf 1} & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 0\\
4 & 0 & 0 & {\bf 1} & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0\\
5 & 0 & 0 & 0 & {\bf 1} & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & {\bf 1} & 0 & 1 & 0\\
7 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & {\bf 1} & 1\\
8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & {\bf 1} & 0 & 1
\end{array}$$
Moreover, its edge-face boundary matrix of rank r2 = 4 is obtained as follows, where the rightmost column (column (2, 3, 4)) is linearly dependent on the other columns:
$$B_2 = \begin{array}{c|ccccc}
 & (1,2,3) & (1,2,4) & (1,2,5) & (1,3,4) & (2,3,4)\\
\hline
(1,2) & 1 & 1 & 1 & 0 & 0\\
(1,3) & 1 & 0 & 0 & 1 & 0\\
(1,4) & 0 & 1 & 0 & 1 & 0\\
(1,5) & 0 & 0 & 1 & 0 & 0\\
(2,3) & {\bf 1} & 0 & 0 & 0 & 1\\
(2,4) & 0 & {\bf 1} & 0 & 0 & 1\\
(2,5) & 0 & 0 & {\bf 1} & 0 & 0\\
(3,4) & 0 & 0 & 0 & {\bf 1} & 1
\end{array}$$
Table 2 summarizes all calculation results of the network shown in Fig. 1, in which the Euler characteristic number and Betti numbers satisfy the Euler-Poincaré formula5
$$\chi = \beta _0 - \beta _1 + \beta _2 - \beta _3 = 1 - 2 + 1 - 0 = 0$$
Table 2 Computational results of the network shown in Fig. 1.
Cavity-searching method
The concept of cavity comes from the homology group in algebraic topology. As a large-scale network has many 1-cycles (for instance, the network shown in Fig. 1 has hundreds), to facilitate investigation they are classified into equivalent classes. In a network, each 1-cavity belongs to a linearly independent cycle-equivalent class5, with the total number equal to the Betti number β1. It is relatively easy to understand a 1-cavity, which has boundary edges consisting of 1-cliques. Imagination is needed to understand higher-order cavities, which have boundaries consisting of higher-order cliques of the same order. In the literature, only one 2-cavity, consisting of eight triangles, has been found and reported8. In this paper, we found all possible smallest cavities and list them up to order 11 in Supplementary Note 2.
Since a cavity belongs to a cycle-equivalent class, only one representative from the class with the shortest length (namely, the smallest number of cliques) is chosen for further discussion. To find the smallest one, however, optimization is needed.
Finding cavity-generating cliques
The procedure is as follows.
First, a maximum linearly independent group of column vectors is selected from the boundary matrix Bk, used as the minimum kth-order spanning tree, which consists of rk k-cliques, where rk is the rank value of the boundary matrix Bk discussed above. Then, linear row-column binary operations are performed to reduce it to be in the simplest form. In every row of the resultant matrix, the column index of the first nonzero element is used as the index of the k-clique in the spanning tree. As an example, for the sub-network with Nodes 1–8 on the left-hand side of the network shown in Fig. 1, the bold-faced 1's in matrix B1 correspond to the columns indicated by (1, 2), (1, 3), (1, 4), (1, 5), (3, 6), (3, 8), (6, 7) shown at the top of the matrix, which constitutes a spanning tree in the sub-network with Nodes 1–8 in Fig. 1. It should be noted that the minimum kth-order spanning trees are not unique in general.
Next, the maximum group of linearly independent column vectors from boundary matrix Bk+1 is found, obtaining rk+1 (k + 1)-cliques as a group of linearly independent cliques. From this group, one continues to search for a k-clique that belongs to the boundary of the (k+1)-clique but does not belong to the kth-order spanning tree. In other words, the rk+1 k-cliques should not be a k-clique in the minimum spanning tree. If this cannot be found, then one can choose another maximum group of linearly independent column vectors from boundary matrix Bk+1 and search again. In this way, rk+1 (k + 1)-cliques are found. As an example, for the sub-network with Nodes 1–8 in Fig. 1, the bold-faced 1's in the boundary matrix B2 correspond to the edges indicated by (2, 3), (2, 4), (2, 5), (3, 4). These are edges on the left-hand side of the boundary matrix Bk+1, which are different from the cliques in the spanning tree.
Then, the formula for the Betti numbers, βk = mk − rk − rk+1, is used for computation; this gives the number of linearly independent k-cavities. The task now is to find the remaining k-cliques that are neither in the kth-order minimum spanning tree nor on the boundaries of linearly independent (k + 1)-cliques. These are called cavity-generating k-cliques. In the same sub-network example, there is only one such clique: (7, 8). On the minimum spanning tree, after including all linearly independent boundaries, adding any cavity-generating k-clique will create a linearly independent k-cavity; in this example, the created one is the 1-cavity (3, 6, 7, 8).
Searching cavities by 0–1 programming
Every cavity-generating k-clique corresponds to at least one k-cavity. However, a cavity-generating k-clique may correspond to several cavities of different lengths, where the length is equal to the number of cliques. As a cavity is a linearly independent cycle with the smallest number of cliques, the task of searching for a cavity can be reformulated as a 0–1 programming problem.
As shown above, there are mk k-cliques, Bk is the boundary matrix between a (k − 1)-clique and a k-clique, Bk+1 is the boundary matrix between a k-clique and a (k + 1)-clique, and a k-cavity consists of some k-cliques. In the following, the vector space formed by k-cliques is denoted by Ck. A k-cavity can be expressed as \(\boldsymbol{x} = (x_1, x_2, \ldots, x_{m_k}) \in C_k\), in which each component xi takes value 1 or 0, where 1 represents a k-clique with index i in the cavity, whereas 0 means no such clique there. Now, a cavity-generating k-clique with index v is taken from all k-cliques and a vector e = (1, 1, …, 1)T is introduced. Then, the problem of searching for a k-cavity becomes the following optimization problem, which is to be solved for a nonzero solution:
$$\begin{aligned}
&\mathop{\min}\limits_{\boldsymbol{x} \in C_k} \; f\left( \boldsymbol{x} \right) = \boldsymbol{x}\boldsymbol{e}\\
&{\mathrm{s.t.}}\quad \left( {\mathrm{i}} \right)\; x_v = 1\\
&\qquad\;\, \left( {\mathrm{ii}} \right)\; B_k\boldsymbol{x}^T = 0\;\left( {\mathrm{mod}}\,2 \right)\\
&\qquad\;\, \left( {\mathrm{iii}} \right)\; {\mathrm{rank}}(\boldsymbol{x}^T,\,B_{k + 1})_{F_2} = 1 + r_{k + 1}.
\end{aligned}\tag{1}$$
Here, the first constraint means that the cavity comes from the cavity-generating k-clique with index v. The second constraint implies that the cavity is a k-cycle, namely the boundaries of k-cliques that form the cavity should appear in pairs. The third constraint shows that the k-cavity to be found will not be a linear representation of the (k + 1)-cliques, where F2 indicates that the operations are performed in the binary field, which can avoid generating false cavities.
To ensure that the βk cavities found are linearly independent, the following 0–1 programming is performed, where x(l) is the lth k-cavity: (i) \(x_v^{(l)} = 1\); (ii) \(B_k\boldsymbol{x}^{(l)T} = 0\ ({\mathrm{mod}}\ 2)\); (iii) \({\mathrm{rank}}(\boldsymbol{x}^{(1)T}, \ldots, \boldsymbol{x}^{(l)T}, B_{k+1})_{F_2} = l + r_{k+1}\), for l = 1, 2, …, βk.
It was found that the sample network in Fig. 1 has two 1-cavities, where two cavity-generating 1-cliques are x14 = 1 corresponding to edge (7, 8) and x11 = 1 corresponding to edge (5, 9). This optimization is detailed as follows:
$$\begin{aligned}
&\mathop{\min}\limits_{\boldsymbol{x} \in C_1} f\left( \boldsymbol{x} \right) = \boldsymbol{x}\boldsymbol{e}\\
&{\mathrm{s.t.}}\;\left( {\mathrm{i}} \right)\; x_{14} = 1,\quad \left( {\mathrm{ii}} \right)\; B_1\boldsymbol{x}^T = 0\;\left( {\mathrm{mod}}\;2 \right),\;{\mathrm{namely}}\\
&\quad x_1 + x_2 + x_3 + x_4 = 0,\quad x_1 + x_5 + x_6 + x_7 = 0,\quad x_2 + x_5 + x_8 + x_9 + x_{10} = 0,\\
&\quad x_3 + x_6 + x_8 = 0,\quad x_4 + x_7 + x_{11} = 0,\quad x_9 + x_{12} + x_{13} = 0,\quad x_{12} + x_{14} = 0,\quad x_{10} + x_{14} = 0,\\
&\quad x_{11} + x_{15} + x_{16} + x_{17} + x_{18} = 0,\quad x_{15} + x_{19} + x_{20} + x_{21} = 0,\\
&\quad x_{16} + x_{19} + x_{22} + x_{23} = 0,\quad x_{17} + x_{22} + x_{24} + x_{25} = 0,\\
&\quad x_{18} + x_{20} + x_{24} + x_{26} = 0,\quad x_{13} + x_{21} + x_{23} + x_{25} + x_{26} = 0,\\
&\quad \left( {\mathrm{iii}} \right)\;{\mathrm{rank}}\;(\boldsymbol{x}^T,\;B_2)_{F_2} = 1 + r_2.
\end{aligned}$$
By solving the above 0–1 programming problem, with x14 = 1 corresponding to (7, 8), it yields x10 = 1 corresponding to (3, 8), and with x12 = 1 corresponding to (6, 7), it yields x9 = 1 corresponding to (3, 6), which generates the first cavity (3, 6, 7, 8). Then, replacing x14 = 1 with x11 = 1 yields the second cavity (1, 5, 9, 10, 14, 6, 3), which has 8 equal-length cavities, including the 1-cavity (2, 5, 9, 10, 14, 6, 3) and the 1-cavity (1, 5, 9, 11, 14, 6, 3), etc. Finally, checking that \({\mathrm{rank}}(\boldsymbol{x}^{(1)T}, \ldots, \boldsymbol{x}^{(l)T},\,B_2)_{F_2} = l + r_2\), for l = 1, 2, verifies that the optimization meets all the constraints.
Cliques and cavities of C. elegans
For a data set of C. elegans with 297 neurons and 2148 synapses18, all cliques and some cavities are obtained here by using the above-described techniques and algorithms, which are compared with the Erdös-Rényi (ER) random network with the same numbers of nodes and edges. The results are shown in Fig. 2 and Table 3.
Fig. 2: Cliques and Betti numbers.
The number of cliques and the Betti numbers for the C. elegans neuronal network versus the Erdös-Rényi (ER) random network.
Table 3 Euler characteristic number, Betti numbers, and the Euler-Poincaré formula for the C. elegans network and the Erdös-Rényi (ER) network.
Since the highest-order nonzero Betti number is β3 = 4, the C. elegans network has four linearly independent three-cavities, and these four cavities have cavity-generating 3-cliques {164, 163, 119, 118}, {119, 167, 118, 227}, {195, 185, 119, 118} and {227, 195, 119, 118}. The cavity-generating 3-clique {164, 163, 119, 118} forms a three-cavity with eight nodes {85, 13, 3, 164, 163, 119, 118, 158}, which is the smallest three-cavity5, with its structure shown in Fig. 3a. The cavity-generating 3-clique {119, 167, 118, 227} forms a three-cavity with 11 nodes {163, 3, 162, 119, 154, 167, 118, 227, 85, 13, 164}, as shown in Fig. 3b. The cavity-generating 3-clique {195, 185, 119, 118} forms a three-cavity with eight nodes {171, 13, 3, 195, 185, 119, 118, 173}, as shown in Fig. 3c. The cavity-generating 3-clique {227, 195, 119, 118} forms a three-cavity with eight nodes {173, 13, 3, 227, 195, 119, 118, 185}, as shown in Fig. 3d. All details are included in Supplementary Note 3.
Fig. 3: Four 3-cavities in the C. elegans neuronal network.
a 3-cavity with 8 nodes; b 3-cavity with 11 nodes; c 3-cavity with 8 nodes; d 3-cavity with 8 nodes.
For a given directed network, how can one analyze its higher-order cliques and cavities? In ref. 9, a Hasse-diagram algorithm was designed to find all directed cliques. However, neither the concept of cycle nor that of cavity was precisely defined there. For an undirected network, the length of a cavity, namely the number of cliques that compose it, is longer than the length of the cliques (the order of the cavity plus 1) as a cycle. For example, an undirected triangle of length 3 not only is a 2-clique but also is a 1-cycle, whereas a 1-cavity is at least a quadrangle of length 4. For a directed network, however, this may not be true. For instance, the smallest directed 1-cavity could be composed of two reversely directed edges between two nodes, which together have length 2, whereas a directed 2-clique could be a directed triangle of length 3. This shows the extreme complexity of directed cavities, which will be a research topic for future investigation.
It should be noted that the key technique used in this paper is to examine various combinations of cliques and cavities, which differs from the studies based on node degrees in the current literature, where the focus is on statistical rather than topological properties of the network. After comparing the neuronal network of C. elegans to a random network, it was found that they are very different regarding the numbers of cliques and cavities. From the perspective of brain science, various combinations of higher-order topological components such as cliques and cavities are of extreme importance, without which it is very difficult or even impossible to understand and explain the functional complexity of the brain. In fact, this provides reasonable support for many recent studies on the brain12,13,14.
The intrinsic combination of cliques and cavities also brings some unexpected problems to programming the proposed optimization algorithm. For example, because a minimum spanning tree of a network is not unique, the algorithm may not produce the expected results when searching for cavities. Efforts have been made to determine the information of cavities by eigenvectors corresponding to zero eigenvalues of higher-order Hodge-Laplacian matrices. However, a similar non-uniqueness problem occurred in finding eigenvectors, demonstrating the extreme complexity of the clique and cavity-searching problems.
Method and algorithms
Solving the above optimization problem (Eq. (1)) for the βk k-cavities is difficult, both because of the third constraint in the optimization and because the minimum spanning tree is not unique. As a remedy, the optimization problem is separated into two parts.
The first part is to use the following 0–1 programming to search for all possible cycles that contain the βk k-cavities x(l), l = 1, 2, …, βk:
$$\mathop{\min}\limits_{\boldsymbol{x} \in C_k} f(\boldsymbol{x}) = \boldsymbol{x}^{(l)}\boldsymbol{e} \qquad {\rm{s.t.}}\quad ({\rm{i}})\; x_v^{(l)} = 1; \quad ({\rm{ii}})\; B_{k}\,\boldsymbol{x}^{(l)T} = 0\ ({\rm{mod}}\ 2)$$
The second part is to use the third constraint to find all the βk k-cavities within all possible cycles, i.e., to check if \({{{{{{{\mathrm{rank}}}}}}}}(x^{\left( 1 \right)T}, \cdots ,x^{(l)T},B_{k + 1})_{F_2} = l + r_{k + 1}\), for the lth cavity x(l), l = 1, 2, …, βk.
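The rank test over the binary field can be carried out with a short Gaussian-elimination routine. The sketch below (Python/NumPy, written for illustration only; the function names and array layouts are assumptions, not the authors' code) checks the condition above for a list of candidate cycle vectors:

```python
import numpy as np

def rank_gf2(M):
    """Rank of a 0-1 matrix over the field F_2, by Gaussian elimination mod 2."""
    A = (np.array(M, dtype=np.int64) % 2).copy()
    rank, rows, cols = 0, A.shape[0], A.shape[1]
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, col] == 1), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]      # move the pivot row up
        for r in range(rows):
            if r != rank and A[r, col] == 1:
                A[r] ^= A[rank]                  # eliminate the column mod 2
        rank += 1
    return rank

def is_cavity_set(cycles, B_k1):
    """Check rank([x^(1)T, ..., x^(l)T, B_{k+1}])_{F2} == l + rank(B_{k+1})_{F2}.

    `cycles` is assumed to be a list of 0-1 row vectors (one per cycle, indexed by
    k-cliques) and `B_k1` the boundary matrix B_{k+1} with the same row indexing.
    """
    l = len(cycles)
    stacked = np.hstack([np.array(cycles).T % 2, np.array(B_k1) % 2])
    return rank_gf2(stacked) == l + rank_gf2(B_k1)
```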
Searching for specific cycles
Because there is a constraint Bkx(l)T = 0 (mod 2) in Eq. (3), it is not a traditional 0–1 linear programming problem. To reformulate the problem, a couple of notations are introduced: \(\tilde B_k = [B_k,\, -2\boldsymbol{I}]\) and \(\tilde{\boldsymbol{x}}^{(l)T} = [\boldsymbol{x}^{(l)},\, \boldsymbol{y}]\), where I is the identity matrix and \(\boldsymbol{y} = [y_1, \ldots, y_{m_k}]\). Then, Bkx(l)T = 0 (mod 2) can be equivalently rewritten as \(\tilde B_k\,\tilde{\boldsymbol{x}}^{(l)T} = 0\). As the minimum length of a k-cavity is \(L_{\rm{min}} = 2^{k+1}\), Eq. (3) can be transformed into the following 0–1 programming problem for a linear system of equations:
$$\mathop{\min}\limits_{\boldsymbol{x} \in C_k} f(\boldsymbol{x}) = \boldsymbol{x}^{(l)}\boldsymbol{e} \qquad {\rm{s.t.}}\quad ({\rm{i}})\; x_v^{(l)} = 1,\quad ({\rm{ii}})\; \boldsymbol{x}^{(l)}\boldsymbol{e} = L_{\rm{min}},\quad ({\rm{iii}})\; \tilde B_k\,\tilde{\boldsymbol{x}}^{(l)T} = 0,\quad ({\rm{iv}})\; x_i, y_i = 0\ {\rm{or}}\ 1,\ \ l = 1, 2, \ldots, \beta_k.$$
Equation (4) can be solved by using a Matlab toolbox for 0–1 linear systems of equations, and the algorithm is described as follows.
Searching specific cycles (x* = Find Cycle (Bk, v, Lmin)).
Input: boundary matrix Bk
indices of all cavity-generating cliques \(\{v^1, \ldots, v^{\beta_k}\}\)
length of the smallest cycle \(L_{\rm{min}} = 2^{k+1}\)
Output: specific cycles x*
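The original implementation uses a Matlab 0–1 programming toolbox, which is not reproduced here. Purely as an illustrative sketch (not the authors' code), the same formulation of Eq. (4) could be assembled for a generic mixed-integer solver; the variable names, the matrix shapes and the SciPy interface below are assumptions.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def find_cycle(B_k, v, L_min):
    """Illustrative sketch of the 0-1 program in Eq. (4).

    B_k is assumed to be a 0-1 boundary matrix with one row per (k-1)-clique and
    one column per k-clique; v is the column index of the cavity-generating clique.
    """
    n_rows, n_x = B_k.shape
    n_y = n_rows                        # auxiliary y makes "B_k x = 0 (mod 2)" linear
    c = np.concatenate([np.ones(n_x), np.zeros(n_y)])     # minimise x . e only

    # (iii) B_k x^T - 2 y = 0, i.e. B_k x^T = 0 (mod 2) for binary y
    eq_mod2 = LinearConstraint(np.hstack([B_k, -2.0 * np.eye(n_rows)]), 0, 0)

    # (i) x_v = 1 and (ii) x . e = L_min
    pick_v = np.zeros((1, n_x + n_y)); pick_v[0, v] = 1.0
    length = np.concatenate([np.ones(n_x), np.zeros(n_y)]).reshape(1, -1)
    eq_v = LinearConstraint(pick_v, 1, 1)
    eq_len = LinearConstraint(length, L_min, L_min)

    # (iv) all variables are binary
    res = milp(c, constraints=[eq_mod2, eq_v, eq_len],
               integrality=np.ones(n_x + n_y), bounds=Bounds(0, 1))
    return res.x[:n_x].round().astype(int) if res.success else None
```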
Finding all cavities
The third constraint in the optimization problems (Eqs. (3) and (4)) needs to be checked in order to identify which of the cycles found by Algorithm 1 are k-cavities, and then to determine their cavity-generating cliques. For cavity-generating cliques not included in the list of Algorithm 1, or if several cavity-generating cliques appear in the same cavity, one has to run Algorithm 1 again to search for new cycles, increasing the cycle length in steps of 2^{k−1} cliques.
Summarizing the above steps gives the following cavity-searching algorithm:
Checking all k-cavities (\(\{x_1^*, \ldots, x_{\beta_k}^*\} = {\rm{Find\ Cavity}}(B_k, B_{k+1}, v, L_{\rm{min}})\))
Input: boundary matrices \(B_k\) and \(B_{k+1}\)
indices of all cavity-generating cliques \(\{v^1, \ldots, v^{\beta_k}\}\)
length of cycle \(L_{\rm{min}} + j\,2^{k-1},\; j = 0, 1, \ldots\)
Output: all k-cavities \(\{x_1^*, \ldots, x_{\beta_k}^*\}\)
Data used in this work can be accessed at http://linkprediction.org/index.php/link/resource/data/1.
The code for the numerical simulations presented in this article is available from the corresponding authors upon reasonable request.
Watts, D. J. & Strogatz, S. H. Collective dynamics of 'small-world' networks. Nature 393, 440–442 (1998).
Barabási, A.-L. & Albert, R. Emergence of scaling in random networks. Science 286, 509–512 (1999).
Erdös, P. & Rényi, A. On random graphs. Publicationes Mathematicae 6, 290–291 (1959).
Shi, D. H. et al. Searching for optimal network topology with best possible synchronizability. IEEE Circ. Syst. Magaz. 13, 66–75 (2013).
Shi, D. H., Lü, L. Y. & Chen, G. R. Totally homogeneous networks. Natl. Sci. Rev. 6, 962–969 (2019).
Zomorodian, A. & Carlsson, G. Computing persistent homology. Discret. Comput. Geom. 33, 249–274 (2005).
Gu, X. F., Yau, S. T. Computational Conformal Geometry. Theory. International Press of Boston, Inc., 2008.
Sizemore, A. E. et al. Cliques and cavities in the human connectome. J. Comput. Neurosci. 44, 115–145 (2018).
Reimann, M. W. et al. Cliques of neurons bound into cavities provide a missing link between structure and function. Front. Comput. Neurosci. 11, 00048 (2017).
Hassoun, M. H. Fundamentals of Artificial Neural Networks (MIT Press, 1995).
Lechner, M. et al. Neural circuit policies enabling auditable autonomy. Nat. Mach. Intell. 2, 542–652. (2020).
Battiston, F. et al. Networks beyond pairwise interactions: structure and dynamics. Phys. Rep. 05, 004 (2020).
Millan, A. P., Torres, J. J. & Bianconi, G. Explosive higher-order dynamics on simplicial complexes. Phys. Rev. Lett. 124, 218301 (2020).
Kitsak, M. et al. Identification of influential spreaders in complex networks. Nat. Phys. 6, 888–893 (2010).
Bomze, I. M., Budinich, M., Pardalos, P. M., Pelillo, M. The maximum clique problem. In Handbook of Combinatorial Optimization, pp. 1–74 (Springer, 1999).
Fan, T. L., Lü, L. Y., Shi, D. H., Zhou T. Characterizing cycle structure in complex networks. https://arxiv.org/abs/2001.08541.
Bron, C. & Kerbosch, J. Algorithm 457: finding all cliques of an undirected graph. Commun. ACM 16, 575–577 (1973).
Rossi, R. A. & Ahmed, N. K. The network data repository with interactive graph analysis and visualization. In Twenty-Ninth AAAI Conference, AAAI Press, 4292–4293 (2015).
The authors would like to thank the research support provided by the National Natural Science Foundation of China (Grants no. 61174160, 12005001), the Natural Science Foundation of Fujian Province (Grant no. 2019J01427), the Program for Probability and Statistics: Theory and Application (no. IRTL 1704), and the Hong Kong Research Grants Council through General Research Funds (Grant CityU11206320).
Department of Mathematics, College of Science, Shanghai University, Shanghai, China
Dinghua Shi
Department of Statistics, School of Mathematics and Statistics, Fujian Normal University, Fuzhou, China
Zhifeng Chen, Xiang Sun & Qinghua Chen
Department of Internet Finance, School of Internet, Anhui University, Hefei, China
Chuang Ma
Department of Computing and Decision Sciences, Lingnan University of Hong Kong, Hong Kong, China
Yang Lou
Department of Electrical Engineering, City University of Hong Kong, Hong Kong, China
Guanrong Chen
D.S. and G.C. developed the theory and wrote the text. Z.C., X.S., C.M., Y.L., and Q.C. performed the simulations and computations for cross-check. All authors checked and verified the entire manuscript.
Correspondence to Dinghua Shi, Qinghua Chen or Guanrong Chen.
The authors declare no competing interests.
Peer review information Communications Physics thanks Hanlin Sun and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Shi, D., Chen, Z., Sun, X. et al. Computing cliques and cavities in networks. Commun Phys 4, 249 (2021). https://doi.org/10.1038/s42005-021-00748-4
2017 SSC CGL 10 Aug Shift-3
For the following questions answer them individually
The average runs conceded by a bowler in 5 matches is 45, and 15.75 in the other 4 matches. What is the average runs conceded by the bowler in the 9 matches?
A person bought pens at 25 for a rupee and sold at 15 for a rupee. What is his profit percentage?
$$16\frac{2}{3}$$
$$40$$
An 80-litre mixture of milk and water contains 10% milk. How much milk (in litres) must be added to make the water percentage in the mixture 80%?
A bus starts running with the initial speed of 21 km/hr and its speed increases every hour by 3 km/hr. How many hours will it take to cover a distance of 252 km?
A sum of Rs 400 becomes Rs 448 at simple interest in 2 years. In how many years will a sum of Rs 550 amount to Rs 682 at the same rate?
What is the value of $$\frac{1+x}{1-x^4}\div\frac{x^2}{1+x^2}\times x(1-x)$$ ?
$$\frac{1}{x}$$
$$x^2-1$$
$$x + 1$$
$$x$$
If $$x+\frac{1}{x}=17$$, then what is the value of $$\frac{x^4+\frac{1}{x^2}}{x^2-3x+1}$$ ?
What is the value of x in the equation $$\sqrt{\frac{1+x}{x}}-\sqrt{\frac{x}{1+x}}=\frac{1}{\sqrt{6}}$$ ?
If $$2[x^2+\frac{1}{x^2}]-2[x-\frac{1}{x}]-8=0$$, then what are the two values of $$x-\frac{1}{x}$$ ?
-1 or 2
1 or -2
-1 or -2
In ΔABC, ∠BAC = 90° and AD is drawn perpendicular to BC. If BD = 7 cm and CD = 28 cm, then what is the length (in cm) of AD?
March 2014, 3(1): 167-189. doi: 10.3934/eect.2014.3.167
Boundary approximate controllability of some linear parabolic systems
Guillaume Olive 1,
LATP, UMR 7353, Aix-Marseille université, Technopôle Château-Gombert, 39, rue F. Joliot-Curie, 13453 Marseille cedex 13, France
Received April 2013 Revised December 2013 Published February 2014
This paper focuses on the boundary approximate controllability of two classes of linear parabolic systems, namely a system of $n$ heat equations coupled through constant terms and a $2 \times 2$ cascade system coupled by means of a first order partial differential operator with space-dependent coefficients.
For each system we prove a sufficient condition in any space dimension, and we show that this condition turns out to be also necessary in one dimension with only one control. For the system of coupled heat equations we also study the problem on a rectangle, and we give characterizations depending on the position of the control domain. Finally, we prove the distributed approximate controllability in any space dimension of a cascade system coupled by a constant first-order term.
The method relies on a general characterization due to H.O. Fattorini.
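For background only (this is a standard textbook form of the criterion and is not quoted from the paper): under suitable spectral assumptions on the generator, the Fattorini–Hautus test characterizes approximate controllability of $y' = Ay + Bu$ by an eigenvector condition,

$$\ker\left(\lambda I - A^{*}\right) \cap \ker\left(B^{*}\right) = \{0\} \quad \text{for every eigenvalue } \lambda \text{ of } A^{*},$$

that is, no eigenfunction of the adjoint operator is annihilated by the adjoint control operator $B^{*}$. The precise hypotheses and the exact version used here are those of Fattorini's characterization as cited in the paper.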
Keywords: boundary controllability, parabolic systems, distributed controllability, Hautus test.
Mathematics Subject Classification: 93B05, 93C05, 35K0.
Citation: Guillaume Olive. Boundary approximate controllability of some linear parabolic systems. Evolution Equations & Control Theory, 2014, 3 (1) : 167-189. doi: 10.3934/eect.2014.3.167
F. Alabau-Boussouira and M. Léautaud, Indirect controllability of locally coupled wave-type systems and applications, J. Math. Pures Appl., 99 (2013), 544-576. doi: 10.1016/j.matpur.2012.09.012. Google Scholar
F. Ammar-Khodja, A. Benabdallah, C. Dupaix and M. González-Burgos, A Kalman rank condition for the localized distributed controllability of a class of linear parbolic systems, J. Evol. Equ., 9 (2009), 267-291. doi: 10.1007/s00028-009-0008-8. Google Scholar
F. Ammar-Khodja, A. Benabdallah, C. Dupaix and M. González-Burgos, A generalization of the Kalman rank condition for time-dependent coupled linear parabolic systems, Differ. Equ. Appl., 1 (2009), 427-457. doi: 10.7153/dea-01-24. Google Scholar
F. Ammar-Khodja, A. Benabdallah, M. González-Burgos and L. de Teresa, The Kalman condition for the boundary controllability of coupled parabolic systems. Bounds on biorthogonal families to complex matrix exponentials, J. Math. Pures Appl., 96 (2011), 555-590. doi: 10.1016/j.matpur.2011.06.005. Google Scholar
F. Ammar-Khodja, A. Benabdallah, M. González-Burgos and L. de Teresa, Recent results on the controllability of linear coupled parabolic problems: A survey, Math. Control Relat. Fields, 1 (2011), 267-306. doi: 10.3934/mcrf.2011.1.267. Google Scholar
M. Badra and T. Takahashi, On the Fattorini criterion for approximate controllability and stabilizability of parabolic systems, to appear in ESAIM Control Optim. Calc. Var., (2014). Available from: http://hal.archives-ouvertes.fr/hal-00743899. Google Scholar
A. Benabdallah, F. Boyer, M. González-Burgos and G. Olive, Sharp estimates of the one-dimensional boundary control cost for parabolic systems and application to the $N$-dimensional boundary null-controllability in cylindrical domains, to appear in SIAM J. Control Optim., (2014). Available from: http://hal.archives-ouvertes.fr/hal-00845994. Google Scholar
A. Benabdallah, M. Cristofol, P. Gaitan and L. de Teresa, A new Carleman inequality for parabolic systems with a single observation and applications, C. R. Math. Acad. Sci. Paris, 348 (2010), 25-29. doi: 10.1016/j.crma.2009.11.001. Google Scholar
F. Boyer and G. Olive, Approximate controllability conditions for some linear 1D parabolic systems with space-dependent coefficients, to appear in Math. Control Relat. Fields, (2014). Available from: http://hal.archives-ouvertes.fr/hal-00848709. Google Scholar
N. Dunford and J. T. Schwartz, Linear Operators. Part III : Spectral Operators, Wiley-Interscience, New York, 1971. Google Scholar
K.-J. Engel and R. Nagel, One-Parameter Semigroups for Linear Evolution Equations, Springer, New York, 2000. Google Scholar
H. O. Fattorini, Some remarks on complete controllability, SIAM J. Control, 4 (1966), 686-694. doi: 10.1137/0304048. Google Scholar
H. O. Fattorini, Boundary control of temperature distributions in a parallelepipedon, SIAM J. Control, 13 (1975), 1-13. doi: 10.1137/0313001. Google Scholar
E. Fernández-Cara, M. González-Burgos and L. de Teresa, Boundary controllability of parabolic coupled equations, J. Funct.Anal., 259 (2010), 1720-1758. doi: 10.1016/j.jfa.2010.06.003. Google Scholar
M. González-Burgos and L. de Teresa, Controllability results for cascade systems of $m$ coupled parabolic PDEs by one control force, Port. Math., 67 (2010), 91-113. doi: 10.4171/PM/1859. Google Scholar
S. Guerrero, Null controllability of some systems of two parabolic equations with one control force, SIAM J. Control Optim., 46 (2007), 379-394. doi: 10.1137/060653135. Google Scholar
M. L. J. Hautus, Controllability and observability conditions for linear autonomous systems, Ned. Akad. Wetenschappen, Proc. Ser. A, 31 (1969), 443-448. Google Scholar
L. Hörmander, Linear Partial Differential Operators, Springer Verlag, Berlin-New York, 1976. Google Scholar
O. Kavian and L. de Teresa, Unique continuation principle for systems of parabolic equations, ESAIM Control Optim. Calc. Var., 16 (2010), 247-274. doi: 10.1051/cocv/2008077. Google Scholar
R. C. MacCamy, V. J. Mizel and T. I. Seidman, Approximate boundary controllability for the heat equation, Jour. of Math. Anal. and Appl., 23 (1968), 699-703. doi: 10.1016/0022-247X(68)90148-0. Google Scholar
A. S. Markus, Introduction to the Spectral Theory of Polynomial Operator Pencils Amer. Math. Soc. 71, Providence (R.I.), 1988. Google Scholar
K. Mauffrey, On the null controllability of a $3\times3$ parabolic system with non-constant coefficients by one or two control forces, J. Math. Pures Appl., 99 (2013), 187-210. doi: 10.1016/j.matpur.2012.06.010. Google Scholar
L. Miller, On the null-controllability of the heat equation in unbounded domains, Bull. Sci. Math., 129 (2005), 175-185. doi: 10.1016/j.bulsci.2004.04.003. Google Scholar
G. Olive, Null-controllability for some linear parabolic systems with controls acting on different parts of the domain and its boundary, Math. Control Signals Systems, 23 (2012), 257-280. doi: 10.1007/s00498-011-0071-x. Google Scholar
L. Rosier and L. de Teresa, Exact controllability of a cascade system of conservative equations, C. R. Math. Acad. Sci. Paris, 349 (2011), 291-296. doi: 10.1016/j.crma.2011.01.014. Google Scholar
L. de Teresa, Insensitizing controls for a semilinear heat equation, Comm. Partial Differential Equations, 25 (2000), 39-72. doi: 10.1080/03605300008821507. Google Scholar
Numerical analysis of helically finned tubes
Narmin B. Hushmandi
Department of Mechanical Engineering, Urmia University of Technology, Urmia P. O. Box 57155-419, Iran
https://doi.org/10.18280/psees.020105
Numerical analysis was done in this study to predict the behaviour of turbulent flow inside enhanced tubes. The tubes had helically inserted fins along the domain at multiple starts. Two different enhanced pipes at high and micro fin categories together with a smooth pipe were investigated numerically. Validations were done against existing experimental works published in open literature.
The RANS equations together with Realizable k-epsilon turbulence modelling were used. The tubes were horizontal, with buoyancy effects accounted for along one coordinate perpendicular to the main flow. The open-source CFD tool OpenFOAM was used for the computations. Validations showed fair agreement with the measured data. Improvement of the heat transfer was assessed by comparing the Nusselt numbers and friction factors with smooth-pipe data and with acknowledged correlations.
CFD, heat transfer, finned tubes, geothermal energy
It is well known that turbulent flow can cause higher heat transfer rates compared to laminar flow. Several design techniques exist to enhance the turbulence of internal flow inside pipes used in heat exchanger applications. These techniques include inlet disturbances within pipes, twisted tapes or coil inserts, dimpled or corrugated tubes, and helically finned tubes with a single or multiple number of starts. These designs increase secondary flows, sweep away the laminar flow close to the walls, and increase the swirl and rotation within the flow, and thus increase the heat transfer rates. Whatever method is used to increase the turbulence, the drawback is that the friction factor and the pressure drop are affected considerably. A compromise should be made between the amount of heat transfer gained and the resulting pressure drop.
Heat transfer in pipes and channels of various cross sections are used in many engineering applications such as chemical, power, energy and air supply systems. Both heating and cooling of fluids are common, and the flow can be laminar or turbulent as well as in single-phase or multiphase. Enhanced, or augmented, geometries are often used to increase the heat transfer rate for improved performance, higher energy gain or better design. By using enhanced pipes in a heat exchanger savings can be made in terms of operating and material costs.
The largest thermal resistance in a pipe or channel is close to the wall and the laminar boundary layer, therefore many techniques are used for enhancing the convective heat transfer related to breaking up the laminar boundary layer and promoting turbulence near the walls. Increasing the heat transfer area, with internal fins by example, leads to a higher heat transfer rate. Disruption of the laminar sublayer and boundary layer growth as well as boundary layer separation is positive for the heat transfer. Secondary flows and reattachment is also something that should be considered in heat transfer design. Features such as rotation, swirl and shedding lead to more turbulence which is a well known heat transfer promoter and increases the mixing of the fluid.
Common techniques to achieve these types of heat transfer improvements are straight, transversal or angled ribs, corrugated or dimpled tubes, twisted tapes and coil inserts. All techniques can be seen to increase the wall roughness promoting turbulence. The heat transfer area is often larger and swirling motion increases heat transfer for twisted tape inserts and ribbed fins.
The most important factor to investigate the heat transfer performance of a fluid flow through a pipe or channel is the Nusselt number which describes the convective to conductive heat transfer. All heat transfer increasing techniques will induce a higher flow resistance, or friction factor. The increased friction leads to a higher pressure drop and energy loss. A commonly used expression for investigated enhanced pipes is the efficiency expressed as the Nusselt to friction factor ratio divided by the same ratio for its corresponding smooth pipe.
The heat transfer of a smooth pipe can be expressed by several correlations developed for Nusselt number and friction factor, respectively. Dittus-Boelter or Gnielinski [3] are commonly used for Nusselt number predictions, and Blasius or Filonenko [2] are widely used for friction factor predictions. Ji, et al. [5] reviewed the experimental studies performed for single-phase heat transfer enhancement techniques of laminar and turbulent flows. They divided the techniques into four categories covering internal integral fins, twisted tape and coil inserts, corrugated tubes and dimpled or three-dimensional roughened tubes. They concluded that the enhancement ratio over the Dittus-Boelter correlation is between 2-4 for internally finned tubes, between 1.5-6 for twisted tape and coil inserts and 1.5-4 for corrugated and dimpled tubes. The increase in friction factor ratio is 1-4 over the fanning friction factor for internally finned tubes while between 2-13 for twisted tape and coil inserts and 2-6 and 3-5 for corrugated and dimpled tubes, respectively. They also found that internally finned tubes showed the best thermal hydraulic performance among others. Twisted tape and coil inserts showed good results for laminar flow, but the friction factor increased drastically in turbulent flow indicating restricted applicability for the technique.
Jensen and Vlakancic [4] experimentally investigated heat transfer inside helically finned pipes with varying number of starts, fin width, fin height and helix angle. Based on the friction factors, two different types of pipes could be distinguished. The high-finned pipes, with a smaller number of starts, had the same friction factor curve slope as the smooth pipe. The pipes with smaller fins and a larger number of starts, called micro-finned tubes, showed potential for higher Nusselt numbers than the high-finned pipes, but in the transitional lower Reynolds number region the high-finned pipes had higher Nusselt values, explained by their higher capacity for swirl motion.
In a numerical study by Xiaoyue and Jensen [9], the same variables as above was investigated. They found that both the friction factor and Nusselt number increased with number of starts and helix angle. Increased fin height yielded moderately higher friction factors and Nusselt numbers at lower helix angles but significantly higher at helix angles above 20 degrees. They concluded that the choice of fin width and the number of starts was important as the internal region between fins should not get too small. They also investigated fin tip profiles and found that for higher Reynolds numbers a rounded fin profile had lower friction factors, if the internal region between fins was large enough, while rectangular and triangular fins showed values similar to each other.
Meyer and Olivier [7] investigated the friction factor of enhanced pipes in the laminar to turbulent region experimentally. They found transition to occur earlier than the corresponding smooth pipes. A secondary transition was present between Reynolds numbers of 3000-10,000.
Kim et al. [6] investigated both micro-finned and high-finned tubes numerically. They attributed the high performance of micro-finned tubes to a highly diffusive turbulent flow associated with the fin geometry rather than to the increased heat transfer surface area. They also suggested that there are no differences in the governing heat transfer mechanism between a high-finned tube and a micro-finned tube. Furthermore, they showed the flow inside the internal region between the fins of micro-finned tubes to be predominantly laminar and the bulk turbulent flow not to be in contact with the surface; this tendency was thought to be connected to the surface resistance caused by the micro-fins. A k-epsilon model was used and was validated against the experimental data of Jensen and Vlakancic [4]. Both a high-finned pipe and a micro-finned pipe were simulated, and the agreement of friction factors and Nusselt numbers was within 15% for Reynolds numbers above 18,000. They acknowledged that an isotropic turbulence model was not able to capture complex flow phenomena such as separation and re-laminarization, but the acceptable overall agreement with the experimental data justified the use of such turbulence models.
Many experimental investigations of heat transfer and flow characteristics of internally finned tubes have been made over the years for flow in the laminar or fully turbulent region. Published investigations with numerical tools are limited but offers potential as the actual flow field can be described and analysed whereas the experimental investigations only deliver performance parameters. The current investigation covers internally finned pipes used in geothermal applications.
2. Geometry of the Tubes
Table 1 shows the details of the geometry of the computed smooth and helically finned tubes. One high fin and one micro fin tubes are selected for investigation.
Table 1. Geometry of computed smooth and internally finned tubes
(Table 1 columns: tube number, internal diameter Di (mm), number of fins, fin height e (mm), fin thickness, and fin helix angle γ (°); rows list tube 1 (smooth), tube 2, and tube 3. The numerical entries were not recovered from the source.)
3. Non-dimensional Parameters
Experimental measurement data of Jensen and Vlakancic [4] were used to validate the numerical computations presented in this paper. The Fanning friction factor based on the nominal internal diameter and reduced Nusselt numbers are used for the comparisons. The friction factor is computed according to Eq. 1:
$f=\frac{\Delta P \times D}{4 \times L \times \rho \times V^{2}}$ (1)
where $\Delta P$ is the difference between the area-weighted static pressure averages of two surfaces placed one complete turn apart in the fully developed region, L is the corresponding length of one complete turn, D is the nominal internal diameter of the pipe, $\rho$ is the fluid density (assumed constant), and V is the average velocity in the pipe section.
Nusselt numbers are computed according to Eq. 2 and are normalized with the Prandtl number at the average fluid temperature along the domain.
$N u=\frac{h \times D}{k}$ (2)
where h is the convective heat transfer coefficient and k is the thermal conductivity of the fluid. Whether the flow is laminar, turbulent or in transition depends mainly on the Reynolds number, which is defined for flow inside pipes according to Eq. 3:
$R e=\frac{V_{a v g} D}{v}=\frac{\rho V_{a v g} D}{\mu}$ (3)
It is common practice to assume the flow under Re=2300 inside circular pipes to be laminar and above Re=4000 to be fully turbulent.
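For clarity, the non-dimensional groups in Eqs. (1)–(3) can be evaluated directly from the averaged CFD quantities. The short sketch below (Python, for illustration only; the property values are generic placeholders, not data from this study) shows the calculation:

```python
def friction_factor(dP, D, L, rho, V):
    """Fanning friction factor, Eq. (1): f = dP*D / (4*L*rho*V**2)."""
    return dP * D / (4.0 * L * rho * V**2)

def nusselt(h, D, k):
    """Nusselt number, Eq. (2): Nu = h*D / k."""
    return h * D / k

def reynolds(V, D, nu):
    """Reynolds number, Eq. (3): Re = V*D / nu."""
    return V * D / nu

# Placeholder water-like properties (not values from this study)
rho, nu, k = 998.0, 1.0e-6, 0.6      # kg/m3, m2/s, W/(m K)
D = 0.02                             # m, assumed nominal internal diameter
V = 10000 * nu / D                   # average velocity giving Re = 10000
assert abs(reynolds(V, D, nu) - 10000) < 1e-6
```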
4. Computational Methods
Due to the limitation in computational resources, this study is confined to turbulent flows. The hydrodynamic entry length for turbulent flow can be approximated according to Shah and Bhatti [8] as Eq. (4):
$L_{h,\ \mathrm{turbulent}} = 1.359\, D\, Re^{1/4}$ (4)
Since the Reynolds number studied in this paper is 10000, the longest entry length corresponding to Re=10000 is equal to about 13.6D according to Eq. (4). This length is much shorter than the length of computational grids in this study.
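As a quick arithmetic check of that estimate, using only Eq. (4) with Re = 10000:

$$L_{h,\ \mathrm{turbulent}} = 1.359\, D \times 10000^{1/4} = 1.359\, D \times 10 \approx 13.6\, D.$$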
Linux Ubuntu 12.04 was used as the operating system. The open-source CFD solver OpenFOAM 2.3.0 and the post-processor ParaView 4.1.0 were used. Computations were done with parallel processing on a single PC with 16 GB RAM, 8 CPU cores (16 logical processors) and a 3.1 GHz clock speed. A steady-state solver for buoyant, turbulent flow of incompressible fluids was employed in OpenFOAM, which uses a finite-volume-based method. In the momentum equation, the pressure-velocity coupling was done using the SIMPLE algorithm. The main flow was incompressible water with constant density. The governing equations were the Reynolds-averaged Navier-Stokes equations, and Realizable k-epsilon turbulence modelling was used. Hexahedral cells in a structured arrangement were used to mesh the computational volume.
The longitudinal extent of the computational domain was chosen to be 20 complete turns. Thus, the longitudinal extent for a pipe with a helix angle of 45 degrees equals 20$\pi D$. The total number of volume cells was about 10 million in each computational case.
All of the tubes were taken horizontal with z-axis along the main flow direction. Gravity acted perpendicular to the main flow direction. The entrance Reynolds number was taken 10000 in all of the computational cases. One reason was to keep the flow in the turbulent region and to reach a fully developed flow within the computational grid and the other reason was that the friction factor and heat transfer data existed only above this Reynolds number. The y+ values were kept around 30. In the proceeding sections, each of the computed cases is presented separately and comparisons against the correlations or experimental data are performed.
A uniform inlet velocity corresponding to the specified Reynolds number was applied, and a no-slip condition was imposed on the pipe walls. The entrance flow temperature was 273.16 K, and the pipe walls were held at a constant temperature of 277.16 K.
5. Tube No. 1 (Smooth Channel)
Figure 1. Details of the computational grid shown at a cross section and along the domain for tube number 1
Details of the computational grid at a cross section perpendicular to the longitudinal axis and along the main flow direction are presented in Fig. 1.
Figure 2 shows the contours of velocity component along the main flow direction (Vz) at plane cut through the centre of the pipe. Vz is normalized with the inlet velocity and plotted in longitudinal domain for tube number 1 at Re=10000. The length of the channel is normalized with the pipe total length.
Figure 2. Longitudinal velocity normalized with the inlet velocity for tube number 1 at a section passing the center
It can be seen from the figure above that a fully developed flow is reached after 20 percent of the channel length is passed. Similarly, profiles of temperature along the domain are plotted in Fig. 3 for smooth tube. Development of the temperature profile is seen along the channel.
Figure 3. Longitudinal temperature profile for tube number 1 at a cross section passing the center
Profiles of turbulent kinetic energy along the domain are plotted in Fig. 4 for smooth tube.
Figure 4. Longitudinal profile of turbulent kinetic energy for tube number 1 at a cross section passing the center
Figure 5 shows the Nusselt number on the outer walls of the smooth tube while the pipe diameter is taken as the characteristic length. The non-dimensional heat transfer coefficient is considerably higher at the entrance region.
Figure 5. Contours of Nusselt number along the domain on the outer walls of tube number 1
Two lines perpendicular to each other, one along the x-axis and the other along the y-axis are marked at the inlet and streamlines originating from them are plotted along the domain in Fig. 6.
Figure 6. Streamlines originating from two perpendicular lines at the inlet (x and y-axis) plotted along the domain for tube number 1
The computational results of smooth tube are compared to correlations of Gnielinski [3] and Filonenko [2]. Friction factors are computed according to Eq. 1. ΔP is the area-averaged pressure difference between two cross sections at the fully developed region. Similarly, for the Nusselt number, an area-weighted average at the fully developed region on the outer walls are used.
Table 2. Numerical results of smooth tube compared with correlations by Gnielinski [3] and Filonenko [2]
(Table 2 compares the computed Nusselt number and friction factor with the corresponding correlation values; the numerical entries were not recovered from the source.)
6. Tube No. 2 (N=54, Γ=45°)
The velocity component along the main flow direction (Vz) is normalized with the inlet velocity and plotted along the domain at a cross section passing the centre for tube number 2 at Re=10000 in Fig. 8. The length is normalized with the pipe total length.
Figure 8. Longitudinal velocity normalized with the inlet velocity for tube number 2 at a section passing the centre
Profiles of temperature along the domain are plotted in Fig. 9 for tube number 2.
Profiles of turbulent kinetic energy along the domain are plotted in Fig. 10. The turbulent kinetic energy has its local higher value at half of the pipe length.
Figure 10. Longitudinal profile of turbulent kinetic energy for tube number 2 at a cross section passing the center
Figure 11 shows the non-dimensional heat transfer coefficient (Nusselt number) on the outer walls of the tube. The Nusselt numbers are considerably higher compared to the corresponding smooth pipe.
Figure 11. Contours of Nusselt number along the domain on the outer walls of tube number 2
Two lines perpendicular to each other, one along the x-axis and the other along the y-axis are marked at the inlet and streamlines originating from them are plotted along the domain in Fig. 12.
Figure 12. Streamlines originating from two perpendicular lines at the inlet (x and y-axis) plotted along the domain for tube number 2
Table 3. Numerical results of tube number 2 compared with experimental data of Jensen and Vlakancic [4]
A fair agreement is observed for the Nusselt number values. The difference in friction factor results is expected due to the coarse mesh near the wall regions.
Details of the computational grid at a cross section perpendicular to the longitudinal axis and along the main flow direction are presented in Fig. 13.
Figure 13. Details of the computational grid shown at a cross section and along the domain for tube number 3
The velocity component along the main flow direction (Vz) is normalized with the inlet velocity and are plotted along the domain at a cross section passing the centre for tube number 3 at Re=10000 in Fig. 14. The length is normalized with the pipe total length.
Figure 14. Longitudinal velocity normalized with the inlet velocity for tube number 3 at a section passing the center
Profiles of temperature along the domain are plotted in Fig. 15.
Figure 15. Longitudinal temperature profile for tube number 3 at a cross section passing the center
Profiles of turbulent kinetic energy along the domain are plotted in Fig. 16 for tube number 3. Here the higher kinetic energy is seen up to 25 percent of the total length.
Figure 17 shows the non-dimensional heat transfer coefficient (Nusselt number) on the outer walls of the tube number 3. The observed local Nusselt numbers are considerably lower in this case compared to the low-fin pipe.
Two lines perpendicular to each other, one along the x-axis and the other along the y-axis are marked at the inlet and streamlines originating from them are plotted along the domain in Fig. 18. Longitudinal coordinate is non-dimensionalzed with the total length of the channel.
The computational results of tube number 3 are compared to the experimental data of Jensen and Vlakancic [4]. Friction factors are computed according to Eq. 1, where ΔP is the area-averaged pressure difference between two cross sections in the fully developed region. Similarly, for the Nusselt number, an area-weighted average over the outer walls in the fully developed region is used, and the tube inside diameter is taken as the characteristic length.
Table 4. Numerical results of tube number 3 compared with experimental data of Jensen and Vlakancic [4]
Here, the computed average Nusselt numbers do not show very good agreement with the experimental results. The reason could be the coarse mesh used in the computations, which did not allow the laminar sub-layer profile to be computed correctly. Due to the smaller number of fins in the high-fin pipe, a laminar flow is expected in the internal region between the fins, and correct modelling of the laminar sub-layer is crucial for obtaining better agreement with the experimental results.
A numerical model was developed to simulate the flow and heat transfer fields in helically finned tubes used in geothermal applications. Two finned tubes, corresponding to high-fin and low-fin types, together with a smooth pipe were computed numerically. The chosen Reynolds number was 10000, high enough to be in the fully developed turbulent region and yet low enough to be suitable for geothermal applications. Twenty complete turns were computed in the longitudinal direction. Due to computational limitations, the near-wall y-plus values were kept at around 30. The computations showed fair agreement between the numerically computed values and the experimental data for the low-fin pipe, but the disagreement was large for the high-fin case. Overall, the numerical simulations gave a very good view of the flow behaviour in such applications and insight into improving the design.
Nomenclature
h: convective heat transfer coefficient [W m−2 K−1]
2e/di: non-dimensional fin height
k: thermal conductivity of fluid [W m−1 K−1]
ΔP: pressure difference [Pa]
Re: Reynolds number
V: average velocity [m s−1]
Greek symbols
γ: fin helix angle [°]
μ: dynamic viscosity [Pa s]
ρ: density [kg m−3]
τ: shear stress [Pa]
ν: kinematic viscosity [m2 s−1]
Subscripts
h: hydrodynamic
x, y, z: components in respective coordinate directions
[1] Cengel YA, Cimbala JM. (2006). Fluid Mechanics: Fundamental and Applications. The McGraw-Hill Companies. Inc.
[2] Filonenko GK. (1960). Hydraulischer wiederstand von rohrleitungen. Teploenergetika 1: 1098-1099.
[3] Gnielinski V. (1976). New equations for heat and mass transfer in turbulent pipe and channel flow. International Journal of Chemical Engineering 16: 359-368.
[4] Jensen MK, Vlakancic A. (1999). Technical note: Experimental investigation of turbulent heat transfer and fluid flow in internally finned tubes. International Journal of Heat and Mass Transfer 1343-1351.
[5] Ji W, Jacobi A, He Y, Tao W. (2015). Review: Summary and evaluation on single-phase heat transfer enhancement techniques of liquid laminar and turbulent pipe flow. International Journal of Heat and Mass Transfer 88: 735-754.
[6] Kim J, Jansen K, Jensen M. (2004). Analysis of heat transfer characteristics in internally finned tubes. Numerical Heat Transfer: Part A – Applications 46(1).
[7] Meyer JP, Olivier JA. (2011). Transitional flow inside enhanced tubes for fully developed and developing flow with different types of inlet disturbances: Part II-heat transfer. International Journal of Heat and Mass Transfer 54: 1598-1607.
[8] Shah RK, Bhatti MS. (1987). Laminar Convective Heat Transfer in ducts. Handbook of single-phase convective heat transfer, John Wiley & Sons, New York, pp. 3.1-3.137.
[9] Xiaoyue L, Jensen M. (2001). Geometry effects on turbulent flow and heat transfer in internally finned tubes. Journal of Heat Transfer 123(6).
A comparative study of the microstructure and water permeability between flattened bamboo and bamboo culm
Hong Chen1,
Yutian Zhang1,
Xue Yang1,
Han Ji1,
Tuhua Zhong2 &
Ge Wang3
The objective of this study is to investigate the microstructure, water permeability and the adhesion of waterborne coating on the flattened bamboo. The flattened bamboo was obtained by softening bamboo culm at 180 °C followed by compression. The microstructure and chemical component of flattened bamboo were investigated by scanning electron microscopy, Fourier transforms infrared spectroscopy, and X-ray diffraction. The adhesion and interface structure of waterborne coating onto flattened bamboo surface were also examined. The result indicated that the parenchyma cells in flattened bamboo were compressed, and starch in the parenchyma cell was extracted during the softening and flattening process in which the main chemical component did not change significantly. The water permeability of both flattened bamboo and bamboo culm is dependent on the direction: longitudinal direction > tangential direction > radial direction. However, the water permeability in all three directions in flattened bamboo was higher than those in the untreated bamboo. In addition, alkali dye solution was found to more easily permeate through the flattened bamboo when compared to acid dye solution, and the permeability varied depending on alkali dye or acid dye concentration. The adhesion of water-based polyurethane coating on the flattened bamboo can reach the second level.
Bamboo, the fastest growing plant on earth, has attracted growing attention from both academia and industry because of its excellent mechanical properties, beautiful color and special grain. There are 88 genera and more than 1642 species of bamboo around the world, including 39 genera and 837 species in China [1]. As wood forest is decreasing quickly, the global bamboo forest is increasing by 3% every year [2]. The area of bamboo forest in China is 20% of that around the world (more than 32 million hm2), and the output value of the bamboo industry in China increased from about 7.13 billion US dollars in 2007 to 33.7 billion US dollars in 2017 [3]. However, the tubular shape with hollow and anisotropic structure of the bamboo culm still limits its practical application (Fig. 1). To date, laminated bamboo and bamboo scrimber, made from bamboo culm, are the two mainly used bamboo-based panels intended for furniture and interior decoration applications. However, for laminated bamboo, the low utilization ratio of bamboo (less than 40%) is a primary drawback because the inner and outer layers of bamboo need to be removed [4, 5]. Bamboo scrimber has a very high content of adhesive (15–30%) and high density (1.05–1.25 g/cm3) [6], which is not environmentally friendly and too heavy when used in furniture and interior decoration. Therefore, to overcome the drawbacks of both laminated bamboo and bamboo scrimber, new manufacturing technologies for a new kind of bamboo-based panels have been explored.
Bamboo culm and its structure
In the 1980s, Maori [7] and Zhang et al. [8] invented the method of flattening bamboo culm to the bamboo board. However, the apparatus and technology were not good enough at that time, especially the apparatus, which limited its industrial production. Recently, as both the technology and the apparatus were improved, flattening of bamboo has been studied increasingly in academia and produced in the industry. Some researchers have reported how to flatten the bamboo culm into bamboo board effectively, such as softening in hot oil [9] or in saturated high-pressure steam [10], with different moisture contents [11], different press process [12], etc. Currently, one of the most efficient approaches is softening bamboo culm with saturated high-pressure steam at high temperature in a sealed container; the temperature and the pressure could be 180–190 °C and around 1.2 MPa [13].
As the flattening of bamboo has developed, enormous potential has emerged for furniture and interior decoration applications, in which the water permeability is of great importance and must be taken into account prior to practical application. The water permeability affects the modification, adhesive bonding properties, adhesion of coatings on the flattened bamboo surface, etc. For example, when used in furniture and interior decoration, the flattened bamboo needs to be treated differently to obtain high durability, different colors, and high-quality coatings, all of which are closely related to water permeability. At present, only a few studies on the water permeability of bamboo culm [14,15,16] and on the effect of different treatments on the water permeability of bamboo culm [17, 18] have been reported. The water permeability of bamboo culm is determined by the structure in different directions and can be improved by hydrochloric acid treatment, microwave treatment, and freeze-drying treatment.
Flattened bamboo is a new material which is ready to be produced in large scale. However, there has been no reported research to evaluate the possibility of using flattened bamboo in furniture and interior decoration, especially about the water permeability of flattened bamboo, as well as the adhesion of the waterborne coating on its surface. Therefore, the objective of this study is to investigate the microstructure, chemical component, water permeability, and adhesion of the waterborne coating on the flattened bamboo, which will provide technical data for the application of flattened bamboo in furniture and interior decoration.
4-year-old Moso bamboo (Phyllostachys heterocycla) was obtained from Zhejiang, China. Flattened bamboo, provided by Zhejiang Dechang Bamboo & Wood Co. Ltd, China, was produced as shown in Fig. 2. The bamboo culm (moisture content > 18%) was cut into 1 m. The node and diaphragm were removed. Then the culm was cut into two half-tubular culms and then grooved at about 2–3 mm in the inner surface and flattened after being heated by steam at 180 °C in a sealed container for 1–3 min [13]. The hot softened bamboo was pressed at 0.5–0.8 MPa to maintain its flattened shape and then sanded for a smooth surface.
The process of producing flattened bamboo
The outer layer and inner layer of bamboo culm were first removed. Then the bamboo and flattened bamboo were ground into a powder, passed through a 200 mesh and dried in the oven at 103 °C for 2 h for Fourier transform infrared spectroscopy (FTIR) and X-ray diffraction (XRD) testing.
Microstructure and chemical component test
Flattened bamboo and bamboo were selected randomly to observe the morphology of the cross section and radial section with a field emission scanning electron microscope (FE-SEM, XL30 ESEM FEG, FEI Company, OR, USA) at an accelerating voltage of 7 kV.
The FTIR spectra of flattened bamboo and bamboo were measured in a spectrometer (VERTEX 80V, Bruker, German) within the range of 4000–400 cm−1, with a resolution of 4 cm−1 and 64 scans. KBr pellet consisting of KBr and flattened bamboo and bamboo powder was prepared with a weight ratio of 100:1.
The crystal structure of cellulose in the flattened bamboo and bamboo was characterized using an X-ray diffractometer (Ultima IV, Rigaku, Japan). The XRD patterns of the flattened bamboo and bamboo were obtained with a CuKα radiation source (X-ray wavelength λ = 0.154178 nm), scanning 2θ from 5° to 45°. The current and voltage for X-ray generation were 30 mA and 40 kV, respectively. The crystallinity index of cellulose was calculated from the height ratio between the intensity of the crystalline peak (I002 − IAM) and the total intensity (I002).
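As an illustration of this height-ratio (Segal-type) calculation, a minimal sketch is given below; the 2θ windows used to locate I002 and IAM are typical values for cellulose I and are assumptions, not settings taken from this study.

```python
import numpy as np

def crystallinity_index(two_theta, intensity):
    """Segal-type height-ratio crystallinity index: CrI = (I002 - I_am) / I002 * 100.

    I002 : maximum intensity of the crystalline (200) peak (assumed near 22 deg 2theta)
    I_am : minimum intensity of the amorphous halo (assumed near 18 deg 2theta)
    """
    two_theta = np.asarray(two_theta)
    intensity = np.asarray(intensity)
    I002 = intensity[(two_theta > 21.0) & (two_theta < 23.0)].max()
    I_am = intensity[(two_theta > 17.0) & (two_theta < 19.0)].min()
    return (I002 - I_am) / I002 * 100.0
```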
Water permeability test
Rectangular samples of flattened bamboo and bamboo culm were cut to 70 mm (longitudinal) × 25 mm (tangential) × 5 mm (radial). The water permeability was investigated in acid dye (brilliant crocein) solution and alkali dye (basic red) solution as the two dye types are usually used to obtain different colors. Brilliant crocein was bought according to GB/T 25816-2010, and basic red reached the requirement in HG/T 2551-2007.
The specimens were weighed and then put on the wire mesh in a container with 1% brilliant crocein solution in different directions, as shown in Fig. 3. The specimens were coated with silicon rubber in other directions to prevent penetration. After 1, 2, 4, 7, 12, and 24 h, all the bamboo and flattened bamboo were weighed, respectively. The results of BLD (bamboo in the longitudinal direction), BRD (bamboo in the radial direction), BTD (bamboo in the tangential direction), FBLD (flattened bamboo in the longitudinal direction), FBRD (flattened bamboo in the radial direction), and FBTD (flattened bamboo in the tangential direction) were obtained. Five replicates were tested for each type of sample.
Specimens in the container with solutions in different directions
The specimens were weighed and then put on the wire mesh in radial direction in a container with brilliant crocein and basic red solution with the concentration of 1, 3, and 5%, respectively. After 1, 2, 4, 7, 12, and 24 h, all the bamboo and flattened bamboo were weighed, respectively. Five specimens were tested for each type.
Water absorption weight and water absorption rate were calculated by Eqs. (1) and (2) [15]:
$$ P = \frac{{M_{1} - M_{0} }}{A}, $$
$$ V = \frac{{M_{1} - M_{0} }}{AT}. $$
P—water absorption weight, g/cm2; M0—weight of samples before absorption, g; M1—weight of samples after absorption, g; A—area of absorption, cm2; V—water absorption rate, g/cm2 h; T—time of absorption, h.
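A minimal sketch of how Eqs. (1) and (2) are evaluated is given below; the masses and soaking time in the example call are made-up placeholders, not measurements from this study (for a 70 mm × 25 mm face exposed in the radial direction, the absorption area is 17.5 cm2).

```python
def water_absorption(m0, m1, area_cm2, hours):
    """Eqs. (1) and (2): absorption weight P (g/cm2) and absorption rate V (g/(cm2 h)).

    m0, m1   : sample mass before and after soaking (g)
    area_cm2 : area of the exposed (uncoated) face (cm2)
    hours    : soaking time (h)
    """
    P = (m1 - m0) / area_cm2
    V = P / hours
    return P, V

# Illustrative call with placeholder numbers (not measured data from this study)
P, V = water_absorption(m0=10.00, m1=10.70, area_cm2=7.0 * 2.5, hours=24)
```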
Adhesion of waterborne coating on the flattened bamboo test
The flattened bamboo was cut into 15 cm (longitudinal) × 12 cm (tangential) × 1 cm (radial). The waterborne polyurethane coatings were applied by brush on the flattened bamboo as one layer. It was heated at 50 °C and kept for 2 h. The coated samples were dried in a cool, dry environment for 7 days.
The degree of adhesion of the coating film was classified according to GB/4893.4-85 standard methods by a cross-cut test. Classification 1: the edges of the cuts are completely smooth; none of the squares of the lattice is detached. Classification 2: there is detachment of small flakes of the coating at the intersections of the cuts; slight detachment along the edges of the cuts. Classification 3: the coating has flaked partly or wholly discontinuously or continuously along the edges of the cuts. Classification 4: the coating has flaked partly or wholly on different parts of the squares. A cross-cut area not greater than 50% is affected. Classification 5: some squares have flaked partly or wholly. A cross-cut area greater than 50% is affected.
Microstructure and chemical components of flattened bamboo
The photographs of bamboo culm and flattened bamboo are presented in Fig. 4. There is slight difference in the color and the grain between the inner layer and outer layer of flattened bamboo. It mainly resulted from the chemical component and structure of bamboo.
Photographs of bamboo culm and flattened bamboo (a half-tubular bamboo culm and flattened bamboo, b inner layer of bamboo after flattening, c outer layer of bamboo after flattening)
The microstructures of bamboo and flattened bamboo in cross section and radial section are shown in Fig. 5. The basic units in bamboo are vascular bundles where bamboo fibers exist, and parenchyma consists of parenchyma cells. After softening and flattening, the parenchyma was compressed to a certain extent, and the starch in the parenchyma cells was extracted during the steam treatment (Fig. 5a, b). There was not significant change in the fibers in vascular bundles, although a few small cracks between fibers were expected (Fig. 5c, d), as the interface between fibers was weak [19]. However, a large number of cracks were developed in the parenchyma cell wall, including both in the layers and the interface between layers as shown in Fig. 5d. Moreover, the damage in the interface between the layers of the parenchyma cell wall was more pronounced when compared with the layers themselves. From the radial section (Fig. 5e, f), it also can be observed that the starch was removed, and the parenchyma cells were compressed where lots of cracks appeared the same as observed in the cross section.
Microstructure of bamboo culm and flattened bamboo (a cross section of bamboo culm, b cross section of flattened bamboo, c cross section of fiber and parenchyma cell in bamboo, d cross section of fiber and parenchyma cell in flattened bamboo, e radial section of bamboo, f radial section of flattened bamboo)
The chemical structures of bamboo and flattened bamboo were evaluated with FTIR spectra from 4000 to 400 cm−1, as shown in Fig. 6. The peaks at 1605, 1510, and 1261 cm−1 were due to lignin [20, 21], and the peak at 1737 cm−1 was due to C=O in hemicellulose [22]; these peaks were similar in both types of bamboo. As reported by Zhang et al. [10], the hemicellulose in flattened bamboo steadily decreased with increasing softening temperature and disappeared when treated at 160 and 180 °C for 8 min. In our study, the bamboo was steam heated at 180 °C for only 1–3 min, indicating that when the treatment time is short, hemicellulose does not degrade appreciably even at temperatures as high as 180 °C.
FT-IR spectra of bamboo and flattened bamboo
The XRD patterns of bamboo and flattened bamboo are shown in Fig. 7. There were three characteristic peaks in the XRD patterns of both bamboo and flattened bamboo, around 15.76°, 22°, and 34.74°, attributed to the (110), (200), and (004) reflections of the crystalline structure of cellulose Ι; the (004) reflection is related to the longitudinal structure of cellulose [23]. The XRD pattern of cellulose in flattened bamboo was similar to that in bamboo, showing the same cellulose Ι structure, which indicates that the softening and flattening process did not change the crystal form of cellulose. However, the intensity of the (200) reflection in flattened bamboo was weaker than that in bamboo, indicating a decrease in the fraction of crystalline cellulose [24]. The crystallinity index of cellulose in bamboo and flattened bamboo was 61.8% and 50.4%, respectively. The combination of steam treatment at 180 °C and compression during the flattening process might account for this reduction in crystallinity index, which needs to be further studied.
XRD patterns of bamboo and flattened bamboo
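The text does not state how the crystallinity index was computed from the diffractograms; the sketch below assumes the commonly used Segal peak-height method, with intensity values chosen purely for illustration (they reproduce the reported indices but are not the measured data).

```python
def segal_crystallinity_index(i_200, i_am):
    """Crystallinity index (%) from the (200) peak height (~22 deg 2-theta)
    and the amorphous minimum (~18 deg 2-theta), per the Segal method (assumed)."""
    return (i_200 - i_am) / i_200 * 100.0

# Illustrative intensities only, not the study's measurements
print(segal_crystallinity_index(i_200=1000, i_am=382))  # ~61.8%, bamboo-like
print(segal_crystallinity_index(i_200=1000, i_am=496))  # ~50.4%, flattened-bamboo-like
```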
Permeability of flattened bamboo in different directions
The water absorption weight and rate of flattened bamboo and bamboo are presented in Fig. 8. Both quantities differed among the three directions in bamboo as well as in flattened bamboo. In the longitudinal direction, the water absorption weight and rate were much higher than those in the other two directions for both flattened bamboo and bamboo. The water absorption weight and rate in the tangential direction were higher than those in the radial direction for both types of bamboo. This is in accordance with the water vapor diffusion resistance of bamboo in the longitudinal direction being remarkably lower than that in the tangential and radial directions [16].
Water absorption weight (a) and water absorption rate (b) of bamboo and flattened bamboo in different directions
The water permeability depended on the structure of bamboo and flattened bamboo in the different directions. Bamboo consists of parenchyma cells, with embedded vascular bundles composed of fibers, metaxylem vessels, and sieve tubes with companion cells [25]. As shown in Fig. 9, in the longitudinal direction there are numerous vascular bundles, whose structure, especially the vessels and sieve tubes, favors water permeability. In the tangential and radial directions, however, there is no tissue with straight, interconnected conduits, and water transport relies mainly on the pits in the fibers and parenchyma cell walls and on the pores between parenchyma cells. The fibers of the vascular bundles hindered water transport [16], as there are far fewer pits in the fibers and almost no pores between fibers in comparison with the parenchyma cell wall (Fig. 9d). Therefore, the ranking of water permeability of flattened bamboo in the different directions was the same as that of bamboo: longitudinal direction > tangential direction > radial direction, in accordance with previous research on the water permeability of bamboo [14].
The vascular bundle and parenchyma distribution and the pit distribution (a cross section, b cross section of bamboo, c cross section of flattened bamboo, d radial section of bamboo)
For flattened bamboo, the water permeability in all three directions was higher than that of bamboo. As shown in Figs. 5 and 7, the extraction of starch and the cracks in the parenchyma cell walls of flattened bamboo helped to improve its water permeability [17, 18]. The decreased crystallinity of cellulose may also have contributed to the improved water permeability of flattened bamboo compared with unflattened bamboo.
Permeability of flattened bamboo with different liquids
The water absorption weight and rate of flattened bamboo and bamboo in the two liquids at different concentrations are presented in Fig. 10. In the brilliant crocein solutions, the water absorption weight and rate of bamboo decreased with increasing concentration, while those of flattened bamboo in the 3% and 5% solutions were similar when the absorption time was less than 12 h. At 24 h, the water absorption weight in the 5% brilliant crocein solution was higher than in the 3% solution. This might be because the damaged cell walls in flattened bamboo were more easily affected by the acid solution at higher concentration, as acid pretreatment enhances water permeability [17].
Water absorption weight and rate of bamboo and flattened bamboo in different solutions with different concentrations
In the basic red solutions, the water absorption weight and rate of both flattened bamboo and bamboo decreased as the concentration increased. With increasing concentration, the greater number of dye molecules blocked the pits in the cell walls and the pores between parenchyma cells. When the concentration increased from 3 to 5%, the water absorption weight and rate of flattened bamboo were similar when the absorption time was less than 12 h. A similar phenomenon occurred in bamboo when the absorption time was less than 7 h.
At the same concentration, the water permeability of flattened bamboo was higher in the basic red solution than in the brilliant crocein solution. By contrast, the water permeability of bamboo in the basic red solution was lower than that in the brilliant crocein solution at the lower concentrations (1% and 3%). As the concentration increased to 5%, the water permeability of bamboo in the brilliant crocein solution became higher than that in the basic red solution. Regardless of whether the solution was basic red or brilliant crocein, at the same concentration the water permeability of flattened bamboo was higher than that of unflattened bamboo.
Adhesion of waterborne coating film on the flattened bamboo
Figure 11 shows images of the coated samples and the coating films after the cross-cut test. After the test, a small amount of the coating film had flaked along the edges and at the intersections of the cuts on both the outer layer and the inner layer of the flattened bamboo (Fig. 11a, b), indicating that the adhesion classification of the waterborne coating on both layers was 2. Figure 11c, d show the coating films removed from the samples; slightly more bamboo tissue adhered to the coating film from the outer layer than to that from the inner layer. This might suggest that the interface between the outer layer and the coating differs from that between the inner layer and the coating.
Images of coated samples and films after cross-cut test (a coated outer layer, b coated inner layer, c coating film off from the outer layer, d coating film off from the inner layer)
Furthermore, because the bamboo culm is tubular, its adhesion could not be measured with the cross-cut test; instead, the interface between bamboo and coating was observed and compared with that between flattened bamboo and coating in Fig. 12. For both bamboo and flattened bamboo, there were two kinds of interface: one between fiber and coating, and the other between parenchyma cell and coating. The fiber-coating interface dominated in the outer layer because of the large number of vascular bundles there (Fig. 12a, c), whereas in the inner layer the parenchyma cell-coating interface dominated because of the large amount of parenchyma present (Fig. 12b, d). In some areas of the inner layer there was no fiber-coating interface at all, only parenchyma cell-coating interface (Fig. 12d). From the FE-SEM observation, the fiber-coating interface in flattened bamboo was similar to that in bamboo, whereas the parenchyma cell-coating interface was very different in the two substrates. The parenchyma cell-coating interface comprises both the interface between the outer surface of the parenchyma cell and the coating and that between the inner surface and the coating. In flattened bamboo, the parenchyma cells were compressed, the starch was extracted, the cell lumina shrank, and the cell shape changed; consequently, when coated, the parenchyma cell-coating interface in flattened bamboo differed markedly from that in bamboo, as shown in Fig. 12b, d. The interface and bonding between bamboo and coating are complicated and industrially important, and need to be studied further in the future.
Interface between the coating and bamboo and flattened bamboo surface
Flattened bamboo is produced to improve the utilization ratio of the tubular bamboo culm and to widen its applications. In this paper, flattened bamboo produced with one of the most popular technologies in China was studied. The microstructure, chemical components, water permeability, and the adhesion and interface of a waterborne coating on the surface of flattened bamboo were investigated. The results are as follows: (1) the microstructure of flattened bamboo changed significantly, especially in the parenchyma cells, whose shape, lumen, cell wall, and starch content changed. The chemical components of flattened bamboo were almost unchanged, but the crystallinity index of cellulose decreased slightly. (2) The water permeability of flattened bamboo, like that of bamboo, depended on the direction: longitudinal direction > tangential direction > radial direction. In every direction, the water permeability of flattened bamboo was higher than that of bamboo. The water permeability was also affected by the concentration and the kind of solution: flattened bamboo had higher permeability in the basic solution than in the acid solution, whereas for bamboo the ranking depended on the concentration. (3) The adhesion classification of the waterborne polyurethane coating on both the outer and inner layers of flattened bamboo was 2, even though the interfaces between fiber and coating and between parenchyma cell and coating were different.
All data generated or analyzed during this study are included in this published article.
BLD:
bamboo in the longitudinal direction
BRD:
bamboo in the radial direction
BTD:
bamboo in the tangential direction
FBLD:
flattened bamboo in the longitudinal direction
FBRD:
flattened bamboo in the radial direction
FBTD:
flattened bamboo in the tangential direction
Vorontsova MS, Clark LG, Dransfield J, Govaert RS, Baker WJ (2016) World checklist of bamboos and rattans. International Network of Bamboo and Rattan & the Board of Trustees of the Royal Botanic Gardens, Kew. ISBN: 978-92-95098-99-2
State Forestry Administration (2013) National plan of bamboo industry development in China (2013–2020). (In Chinese)
Sun ZJ, Fei BH (2019) Opportunities and challenges for the development of bamboo industry in China. World Bamboo Rattan 17(1):1–5 (In Chinese)
Sharma B, Gatóo A, Bock M, Ramage M (2015) Engineered bamboo for structural applications. Constr Build Mater 81:66–73
Penellum M, Sharma B, Shah DU, Foster RM, Ramage MH (2018) Relationship of structure and stiffness in laminated bamboo composites. Constr Build Mater 165:241–246
Yu Y, Liu R, Huang Y, Meng F, Yu W (2017) Preparation, physical, mechanical, and interfacial morphological properties of engineered bamboo scrimber. Constr Build Mater 157:1032–1039
Maori M (1987) Process of flattening bamboo pieces utilizing microwave heating. Mokuzai GakkaiShi. 33:630–636
Zhang QS (1988) Studies of bamboo plywood I—the softening and flattening of bamboo. J Nanjing Univ 4:13–20
Parkkeeree T, Matan N, Kyokong B (2015) Mechanisms of bamboo flattening in hot linseed oil. Eur J Wood Prod 73:209–217
Zhang X, Zhou Z, Zhu Y, Dai J, Yu Y, Song P (2019) High-pressure steam: a facile strategy for the scalable fabrication of flattened bamboo biomass. Ind Crop Prod 129:97–104
Matan N, Kyokong B, Preechatiwong W (2011) Softening behavior of black sweet-bamboo (Dendrocalamus asper Backer) at various initial moisture contents. Walailak J Sci Tech 4:225–236
Liu J, Zhang H, Chrusciel L, Na B, Lu X (2013) Study on a bamboo stressed flattening process. Eur J Wood Prod 71:291–296
Fang CH, Jiang ZH, Sun ZJ, Liu HR, Zhang XB, Zhang R, Fei BH (2018) An overview on bamboo culm flattening. Constr Build Mater 171:65–74
Sun ZB, Yang Y, Gu L (2005) Study on liquid permeability of Dendrocalamus giganteus under normal pressure. China For Prod Ind 1(5):17–19 (In Chinese)
Deng BD (2012) Study of the surface penetration of the household bamboo production carbonized materials. Master thesis, Nanjing Forestry University. (In Chinese)
Huang P, Latif E, Chang WS, Ansell MP, Lawrence M (2017) Water vapour diffusion resistance factor of Phyllostachys edulis (Moso bamboo). Constr Build Mater 141:216–221
Huang XD, Hse CY, Shupe TF (2015) Study of moso bamboo's permeability and mechanical properties. Emerg Mater Res 4(1):130–138
Xu J, He S, Li J, Yu H, Zhao S, Chen Y, Ma L (2018) Effect of vacuum freeze-drying on enhancing liquid permeability of moso bamboo. BioResources 13(2):4159–4174
Chen H, Cheng H, Wang G, Yu Z, Shi SQ (2015) Tensile properties of bamboo in different sizes. J Wood Sci 61(6):552–561
Zhang YC, Qin MH, Xu WY, Fu YJ, Wang ZJ, Li ZQ, Willför S, Xu CL, Hou QL (2018) Structural changes of bamboo-derived lignin in an integrated process of autohydrolysis and formic acid inducing rapid delignification. Ind Crop Prod 115:194–201
Zhang YC, Hou QX, Xu WY, Qin MH, Fu YJ, Wang ZJ, Willför S, Xu CL (2017) Revealing the structure of bamboo lignin obtained by formic acid delignification at different pressure levels. Ind Crop Prod 108:864–871
Wang XQ, Ren HQ (2009) Surface deterioration of moso bamboo (Phyllostachys pubescens) induced by exposure to artificial sunlight. J Wood Sci 55:47–52
French AD (2014) Idealized powder diffraction patterns for cellulose polymorphs. Cellulose 21:885–896
Kathirselvam M, Kumaravel A, Arthanarieswaran VP, Saravanakumar SS (2019) Characterization of cellulose fibers in Thespesia populnea barks: influence of alkali treatment. Carbohydr Polym 217:178–189
Liese W, Köhl M (eds) (2015) Bamboo: the plant and its uses. Springer, Berlin
The authors thank Dr. Xiubiao Zhang from International Centre for Bamboo and Rattan for providing the bamboo, and Dr. Yan Wu from Nanjing Forestry University for providing the polyurethane coating. The authors also thank Shuting Tang from Nanjing Forestry University for help in measuring the adhesion of coating on the flattened bamboo.
This work was financed by the National Natural Science Foundation of China (31770598).
College of Furnishings and Industrial Design, Nanjing Forestry University, Nanjing, 210037, China
Hong Chen, Yutian Zhang, Xue Yang & Han Ji
Composite Materials and Engineering Center, Washington State University, Pullman, WA, 99164, USA
Tuhua Zhong
International Centre for Bamboo and Rattan, Beijing, 100102, China
Ge Wang
Hong Chen
Yutian Zhang
Xue Yang
Han Ji
HC performed the SEM examination and was a major contributor to data analysis and writing the manuscript. YZ and XY performed FT-IR and XRD test, and YZ drew part of the images. HJ did the permeability test. TZ partly analyzed the data and wrote the manuscript. GW designed and financed the research. All authors read and approved the final manuscript.
Correspondence to Ge Wang.
Chen, H., Zhang, Y., Yang, X. et al. A comparative study of the microstructure and water permeability between flattened bamboo and bamboo culm. J Wood Sci 65, 64 (2019). https://doi.org/10.1186/s10086-019-1842-0
Flattened bamboo
Water permeability
Adaptive-treed bandits
Bull, Adam D.
arXiv.org Machine Learning Sep-29-2015
We describe a novel algorithm for noisy global optimisation and continuum-armed bandits, with good convergence properties over any continuous reward function having finitely many polynomial maxima. Over such functions, our algorithm achieves square-root regret in bandits, and inverse-square-root error in optimisation, without prior information. Our algorithm works by reducing these problems to tree-armed bandits, and we also provide new results in this setting. We show it is possible to adaptively combine multiple trees so as to minimise the regret, and also give near-matching lower bounds on the regret in terms of the zooming dimension.
artificial intelligence, bandit, big data, (21 more...)
arXiv.org Machine Learning
doi: 10.3150/14-BEJ644
Information Technology > Artificial Intelligence > Machine Learning (1.00)
Information Technology > Data Science > Data Mining > Big Data (0.50)
Adaptive Contract Design for Crowdsourcing Markets: Bandit Algorithms for Repeated Principal-Agent Problems
Ho, Chien-Ju, Slivkins, Aleksandrs, Vaughan, Jennifer Wortman
Journal of Artificial Intelligence Research Jan-1-2016
Crowdsourcing markets have emerged as a popular platform for matching available workers with tasks to complete. The payment for a particular task is typically set by the task's requester, and may be adjusted based on the quality of the completed work, for example, through the use of "bonus" payments. In this paper, we study the requester's problem of dynamically adjusting quality-contingent payments for tasks. We consider a multi-round version of the well-known principal-agent model, whereby in each round a worker makes a strategic choice of the effort level which is not directly observable by the requester. In particular, our formulation significantly generalizes the budget-free online task pricing problems studied in prior work. We treat this problem as a multi-armed bandit problem, with each "arm" representing a potential contract. To cope with the large (and in fact, infinite) number of arms, we propose a new algorithm, AgnosticZooming, which discretizes the contract space into a finite number of regions, effectively treating each region as a single arm. This discretization is adaptively refined, so that more promising regions of the contract space are eventually discretized more finely. We analyze this algorithm, showing that it achieves regret sublinear in the time horizon and substantially improves over non-adaptive discretization (which is the only competing approach in the literature). Our results advance the state of art on several different topics: the theory of crowdsourcing markets, principal-agent problems, multi-armed bandits, and dynamic pricing.
contract, crowdsourcing, social media, (23 more...)
Journal of Artificial Intelligence Research
Country: North America > United States (0.67)
Industry: Banking & Finance (0.45)
Information Technology > Communications > Social Media > Crowdsourcing (1.00)
Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
Introduction to Multi-Armed Bandits
arXiv.org Artificial Intelligence Apr-29-2019
Multi-armed bandits are a simple but very powerful framework for algorithms that make decisions over time under uncertainty. An enormous body of work has accumulated over the years, covered in several books and surveys. This book provides a more introductory, textbook-like treatment of the subject. Each chapter tackles a particular line of work, providing a self-contained, teachable technical introduction and a review of the more advanced results. The chapters are as follows: Stochastic bandits; Lower bounds; Bayesian Bandits and Thompson Sampling; Lipschitz Bandits; Full Feedback and Adversarial Costs; Adversarial Bandits; Linear Costs and Semi-bandits; Contextual Bandits; Bandits and Zero-Sum Games; Bandits with Knapsacks; Incentivized Exploration and Connections to Mechanism Design.
book review, contextual bandit algorithm, survey article, (22 more...)
arXiv.org Artificial Intelligence
Leisure & Entertainment > Games (0.92)
Information Technology > Services (0.67)
Multi-Armed Bandits with Metric Movement Costs
Koren, Tomer, Livni, Roi, Mansour, Yishay
Neural Information Processing Systems Dec-31-2017
We consider the non-stochastic Multi-Armed Bandit problem in a setting where there is a fixed and known metric on the action space that determines a cost for switching between any pair of actions. The loss of the online learner has two components: the first is the usual loss of the selected actions, and the second is an additional loss due to switching between actions. Our main contribution gives a tight characterization of the expected minimax regret in this setting, in terms of a complexity measure $\mathcal{C}$ of the underlying metric which depends on its covering numbers. In finite metric spaces with $k$ actions, we give an efficient algorithm that achieves regret of the form $\widetilde{O}(\max\{\mathcal{C}^{1/3}T^{2/3},\sqrt{kT}\})$, and show that this is the best possible. Our regret bound generalizes previously known regret bounds for some special cases: (i) the unit-switching cost regret $\widetilde{\Theta}(\max\{k^{1/3}T^{2/3},\sqrt{kT}\})$ where $\mathcal{C}=\Theta(k)$, and (ii) the interval metric with regret $\widetilde{\Theta}(\max\{T^{2/3},\sqrt{kT}\})$ where $\mathcal{C}=\Theta(1)$. For infinite metric spaces with Lipschitz loss functions, we derive a tight regret bound of $\widetilde{\Theta}(T^{\frac{d+1}{d+2}})$ where $d \ge 1$ is the Minkowski dimension of the space, which is known to be tight even when there are no switching costs.
artificial intelligence, big data, metric space, (18 more...)
Neural Information Processing Systems
Information Technology > Artificial Intelligence > Machine Learning > Supervised Learning > Representation Of Examples (0.60)
Lipschitz Bandit Optimization with Improved Efficiency
Zhu, Xu, Dunson, David B.
We consider the Lipschitz bandit optimization problem with an emphasis on practical efficiency. Although there is rich literature on regret analysis of this type of problem, e.g., [Kleinberg et al. 2008, Bubeck et al. 2011, Slivkins 2014], their proposed algorithms suffer from serious practical problems including extreme time complexity and dependence on oracle implementations. With this motivation, we propose a novel algorithm with an Upper Confidence Bound (UCB) exploration, namely Tree UCB-Hoeffding, using adaptive partitions. Our partitioning scheme is easy to implement and does not require any oracle settings. With a tree-based search strategy, the total computational cost can be improved to $\mathcal{O}(T\log T)$ for the first $T$ iterations. In addition, our algorithm achieves the regret lower bound up to a logarithmic factor.
algorithm, artificial intelligence, health & medicine, (17 more...)
Industry: Health & Medicine > Pharmaceuticals & Biotechnology (0.56)
Information Technology > Artificial Intelligence > Representation & Reasoning > Search (0.88)
A temporal visualization of chronic obstructive pulmonary disease progression using deep learning and unstructured clinical notes
Selected articles from the IEEE BIBM International Conference on Bioinformatics & Biomedicine (BIBM) 2018: medical informatics and decision making (part 2)
Chunlei Tang1,2,
Joseph M. Plasek1,
Haohan Zhang3,4,
Min-Jeoung Kang1,
Haokai Sheng5,
Yun Xiong3,
David W. Bates1,2 &
Li Zhou1
Chronic obstructive pulmonary disease (COPD) is a progressive lung disease that is classified into stages based on disease severity. We aimed to characterize the time to progression prior to death in patients with COPD and to generate a temporal visualization that describes signs and symptoms during different stages of COPD progression.
We present a two-step approach for visualizing COPD progression at the level of unstructured clinical notes. We included 15,500 COPD patients who both received care within Partners Healthcare's network and died between 2011 and 2017. We first propose a four-layer deep learning model that utilizes a specially configured recurrent neural network to capture irregular time lapse segments. Using those irregular time lapse segments, we created a temporal visualization (the COPD atlas) to demonstrate COPD progression, which consisted of representative sentences at each time window prior to death based on a fraction of theme words produced by a latent Dirichlet allocation model. We evaluated our approach on an annotated corpus of COPD patients' unstructured pulmonary, radiology, and cardiology notes.
Experiments compared to the baselines showed that our proposed approach improved interpretability as well as the accuracy of estimating COPD progression.
Our experiments demonstrated that the proposed deep-learning approach to handling temporal variation in COPD progression is feasible and can be used to generate a graphical representation of disease progression using information extracted from clinical notes.
Chronic obstructive pulmonary disease (COPD) is a progressive life threatening lung disease, affecting an estimated 251 million patients globally [1,2,3]. 5% of all deaths globally are caused by COPD, making it the third leading cause of death [4]. Quality of life deteriorates as COPD progresses from mild symptoms such as breathlessness, chronic cough, and fatigue to serious illness. Death from COPD results most frequently from respiratory failure, heart failure, pulmonary infection, or pulmonary embolism [5]. COPD is not curable [3]. Management of COPD is focused on relieving chronic symptoms, handling exacerbations appropriately, lowering the risk of progression and death, and improving quality of life [3].
The ongoing process of monitoring and assessing a patient's symptoms and comorbid conditions is essential to effectively managing COPD via appropriate interventions (such as a change in medications). Structured data from clinical research studies is often utilized to study disease progression. For COPD, valuable structured data would include forced expiratory volume in one second (FEV1), forced vital capacity (FVC), the FEV1/FVC ratio, and slow vital capacity (SVC). However, this data may convey an incomplete picture of the patient as these elements may miss critical data stored only in unstructured clinical notes, such as radiology data (e.g., chest X-ray, cardiac radiography) collected for diagnostic and surveillance purposes. Important data for classifying patients to a COPD stage and predicting disease progression may be embedded within these radiology notes and other clinical documents, such as an interpretation of test results and associated clinical findings. Extraction of this knowledge from the electronic health record (EHR) system requires the utilization of data mining and other computational methods [6,7,8].
There exists a gap in the availability of methods for providing substantial interpretation on the mechanism, progression, and key indicators/measurements for COPD. There are numerous challenges inherent in visualizing COPD progression using large amounts of unstructured clinical documents and classifying these documents into different COPD stages due to:
Irregularly sampled temporal data: Clinical notes are only generated when a patient has a clinical encounter with a clinician at an affiliated medical facility. Thus, the density of relevant clinical documentation in the EHR varies significantly over the span of care for this chronic condition. Although disease progression is a continuous-time process, data for each individual patient is often irregularly sampled due to availability. High density periods may signify the presence of a COPD stage transition as these time periods typically correspond to serious illness. For example, frequent visits or long hospitalizations might indicate a progression whereas less frequent visits may indicate a relatively stable patient state.
Individual variability in disease progression: COPD develops slowly as it often takes ten plus years to evolve from the mild stage to the very severe stage [5]. The rate of disease progression is variable for each individual patient as the primary risk factor is tobacco smoke, thus quitting smoking may delay progression to more severe stages [3]. Conversely, respiratory infections and other exacerbations may move the patient to a more severe stage. Patterns and speed of progression vary across the population.
Incompleteness of data: As COPD is a long term chronic condition, patients may seek COPD care outside of our network.
Modeling a time lapse for each disease stage is the first and foremost step. Utilizing long, constant, disjoint time windows (e.g., 1 year) may cause issues, as a single window may encompass multiple COPD stages. Short, constant, disjoint time windows (e.g., 30 days) have previously been utilized by temporal segmentation methods [6] to associate a specific clinical note with its COPD stage. However, constant disjoint time windows cannot adequately represent the dynamics arising from the temporal autocorrelations that are present.
Capturing the structure of irregular time series data is possible utilizing a recurrent neural network (RNN) [9] or hidden Markov models. RNNs are neural networks with multiple hidden layers where the connections between hidden units form a directed cycle, enabling history to be preserved in internal memory via these hidden states. RNNs are highly useful in applications where contextual information needs to be stored and updated [10]. Unlike hidden Markov models, which are bound by the Markov property whereby future states depend only on the present state and not on the sequence of preceding events, RNNs are not so bound and can thus keep track of long-distance dependencies. The long short-term memory (LSTM) variant of an RNN is particularly useful, as it uses a gated structure to handle long-term event dependencies and thereby mitigates the vanishing and exploding gradient problem. As standard LSTMs cannot handle irregular time intervals [7], prior studies [7, 11] have modified the architecture. Pham et al. [12] handled the irregularly sampled time window issue by adapting the forget gate of the LSTM. Similarly, Baytas et al. [7] modified the memory cell of the LSTM to account for the elapsed time. The approach of [7, 12] is to adjust the existing data to conform to a regular time interval; thus, a common limitation of both approaches is that they require a continuous time hypothesis to be formulated [7, 12].
The specific aims of this study were to assess the feasibility (1) in utilizing deep learning to model irregular time segments without the need to formulate a continuous time hypothesis, and (2) of developing a graphical representation (called a COPD atlas) that can visualize and describe COPD conditions during different stages of disease progression in manner interpretable by clinicians and that validly conveys the underlying data.
We present a two-step approach for visualizing COPD progression at the level of unstructured clinical notes. First, we developed a four-layer deep learning model extending the LSTM architecture to automatically adjust time interval settings and to represent irregularly sampled time series data. Second, we created a temporal visualization (the COPD atlas) based on those irregular time segments to demonstrate COPD progression. We evaluated the COPD atlas' performance using human judgement.
A four-layer model to capture irregular time lapse segments
The components of the model include (Fig. 1): 1) a pre-processing and word embedding layer to prepare the data, 2) an LSTM layer to predict the death date, and 3) a flatten and dense layer combination to capture the irregular time lapse segments. An interpretation of the notation used in this manuscript is available in Table 1. Our model was implemented in Keras (version 2.2.0) on top of Python (version 3.7.0).
An illustration of the proposed model that includes an embedding layer, long short term memory (LSTM) layer, flatten layer, and dense layer. See Table 1 and Eqs. (1) to (6)
Table 1 Meaning of notation
Pre-processing and word embeddings
One-hot encoding enables categorical data to have a more expressive representation. As a pre-processing step, we created one-hot encodings of a given regular time interval B for each sample (i.e., each input). The second step in the pre-processing pipeline utilized Keras padding to ensure that all input samples are the same length and to remove excess data unrelated to COPD. The third step utilized a Keras embedding layer as a hidden layer, so that the words extracted from the textual data were represented by dense vectors, where each vector is the projection of a word into a continuous vector space. A prerequisite of this embedding layer is that the input data are integer encoded, so that each word is represented by a unique integer. We initialized the embedding layer with random weights. Based on a preliminary analysis of the length and focus of the COPD notes, we defined an embedding layer with a vocabulary V of 10,000, a vector space v of 64 dimensions in which words are embedded, and input documents T of 1000 words each. The output of the pre-processing pipeline is an embedding with dimensionality (B, T).
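A minimal Keras sketch of this pre-processing and embedding step is shown below, using the stated values V = 10,000, v = 64, and T = 1000; the toy notes, tokenizer settings, and batch size are assumptions for illustration, since the real corpus is confidential.

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Embedding

V, v_dim, T = 10_000, 64, 1_000   # vocabulary size, embedding dimension, words per document

# Toy stand-ins for clinical notes (the real corpus is not public)
notes = ["shortness of breath and chronic cough",
         "sinus tachycardia nonspecific st t wave changes"]

tokenizer = Tokenizer(num_words=V)       # integer-encode each word
tokenizer.fit_on_texts(notes)
sequences = tokenizer.texts_to_sequences(notes)
x = pad_sequences(sequences, maxlen=T)   # pad/truncate every note to T words -> shape (B, T)

embedding = Embedding(input_dim=V, output_dim=v_dim)  # randomly initialised weights
embedded = embedding(x)                  # shape (B, T, v_dim)
print(embedded.shape)
```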
Long short-term memory unit
LSTMs are well suited to the task of making predictions given time lags of unknown size and duration between events. The standard LSTM comprises input gates, forget gates, output gates, and a memory cell. This standard architecture implicitly assumes that elements are uniformly distributed over the elapsed time of a sequence. Detailed mathematical expressions of the LSTM used are given below, in which (1) to (6) are the input gate, forget gate, output gate, input modulation gate, current memory, and current hidden state, respectively (Fig. 1). The outputs of the successive layers have dimensionalities of (B, T, v), (B, T, L), (B, T × L), and (B, P), and are intermediate results of our model. For the dense layer, we can estimate a patient's mortality by specifying P = 1 as the output dimension. Each LSTM matrix is the output from one batch of the period.
$$ {i}_t:= \mathrm{sigmoid}\left({W}_{h_i}\times {h}_{t-1}+{W}_{x_i}\times {x}_t+{b}_i\right) $$
$$ {f}_t:= \mathrm{sigmoid}\left({W}_{h_f}\times {h}_{t-1}+{W}_{x_f}\times {x}_t+{b}_f\right) $$
$$ {o}_t:= \mathrm{sigmoid}\left({W}_{h_o}\times {h}_{t-1}+{W}_{x_o}\times {x}_t+{b}_o\right) $$
$$ {g}_t:= \tanh \left({W}_{h_g}\times {h}_{t-1}+{W}_{x_g}\times {x}_t+{b}_g\right) $$
$$ {c}_t:= \left({f}_t\cdot {c}_{t-1}\right)+\left({i}_t\cdot {g}_t\right) $$
$$ {h}_t:= {o}_t\cdot \tanh {c}_t $$
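For concreteness, the following NumPy sketch implements one step of Eqs. (1) to (6); the weight layout (a single matrix per gate acting on the concatenation [h_{t-1}; x_t], equivalent to the separate W_h and W_x above) and the toy sizes are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM update following Eqs. (1)-(6); W[k] is (L, L+v), b[k] is (L,)."""
    hx = np.concatenate([h_prev, x_t])        # [h_{t-1}; x_t]
    i_t = sigmoid(W["i"] @ hx + b["i"])       # input gate,       Eq. (1)
    f_t = sigmoid(W["f"] @ hx + b["f"])       # forget gate,      Eq. (2)
    o_t = sigmoid(W["o"] @ hx + b["o"])       # output gate,      Eq. (3)
    g_t = np.tanh(W["g"] @ hx + b["g"])       # modulation gate,  Eq. (4)
    c_t = f_t * c_prev + i_t * g_t            # current memory,   Eq. (5)
    h_t = o_t * np.tanh(c_t)                  # hidden state,     Eq. (6)
    return h_t, c_t

L, v = 4, 3                                   # toy hidden and input sizes (assumptions)
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((L, L + v)) for k in "ifog"}
b = {k: np.zeros(L) for k in "ifog"}
h_t, c_t = lstm_step(rng.standard_normal(v), np.zeros(L), np.zeros(L), W, b)
print(h_t)
```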
Capturing of time lapse segments
To capture irregularly sampled time windows, we used a flatten layer to facilitate the unfolding process, followed by a dense layer to combine the time segments into a fully connected network. We then applied a sigmoid activation function to each LSTM matrix to output a one-dimensional sequence of 0s and 1s representing the irregular time lapse segments. The network was then trained iteratively by gradient descent on the loss function.
Pseudocode for this procedure is presented in the original article (the figure is not reproduced here).
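As a stand-in, the sketch below assembles the described four layers (embedding, LSTM, flatten, dense with sigmoid output) in Keras; the number of LSTM units, the optimizer, and the loss are assumptions not specified above.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Flatten, Dense

V, v_dim, T, L_units = 10_000, 64, 1_000, 32   # L_units is an assumed hidden size

model = Sequential([
    Embedding(input_dim=V, output_dim=v_dim),  # (B, T)        -> (B, T, v_dim)
    LSTM(L_units, return_sequences=True),      # (B, T, v_dim) -> (B, T, L)
    Flatten(),                                 # (B, T, L)     -> (B, T*L)
    Dense(1, activation="sigmoid"),            # (B, T*L)      -> (B, 1), per-note output (P = 1)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.build(input_shape=(None, T))
model.summary()
```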
Two baselines for prediction accuracy
We compared the performance of the LSTM-based model on standard metrics against two baseline classifiers: linear regression (LR) and support vector machines (SVMs). Partitioning the time dimension is a linear segmentation problem. We considered settings of 30 days, 90 days, and 360 days for the initial time-segment size hyperparameter in our proposed model.
We evaluated our model on a corpus of real-world COPD patients' clinical notes using a 70:30 split between the training set and a held-out evaluation set, and report standard performance metrics: positive predictive value and prediction accuracy. We estimated the risk of death using our LSTM-based model on the held-out evaluation dataset, using a given clinical note to predict the risk of death within a specified period (e.g., 30 days). We calculated the positive predictive value of the baselines as the standard for judging whether the irregularly sampled time window obtained from the model is correct or not. Prediction accuracy for the LSTM-based model was calculated by comparing the SoftMax output (which returns a date range corresponding to the predicted patient death date based on one sample) with the patient's actual death date. Prediction accuracy for LR and SVM was calculated as follows for each clinical note: if the absolute difference between the predicted death date and the actual death date fell within a given time window, the value was set to 1; otherwise it was 0.
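A small sketch of the per-note accuracy rule described above is given below; the function name and the toy dates are illustrative, not taken from the study.

```python
from datetime import date

def note_correct(predicted_death, actual_death, window_days=30):
    """1 if the predicted death date falls within the window of the actual date, else 0."""
    return int(abs((predicted_death - actual_death).days) <= window_days)

# Toy (predicted, actual) pairs -- illustrative only
pairs = [(date(2016, 5, 1), date(2016, 5, 20)),
         (date(2016, 8, 1), date(2016, 11, 3))]
scores = [note_correct(p, a, window_days=30) for p, a in pairs]
print(sum(scores) / len(scores))   # fraction of notes judged correct
```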
Baseline for COPD atlas
Our regional classifier utilizes a spiral timeline to visualize data by presenting topic words identified via latent Dirichlet allocation (LDA) under different themes in a spiral map to show the chronological development of focused themes [13]. To enhance interpretability of our themes, we utilized a representative sentence instead of theme words. More specifically, a representative sentence can be generated by comparing whether the sentence has 3–4 theme words (e.g., 30% of an average sentence length if the entire sentence has 10–14 words) that belong to a specific topic identified by LDA. A spiral timeline is an ideal representation for disease progression as it 1) compactly displays the longest possible length of time in a limited space and 2) avoids having a situation where a correlation between two parallel events is missed if all comparable parameters are similar. Combining timelines with a geographical map enables the depiction of temporal patterns of events with respect to spatial attributes [14]. We utilize the regional classifier as a baseline because it only considers windows of equal-time (e.g., year) rather than irregular time windows, thus enabling us to determine the impact of irregularly sampled time windows for this task. The goal is to compare the top k representative sentences captured by the regional classifier to our LSTM-based model to determine this impact on the pulmonary notes' corpus.
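A sketch of the representative-sentence rule described above follows: a sentence is kept for a topic when it contains roughly 3 to 4 of that topic's theme words (about 30% of an average 10-14 word sentence). The theme words and sentences are illustrative, not output from the study's LDA model.

```python
def representative_sentences(sentences, theme_words, frac=0.3, min_hits=3):
    """Return sentences containing enough of one LDA topic's theme words."""
    theme_words = set(w.lower() for w in theme_words)
    picked = []
    for sentence in sentences:
        tokens = sentence.lower().split()
        hits = sum(1 for t in tokens if t in theme_words)
        if hits >= max(min_hits, int(frac * len(tokens))):
            picked.append(sentence)
    return picked

# Illustrative inputs only
theme = ["tachycardia", "sinus", "nonspecific", "st", "wave"]
sentences = ["Sinus tachycardia 127 bpm nonspecific ST wave changes",
             "Patient reports improved exercise tolerance today"]
print(representative_sentences(sentences, theme))
```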
We manually constructed a condensed COPD atlas with the top k (=10) representative sentences and invited a panel of subject matter experts consisting of 3 physicians to assist with the evaluation. Our evaluation consisted of two steps: 1) we selected the most recent n (=7) enlarged time segments related to the periods prior to death; 2) we generated a list of the top k (=10) representative sentences for each time segment.
LSTM prediction accuracy at multiple epochs on merged reports
Our modified LSTM model outperformed the SVM and LR; for example, it achieved a prediction accuracy of 78.85% on our corpus when setting 30 days as the initial size of the temporal segment, compared to the baselines of 8.33 and 0.35% corresponding to SVM and LR, respectively (Table 2).
Table 2 LSTM prediction accuracy compared to the baselines
Figure 2 indicates that the initial size of temporal segment is inversely proportional to the number of training epochs. With the window hyperparameter set to 360 days, our model converged in 23 epochs.
LSTM Prediction accuracy along a sufficient number of epochs
A visualization of the most recent seven time-lapse segments prior to death date on the spiral timeline
Based on the 50 epochs, we obtained a sequence of time lapse segments from the corpus of pulmonary notes using 90 days as the initial size for each time segment. As shown in Fig. 3, we illustrated the most recent seven time-lapse segments prior to death date.
Visualization of the regional classifier's standard spiral timeline (i.e., green line with an initial 30-day time window) compared to the first seven irregular time lapse segments (i.e., red line) from our proposed model
The COPD atlas generated from pulmonary notes
Based on the first seven time segments prior to death captured by our deep learning method, we constructed a condensed COPD atlas using a subset of the identified representative sentences (Fig. 4). Our annotators compared the insights generated from the COPD atlas against the GOLD criteria and found, first, that this fluctuating pattern can be utilized by physicians to detect the point at which patients begin to deteriorate and where action may be taken to slow progression. Second, physicians should focus on controlling complications (e.g., heart failure: representative sentence #6, "Sinus tachycardia 127 bpm, Nonspecific ST/T-wave changes", was found in the [0–65] day window before death).
COPD atlas generated from pulmonary notes in the most recent seven time segments prior to death
The main findings of this study were the establishment of feasibility for our LSTM-based model to predict COPD progression without needing to formulate a continuous time hypothesis, and for generating a COPD atlas. The time windows produced by our LSTM-based model were more interpretable, accurate, and reliable in estimation of COPD mortality compared to baseline methods. Further, our model was found to be robust to the size of the initial time window.
The ability to effectively and efficiently convey detailed information related to disease progression for a particular patient represents an unmet need for chronic diseases (such as COPD, Alzheimer's, and diabetes), as it could help inform therapeutic and disease management decisions. This deep learning-based method not only helps us elicit important information regarding progression stage or timing but is also a potentially useful clinical enhancement for generating the COPD atlas. The updated 2018 GOLD guideline uses a combined COPD assessment approach to group patients according to symptoms and their prior history of exacerbations [2]. A COPD atlas enhanced with additional potentially relevant data (such as symptoms, hospitalization history, or additional clinical note types) could then be used for predictive modeling of COPD progression, which in turn could inform COPD guideline modifications, as well as future telemedicine workflows, patient diaries, and monitoring. Other potential clinical applications of the COPD atlas (and potentially a generalized clinical atlas) include: the simultaneous prediction of survival probabilities, signs of developing related diseases, and symptom-associated evolutionary trajectories at different stages of disease progression. The atlas may also address the proxy problem: to predict the probability of death for a given patient within a permissible tolerance range, and to help make recommendations for palliative care referral.
Our approach may be applicable in the palliative and hospice care settings to assist clinician decision making regarding the application of palliative and hospice care to terminal COPD patients. The severe stages of COPD manifest as a lack of physical, social, and emotional functioning, which directly degrade quality of life. In the moderate to severe stages, terminal COPD patients suffer from extreme dyspnea and shortness of breath. 90% of COPD patients suffer from anxiety or depression [14], indicating that COPD patients require emotional support and treatments to relieve the symptoms from COPD related pain. Palliative care and hospice care do improve end-stage patient's quality of life. However, there often exists a mismatch between patients' desired and received care at the end of life. In the United States, up to 60% of deaths happen in acute care facilities where patients receive aggressive end of life care due to physicians' tendencies to over-estimate prognoses and/or their ability to treat the patient [15]. Our research may help reduce physician over-estimates of prognosis and may be instrumental as a decision aid for terminal COPD patients in palliative or hospice care settings.
Our study provides new insights into the visualization of disease progression by investigating methods applicable to general clinical notes corpora, rather than to patients carefully chosen for clinical trials. This approach makes it much easier to abstract knowledge from clinical practice for use in clinical research. Compared with other studies, our approach combines clinical experience with machine learning: selecting the pre-set time windows used to partition disease progression draws on physician experience, while a machine learning approach is used to adjust (enlarge) these pre-set time windows by merging clinical notes according to the similarity of their content. Selecting representative sentences by the frequency of theme words in the native output of latent Dirichlet allocation (an alternative to embedding or word sense disambiguation techniques) is ingenious but straightforward. Most deep learning embedding approaches require expensive operations (like running a convolutional neural network) to generate (often uninterpretable) representations.
As pulmonary, cardiology, and radiology notes for a patient from the same date may have different correlations to different stages of COPD progression, merging them together using a heuristic merger that does not consider these relationships may not be ideal. This limitation to our study could be mitigated by applying learning methods that compute a score to balance the differences (e.g., priority, dataset size) amongst the three domains. Another limitation is that further research on the COPD atlas is needed to more fully describe each sub-stage clinical characteristics that capture the entire patient experience rather than just what is in the pulmonary notes. For example, although we used clinical reports from multiple domains, we didn't consider the potentially complex relationships among corpora nor any structured clinical data (e.g., symptoms documented in the problem list of the EHR).
We developed a novel two-step approach to visualize COPD progression at the level of clinical notes utilizing a four-layer LSTM-based model to capture irregularly sampled time windows. The main findings of this study were the establishment of feasibility for our LSTM-based model to predict COPD progression without needing to formulate a continuous time hypothesis, and for generating a COPD atlas. We addressed a gap in the literature related to the need to formulate a continuous time hypothesis for modeling irregularly sampled time windows. The COPD atlas based on our results produced insightful, interpretable, and reliable results.
The data used in this study is a real-world chronic obstructive pulmonary disease corpus consisting of three types of free-text clinical notes (i.e., pulmonary notes, radiology reports, and cardiology reports), which were extracted from the Research Patient Data Registry at Partners Healthcare, an integrated healthcare delivery network located in the greater Boston area of Massachusetts. We retrieved patients' death dates from Massachusetts Death Certificate files. A cohort of 15,500 COPD patients who both received care at any Partners Healthcare facility and died between 2011 and 2017 was extracted. This study was approved by the Partners Institutional Review Board (IRB).
Pulmonary notes: We extracted physician's interpretation of patients' lung function from pulmonary notes. Each pulmonary note contains indicators for measuring the air movement in and out of the lungs during respiratory maneuvers (e.g., FVC, FEV1, the FEV1/FVC ratio), as well as a PHYSICIAN INTERPRETATION section. A total of 78,489 pulmonary notes corresponding to 2,431 unique patients were extracted. The average time span of a patient for the pulmonary data source was 724.4 days, with a max span of 3,003 days.
Radiology reports: We extracted chest X-ray radiology reports and focused on two main sections of each report: FINDINGS and IMPRESSION. In our cohort, we had 1,893,498 radiology reports corresponding to 13,414 unique patients. The average time span of a patient using the radiology data source was 843.8 days, with a max span of 2,469 days.
Cardiology reports: We utilized abnormal electrocardiogram reports, and their corresponding patient ID, date of test, and last test date. In our cohort, we had 1,029,363 cardiology reports for 13,918 patients. The average time span of a patient using the cardiology data source was 740.8 days, with a max span of 2,459 days.
Our research data (i.e., the corpus of clinical notes) is unavailable for access because it is confidential, and it would be cost prohibitive to sufficiently de-identify such a large corpus of clinical documents to remove all patient identifying data according to the HIPAA standard.
FEV1:
Forced expiratory volume in one second
FVC:
Forced vital capacity
LSTM:
Long-short term memory
RNNs:
Recurrent neural networks
SVC:
Slow vital capacity
SVMs:
Support vector machines
American Lung Association," Trends in COPD (Chronic Bronchitis and Emphysema): Morbidity and mortality" 2013. Avariable from: http://www.lung.org/assets/documents/research/copd-trend-report.pdf [Accessed July 2018].
Global Initiative for Chronic Obstructive Lung Disease, Inc," Global strategy for the diagnosis, management, and prevention of chronic obstructive pulmonary disease," 2017. Avariable from: https://goldcopd.org/wp-content/uploads/2017/11/GOLD-2018-v6.0-FINAL-revised-20-Nov_WMS.pdf [Accessed July 2018].
World Health Organization," Chronic obstructive pulmonary disease (COPD)," 2017. Avariable from: https://www.who.int/en/news-room/fact-sheets/detail/chronic-obstructive-pulmonary-disease-(copd) [Accessed September 2019].
World Health Organization," The top 10 causes of death," 2018. Avariable from: https://www.who.int/news-room/fact-sheets/detail/the-top-10-causes-of-death [Accessed September 2019].
Barr RG, Bluemke DA, Ahmed FS, et al. Percent emphysema, airflow obstruction, and impaired left ventricular filling. N Engl J Med. 2010;362(3):217–27.
Wang X, Sontag D, Wang F. "Unsupervised learning of disease progression models," KDD'14 Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: KDD; 2014. p. 85–94. https://dl.acm.org/citation.cfm?id=2623330.
Xiao H, Gao J, Vu L, et al. "Learning temporal state of diabetes patients via combining behavioral and demographic data," KDD'17 Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Canada: KDD; 2017. p. 2081–9.
Baytas IM, Xiao C, Zhang X, et al. "Patient subtyping via time-aware lstm networks," Kdd'17 Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Canada: KDD; 2017. p. 65–74.
Che C, Xiao C, Liang J, et al. "An RNN Architecture with Dynamic Temporal Matching for Personalized Predictions of Parkinson's Disease," SIAM International Conference on Data Mining (SDM 2017). Siam; 2017. p. 198–206. https://epubs.siam.org/doi/book/10.1137/1.9781611974768.
Bengio Y, Simard P, Frasconi P. Learning long-term dependencies with gradient descent is difficult. IEEE Trans Neural Netw. March 1994;5(2):157–66.
Pham T, Tran T, Phung D, et al. "Deepcare: A deep dynamic memory model for predictive medicine," The 20th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining (PAKDD 2016). New Zealand; 2016. p. 30–41.
Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. 1997;9(8):1735–80.
Tang C, Zhang H, Lai KH, et al. "Developing a regional classifier to track patient needs in medical literature using spiral timelines on a geographical map," IEEE International Conference on Bioinformatics and Biomedicine (BIBM 2017). USA: IEEE; 2017. p. 874–9.
Hewagamag KP, Hirakawa M, Ichikawa T. "Interactive visualization of spatiotemporal patterns using spirals on a geographical map," IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC 1999). Japan: IEEE; 1999. p. 296–303.
Blei DM, Lafferty JD. "Dynamic topic models." The 23rd International Conference on Machine Learning (ICML 2006). USA: ACM; 2006. p. 113–20.
The authors would like to thank Zhengmei Xu MD, Laura C Myers MD, and Francois Bastardot MD for clinical review as subject matter experts or for thoughtful comments.
This article has been published as part of BMC Medical informatics and Decision Making Volume 19 Supplement 8, 2019: Selected articles from the IEEE BIBM International Conference on Bioinformatics & Biomedicine (BIBM) 2018: medical informatics and decision making (part 2). The full contents of the supplement are available online at https://bmcmedinformdecismak.biomedcentral.com/articles/supplements/volume-19-supplement-8.
This work was partially funded by the Partners Innovation Fund, the CRICO/Risk Management Foundation of the Harvard Medical Institutes Incorporated, the National Natural Science Foundation of China Projects No. U1636207, No.91546105, and the Shanghai Science and Technology Development Fund No. 19511121204, No. 16JC1400801.
Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, MA, USA
Chunlei Tang, Joseph M. Plasek, Min-Jeoung Kang, David W. Bates & Li Zhou
Clinical and Quality Analysis, Partners HealthCare System, Boston, MA, USA
Chunlei Tang & David W. Bates
Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai, China
Haohan Zhang & Yun Xiong
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA
Haohan Zhang
Loomis Chaffee School, Windsor, CT, USA
Haokai Sheng
Chunlei Tang
Joseph M. Plasek
Min-Jeoung Kang
Yun Xiong
David W. Bates
Li Zhou
All authors provided substantial contribution to the conception and design of this work, its data analysis and interpretation, and helped draft and revise the manuscript. All the authors are accountable for the integrity of this work. All authors read and approved the final manuscript.
Correspondence to Min-Jeoung Kang.
This study was approved by the Partners Healthcare Institutional Review Board (IRB). The informed consent requirement was waived by the IRB due to the low risks of the study.
Tang, C., Plasek, J.M., Zhang, H. et al. A temporal visualization of chronic obstructive pulmonary disease progression using deep learning and unstructured clinical notes. BMC Med Inform Decis Mak 19 (Suppl 8), 258 (2019). https://doi.org/10.1186/s12911-019-0984-8
Pulmonary disease, chronic obstructive
neural networks (computer)
When would we detect a tiny, meter-sized natural satellite in a geostationary orbit?
A tiny (meter-sized, maybe 10,000 kg) natural satellite could be trapped in a geostationary orbit. I have wondered for quite some time:
When and how are we able to detect these satellites?
I assume the distance of 35,700 km is way too far for naked-eye detection. So the next realistic opportunity would have been Galilei, who first used telescopes for scientific observations of the sky. Could he have detected such an object?
Now even if he had had enough resolution, he certainly didn't systematically cover the whole 4$\pi$ steradians of the sky (especially as the satellite could be synchronous with the other side of the Earth, in which case he could never have seen it).
Then, would it have been at the time of Hubble (because of the Mount Wilson Observatory and other similarly powerful telescopes)? Would it have been when quality equipment for hobby astronomers became cheap and therefore widespread enough (to cover huge areas of the sky)? Or would we - until today - not be able to detect such objects?
To answer this question, one needs to consider both the technical ability as well as the area of the sky that is covered.
observational-astronomy amateur-observing
Mario Krenn
$\begingroup$ If you clear out everything that comes after "Different Interpretation" and really make this question about detection of a natural satellite, the question may be better received here. Right now I think it will be closed as off-topic. You might consider asking about the "ancient civilization" angle on a different Stack Exchange site, possibly Worldbuilding SE or possibly Skeptics SE, though I'm not sure about that. $\endgroup$ – uhoh Mar 12 '19 at 13:44
$\begingroup$ Keep in mind that geostationary orbits only exist over limited latitudes. $\endgroup$ – Carl Witthoft Mar 12 '19 at 18:18
$\begingroup$ Thanks for the edit, it looks much better now! $\endgroup$ – uhoh Mar 13 '19 at 4:43
tl;dr: At distances far enough from Earth that the motion with respect to the stars was slow, a serendipitous survey photographic plate from a large enough telescope might catch a trail, and in a doubly serendipitous situation it might have been a short exposure, duplicated on the next night, an Earth orbit suspected, and a hunt for a second Earth satellite begun.
However, starting in the 1960s and 1970s, radar and visual scans for artificial satellites would have found this natural satellite in Earth orbit if it were low enough.
I'll start with @CarlWitthoft's 5 meter asteroid; refer to this answer and especially this answer. Two equivalent equations for the absolute magnitude of an asteroid are:
$$ H = C - 5 \log_{10} D - 2.5 \log_{10} p_V$$
where $H$ is the absolute magnitude, $p_V$ is the albedo, $D$ is the diameter in km, and $C$ = 15.618, and
$$M_{Abs} = 5 \left(\log_{10}(1329) -\frac{1}{2}\log_{10}(\text{albedo}) -\log_{10}(D_{km})\right).$$
A 5 meter diameter asteroid with an albedo of 0.1 has an absolute magnitude of +29.6.
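As a quick numerical check of these two (equivalent) formulas, here is a minimal Python sketch using the 5 m diameter and 0.1 albedo assumed above (the function name is just illustrative):

```python
import math

def absolute_magnitude(diameter_km, albedo):
    """H = C - 5*log10(D) - 2.5*log10(p_V), with C = 15.618 and D in km."""
    C = 15.618
    return C - 5 * math.log10(diameter_km) - 2.5 * math.log10(albedo)

# 5 m = 0.005 km diameter, albedo 0.1
print(round(absolute_magnitude(0.005, 0.1), 1))  # ~ +29.6
```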
The apparent magnitude from this answer:
Knowing the absolute magnitude of an object, you calculate the apparent magnitude $m$ using:
$$ m = M_{Abs} + 5 \log_{10}\left(\frac{d_{SR} \ d_{RE}}{1 \ \text{AU}^2 O(1)}\right), $$
where $d_{SR}$ and $d_{RE}$ are the Sun-satellite and satellite-Earth distances, each normalized by 1 AU, and the factor $O(1)$ is the phase integral, of order unity, taking into account the angular difference between the direction of illumination and the direction of viewing. In an order of magnitude calculation, this only becomes really significant when the body moves between the Sun and the viewer. See https://en.wikipedia.org/wiki/Absolute_magnitude#Solar_System_bodies_(H).
Let's pick two distances. One is the geostationary distance, where the satellite would appear to hover above the observer, probably drifting up and down in a roughly analemma-like shape because the Earth's oblateness will eventually tilt the orbit. See Geostationary orbit; Orbital stability. It will have a distance to Earth as close as 36,000 km.
The other is a low Earth orbit, but high enough that it will not decay due to drag too soon. Call it 1000 km altitude, or a circular orbit with a 7378 km semi-major axis.
Plugging all of this into the equation above, I get:
orbit                      closest distance    visual magnitude
Geosynchronous altitude    36,000 km           +20.6
Low Earth Orbit            1,000 km            +16.7
In low Earth orbit the apparent magnitude is almost as bright as Pluto, but it's going to be moving pretty fast. $\sqrt{GM/a}$ gives 7350 m/s, which at a distance of 1000 km is about 0.4 degrees per second. Any large telescope that's being used in astronomy will be tracking the motion of stars, or close to that, so the object will leave a fast +17 magnitude track rather than a dot, and last only a fraction of a second. That probably wouldn't expose a photographic plate, or if it did it would be dismissed as an artifact, meteor, or scratch. Visually it wouldn't be noticed.
At GEO-type distances and +20.6 magnitude, the object would move about 0.25 degree in a minute, so it might also be captured on a photograph, but by the time the plate was developed it would be impossible to know when it appeared in a long exposure. However, if the exposure (say at the Hale 200 inch telescope) were short, it really is possible that a short-term trajectory on the celestial sphere could be determined. The problem is that nobody would suspect that it was in Earth orbit, and they would extrapolate to a heliocentric orbit and never find it again.
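As a rough cross-check of the angular rates quoted in the last two paragraphs (a sketch; a standard value of Earth's GM and the sidereal day length are assumed):

```python
import math

GM = 3.986e14            # Earth's gravitational parameter, m^3/s^2

# LEO case: circular orbit with 7378 km semi-major axis, seen from ~1000 km away
a = 7378e3
v = math.sqrt(GM / a)                     # ~7350 m/s
rate_leo = math.degrees(v / 1000e3)       # ~0.4 deg/s across the sky

# GEO case: a geostationary object is fixed over the ground, so against the
# stars it drifts at the sidereal rate
rate_geo = 360.0 / 86164.0 * 60.0         # ~0.25 deg/min

print(round(v), round(rate_leo, 2), round(rate_geo, 2))
```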
If the plate happened to be a series and there was another exposure of the same patch of sky the next night, then they would see it again and get pretty suspicious that it was in Earth orbit.
However, in the post-Sputnik Cold War era, radar and optical searches of the sky for objects in Earth orbit became particularly interesting.
So I would say that satellite surveys (both optical and radar) in the 1960's and 1970's would be the first likely candidates to find this 5 meter, 0.1 albedo satellite.
For some insight on optical tracking, see the two videos linked in Are commercial communications satellites in GEO being constantly monitored by telescopes?. Currently these links will take you to a new tab with the YouTube video:
https://www.youtube.com/watch?v=8ebIAUjFfZM
https://www.youtube.com/watch?v=4FXX1kSNljU
To first order: the ratio of the Moon's radius to distance from Earth is
$ \frac{1740e3}{380e6} = 0.004578947 $
and the ratio of a 5-m radius satellite at geosync orbit is roughly
$\frac{5}{36e6 } = 1.388889e-07 $
This means, for similar albedo, the light reaching your telescope (or eye) would be $(\frac{1.388889e-07}{0.004578947})^2 = 9.200339e-10 $ as much light as the full moon. You're not going to see it even with a good telescope.
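A quick check of this ratio, and of the apparent magnitude it implies (a sketch; the full-moon magnitude of about −12.7 is an assumption not stated in the answer):

```python
import math

moon_ratio = 1740e3 / 380e6                   # ~0.00458
sat_ratio = 5 / 36e6                          # ~1.39e-7
flux_ratio = (sat_ratio / moon_ratio) ** 2    # ~9.2e-10 of the full moon

m_full_moon = -12.7                           # assumed value
m_sat = m_full_moon - 2.5 * math.log10(flux_ratio)
print(f"{flux_ratio:.2e}", round(m_sat, 1))   # ~9.2e-10, roughly +10
```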
As the comment points out, I was too glib there. If you knew where to look, a decent 20 cm (aka 8-inch) telescope can easily show an object of that apparent magnitude. The nice thing about geostationary is that you can spend plenty of nights sweeping the possible sky regions; the satellite won't move.
Carl Witthoft
$\begingroup$ That's apparent magnitude +10, easily visible in a 20cm telescope. $\endgroup$ – Mike G Mar 12 '19 at 20:11
Credit, funding, margin, and capital valuation adjustments for bilateral portfolios
Claudio Albanese1,2,
Simone Caenazzo1 &
Stéphane Crépey3
We apply the XVA theoretical framework of Albanese and Crépey (2017) to the concrete setup of a bank engaged in bilateral trade portfolios, whereby so-called contra-liabilities and cost of capital are charged by the bank to its clients, on top of the fair valuation of counterparty risk, in order to account for the incompleteness of this risk. The transfer of the residual reserve credit capital from shareholders to creditors at bank default results in a unilateral CVA, consistent with the regulatory requirement that capital should not diminish as an effect of the sole deterioration of the bank credit spread. Our funding cost for variation margin (FVA) is defined asymmetrically, since there is no benefit in holding excess capital in the future. Capital is fungible as a source of funding for variation margin, causing a material FVA reduction. We introduce a specialist initial margin lending scheme that drastically reduces the funding cost for initial margin (MVA). Our capital valuation adjustment (KVA) is defined as a risk premium, i.e. the cost of remunerating shareholder capital at risk at some hurdle rate.
Albanese and Crépey (2017) developed an XVA theoretical framework based on a capital structure model acknowledging the impossibility for a bank to replicate jump-to-default related cash flows. Their approach results in a two-step XVA methodology.
First, the so-called contra-assets (CA) are valued as the expected counterparty default losses and funding expenditures. These expected costs can be represented as the sum between the valuation, dubbed CR, of counterparty risk to the bank as a whole (or "fair valuation" of counterparty risk), plus an add-on compensating bank shareholders for a wealth transfer to creditors, corresponding to the so-called contra-liabilities (CL), triggered by the impossibility for the bank to hedge its own jump-to-default exposure.
Second, a KVA risk premium is computed as the cost of a sustainable remuneration of the shareholder capital at risk earmarked to absorb the exceptional (beyond expected) losses due to the impossibility for the bank to replicate counterparty jump-to-default cash flows.
The all-inclusive XVA add-on appears as
$$ \text{CA} +\text{KVA}=\text{CR} +\text{CL}+\text{KVA}. $$
This formula is applied for every new deal or tentative deal, incrementally on a run-off basis in the portfolios with and without the new deal, in order to derive the so-called funds transfer price (FTP) of the deal. The corresponding pricing policy is interpreted as the cost for the bank of the possibility to run off its portfolio, in line with shareholder interest, from any future time onward if wished. This "soft landing" option is key from a regulator's point of view, as it guarantees that the bank should not be tempted to go into snowball or Ponzi-like schemes where ever more trades are entered for the sole purpose of funding previously entered ones.
In the present paper we apply this framework to the concrete setup of a bank engaged in bilateral trade portfolios. In this context, the discounted expectations of losses due to the default of counterparties or of the bank itself are respectively known as CVA (credit valuation adjustment) and DVA (debt valuation adjustment). Counterparty risk mitigants include the variation margin (VM), tracking the mark-to-market of client portfolios, and the initial margin (IM), set as a cushion against gap risk, which is the risk of slippage between the portfolio and its variation margin during liquidation periods. The cost of funding cash collateral for the variation margin is known as the funding valuation adjustment (FVA), while the cost of funding segregated collateral posted as initial margin is the margin valuation adjustment (MVA). Contra-liability counterparts of the FVA and the MVA arise as the FDA (funding debt adjustment) and the MDA (margin debt adjustment). The contra-liability component of a unilateral CVA is dubbed CVA\(^{\text{CL}}\).
The main contributions of this paper are the concrete equations of Proposition 4.1 for the corresponding CA=CVA+FVA+MVA and KVA metrics, the XVA algorithm and the numerical results on real datasets. The refined FVA in (29) captures the intertwining of the FVA and economic capital, which leads to a significantly lower FVA as a result of the fungibility of economic capital (on top of reserve capital) as a source of funding for variation margin. The alternative MVA formula (30) shows how a specialist initial margin lending scheme may drastically reduce the funding cost for initial margin (MVA).
Assumptions are emphasized in bold throughout the paper.
XVA conceptual framework
This section provides a brief recap of the XVA methodology that arises from Albanese and Crépey (2017).
We consider a bank made of four different floors. The intermediate floors (CVA and Treasury) are in charge of filtering out counterparty risk and its risky funding implications from the contracts with the clients of the bank. In the case of the Treasury there are two desks on the same floor (sharing the same bank account): the FVA desk, in charge of funding the VM, and the MVA desk, in charge of funding the IM. The upper floor is the management in charge of the KVA payments to the shareholders, i.e. of the dividend distribution policy of the bank. Thanks to the upper floors of the bank, the traders of the bottom (dubbed clean) floor can focus on the management of the market risk of their respective business lines, ignoring counterparty risk and its capital and funding implications.
CVA payments by clients flow into a reserve credit (RC) account used by the CVA desk for coping with expected counterparty default losses. FVA and MVA payments flow into a reserve funding (RF) account used by the Treasury for coping with expected funding expenditures. KVA payments flow into a risk margin (RM) account from which they are gradually released by the bank management to shareholders as a remuneration for their capital at risk. We assume that all bank accounts are continuously reset to their theoretical target level. Therefore, the relations
$$ \left\{ \begin{array}{l} \text{RC}=\text{CVA} \text{ and } \text{RF}=\text{FVA}+\text{MVA}, \text{ hence } \text{RC}+\text{RF}=\text{CVA}+\text{FVA}+\text{MVA}=\text{CA}\\ \text{RM}=\text{KVA} \end{array} \right. $$
hold at all times. In particular, much like with futures, the trading position of the bank (clean desks, CVA desk and Treasury altogether) is reset to zero at all times, but it generates a non-vanishing (unless perfectly hedged) trading loss-and-profit process L, or loss process for brevity. The KVA payments by the management of the bank come on top of the trading gains as an additional contribution to shareholder dividends, which corresponds to risk compensation.
In order to focus on counterparty risk and XVA analysis, we assume throughout the paper that the clean desks of the bank are perfectly hedged, i.e. their trading loss is zero. Hence, only the counterparty risk related cash flows remain and we can concentrate on the activity of the CVA floor, the Treasury and the management of the bank.
The default of the bank is modeled as a totally unpredictable time τ calibrated to the bank CDS spread, which we view as the most reliable and informative credit data regarding anticipations of markets participants about future recapitalization, government intervention, etc. Assuming instantaneous liquidations upon defaults, the time horizon of the model is \(\bar {\tau }=\tau \wedge T,\) where T is the final maturity of the portfolio.
We assume that, at the bank default time τ , an exceptional (e.g. operational) loss occurs and wipes out any residual risk capital (shareholder capital at risk and risk margin) and reserve funding capital, whereas the residual credit capital is transferred to creditors, who need it for coping with counterparty defaults after the bank default (in which case, creditors must mark to market a loss on the derivative position of the bank; similarly, if the creditors unwind the position of a counterparty, they have to recognize a CVA discount).
We consider a pricing stochastic basis \((\Omega,\mathbb {G},\mathbb {Q}),\) with model filtration \(\mathbb {G}=(\mathfrak {G}_{t})_{t\in \mathbb {R}_{+}}\) and probability measure \(\mathbb {Q},\) such that all processes of interest are \(\mathbb {G}\) adapted and all random times of interest are \(\mathbb {G}\) stopping times. The corresponding expectation and conditional expectation are denoted by \(\mathbb {E}\) and \(\mathbb {E}_{t}\). All value and price processes are modeled as semimartingales in a càdlàg version.
We denote by r a \(\mathbb {G}\) progressive OIS rate process, where OIS rate stands for overnight indexed swap rate, which is both the best market proxy for a risk-free rate and the reference rate for the remuneration of cash collateral. We write \(\beta _{t}=e^{-\int _{0}^{t} r_{s} ds}\) for the corresponding risk-neutral discount factor. We denote by \(J_{t}=\mathbf{1}_{\{t<\tau\}}\) the survival indicator process of the bank. For any left-limited process Y, we denote by \(\Delta_{\tau} Y=Y_{\tau}-Y_{\tau-}\) the jump of Y at τ and by \(Y^{\tau-}=JY+(1-J)Y_{\tau-}\) the process Y stopped before τ, so that
$$ dY_{t} = dY^{\tau-}_{t}+(- \Delta_{\tau} Y)\, dJ_{t},\, 0\le t\le \bar{\tau}. $$
We say that the process Y is stopped before τ if \(Y=Y^{\tau-}\).
We denote by \(\mathcal {C},\mathcal {F}\) and \(\mathcal {M}\) the cumulative streams of counterparty exposure cash flows and of VM and IM related risky funding cash flows. By risky funding cash flows we mean the funding cash flows other than risk-free accrual of the bank accounts and risk-free remuneration of the collateral. Cash flows are valued by their risk-free discounted \((\mathbb {G},\mathbb {Q})\) conditional expectation, which is assumed to exist for all the cash flows that appear in the paper. Risky funding is implemented in practice as the stochastic integral of predictable hedging ratios against funding assets. Under the above valuation assumption for cash flows, the value process of each of these assets is a martingale modulo risk-free accrual. Hence \(\mathcal {F}\) and \(\mathcal {M}\) are \((\mathbb {G},\mathbb {Q})\) martingales. Moreover, the trading desks of the bank are supposed to be shareholder-centric, in the sense that traders only value the cash flows that affect the shareholders of the bank, i.e. the cash flows received by the bank prior to its default or the transfer from shareholders to creditors of the residual value on their trading account at the bank default time τ.
We have, for \(0\leq t\leq \bar {\tau },\)
The trading loss process L of the bank is a risk-neutral local martingale such that
$$ \begin{aligned} &\beta_{t} dL_{t}=d\left(\beta_{t} \text{CA}_{t}\right)+\beta_{t} \left(d \mathcal{C}^{\tau-}_{t} + d\mathcal{F}^{\tau-}_{t} + d\mathcal{M}^{\tau-}_{t}\right),\, 0\leq t\leq \bar{\tau}, \end{aligned} $$
starting from some initial value L 0=z unknown but immaterial, as only the fluctuations of L matter in capital computations.
In (4) the CA equations express the valuation of the cash flows that affect each of the trading desks before bank default (CVA, FVA and MVA desks, as clean desks disappear from the picture under our perfect clean hedge assumption). Moreover, the terminal cash flow in the CVA equation corresponds to the freezing of the residual credit capital \(\text{RC}_{\tau-}=\text{CVA}_{\tau-}\) (cf. (2)), which is transferred from bank shareholders to creditors in case of default of the bank. This results in a CVA terminal condition \(\text{CVA}_{T}=0\) on \(\{T<\tau\}\) and \(\Delta_{\tau} \text{CVA}=0\) on \(\{\tau<T\}\), as embedded in the CVA equation in (4). This comes in contrast with the terminal conditions \(\text {FVA}_{\bar {\tau }}=\text {MVA}_{\bar {\tau }}=0,\) which hold by our assumption that the residual funding capital is used for absorbing the exceptional loss of the bank at time τ. In line with the respective definitions (see "Introduction" section), we have
$$\begin{aligned} &\text{CR}_{t}=\mathbb{E}_{t}\int_{t}^{\bar{\tau}} \beta_{t}^{-1}\beta_{s} \left(d\mathcal{C}_{s}+d\mathcal{F}_{s}+d\mathcal{M}_{s}\right),\, &\text{CL}_{t}=\text{CA}_{t}-\text{CR}_{t}, \end{aligned} $$
from which the expressions stated for CR and CL in (4) result by the martingale properties of \(\mathcal {F}\) and \(\mathcal {M}\) and the CA equations in (4).
Note that the CVA, FVA and MVA equations in (4) are equivalent to the above-mentioned terminal conditions, along with martingale conditions on the following processes on \([0,\bar {\tau } ]\), which correspond to the trading loss processes of the corresponding trading desks:
$$ \begin{aligned} &\beta_{t} d{L}^{cva}_{t} =d\left(\beta_{t}{\text{CVA}}^{\tau-}_{t}\right)+\beta_{t} d\mathcal{C}^{\tau-}_{t}\\ &\beta_{t} d{L}^{fva}_{t}= d(\beta_{t}{\text{FVA}}_{t}) +\beta_{t} d\mathcal{F}^{\tau-}_{t}\\ &\beta_{t} d{L}^{mva}_{t}= d(\beta_{t}{\text{MVA}}_{t}) +\beta_{t} d\mathcal{M}^{\tau-}_{t}, \end{aligned} $$
Under our perfect clean hedge assumption, these add up to the trading loss L of the bank (the KVA payments by the management of the bank are not part of the trading losses but risk compensation to shareholders). This yields Eq. (5) for L (noting that the CVA process is stopped before τ). □
We emphasize that Proposition 2.1 is derived from a pure valuation perspective. In most references in the literature, XVA equations are based on hedging arguments. The reason is that previous XVA works were not considering the KVA yet. Under our approach, the KVA is the risk premium for the market incompleteness related to the impossibility for the bank to replicate counterparty default losses. For consistency, our KVA treatment requires a pure valuation (as opposed to hedging) treatment of contra-assets and contra-liabilities.
Note that the CVA formula in (4) is in fact a fixed-point equation, which remains to be shown well posed in a suitable space of processes. Even if not visible at that stage, this remark also applies to the FVA equation, as the risky funding cash flows \(\mathcal {F}\) typically depend on the FVA itself (cf. e.g. (17), where CA includes an FVA term).
In the setup of Albanese and Crépey (2017), valuation is not price, which entails an additional deduction in the form of the KVA risk premium devised by the management of the bank in order to compensate the shareholders for their capital at risk. In the absence of reliable information about it at the time horizon of XVA computations (which can be as long as decades), we assume that the historical probability measure \(\mathbb {P}\) required for capital calculations coincides with the pricing measure \(\mathbb {Q}\), the discrepancy between \(\mathbb {P}\) and \(\mathbb {Q}\) being left to model risk.
Since L fluctuates over time according to (5), economic capital \(\text{EC}=\text{EC}_{t}(L)\) needs to be dynamically earmarked by the bank in order to absorb exceptional losses (beyond the expected level of the losses already accounted for by reserve capital). As shown in Albanese and Crépey (2017, Sections 4.3, 6.5 and 8.1), the size of the risk margin (RM) account required for remunerating shareholder capital at risk (SCR) at a constant hurdle rate h throughout the life of the portfolio is
$$ \begin{aligned} \text{KVA}_{t} =h \mathbb{E}_{t} \int_{t}^{\bar{\tau}} e^{-\int_{t}^{s} (r_{u} +h) du} \text{EC}_{s}(L) ds,\,t\in[0,\bar{\tau}], \end{aligned} $$
where \(\text{EC}_{s}(L)\) is computed based on a
$$ \text{97.5\% expected shortfall (ES) of } \int_{s}^{s+1}\beta_{s}^{-1} \beta_{u} dL_{u} \text{ conditional on } \mathfrak{G}_{s}, $$
which we denote by \(\text{ES}_{s}(L)\).
The " +h" in the discount factor in (7) reflects the fact that the risk margin RM=KVA (cf. (2)) is loss-absorbing, hence part of EC, so that shareholder capital at risk reduces to SCR = EC − KVA. As a further consequence of this loss-absorbing feature of the risk margin, an increase of economic capital above ES may be required in order to ensure the consistency condition KVA≤EC, i.e. EC−KVA=SCR≥0. This results in a fixed-point problem (7), where
$$ \begin{aligned} \text{EC}_{t}(L)=\max\left(\text{ES}_{t}(L),\text{KVA}_{t}\right),\, t\in[0,\bar{\tau}]. \end{aligned} $$
For KVA computations entailing capital projections over decades, an equilibrium view based on Pillar II economic capital (EC) is more attractive than the ever-changing Pillar I regulatory charges supposed to approximate it (see Pykhtin (2012)). However, Pillar I regulatory capital requirements could be incorporated into our approach, if desired, by replacing \(\text{ES}=\text{ES}_{t}(L)\) in (9) by its maximum with the regulatory capital pertaining to the portfolio.
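As a minimal illustration of how (7)–(9) can be evaluated once a term structure of expected shortfall is available, the following Python sketch runs the backward recursion on a time grid, assuming constant r and h and a deterministic ES profile (an explicit scheme; all names are illustrative):

```python
import numpy as np

def kva_profile(es, dt, r, h):
    """Backward discretization of KVA_t = h E_t ∫ e^{-(r+h)(s-t)} EC_s ds,
    with EC_s = max(ES_s, KVA_s) as in (9), for a deterministic ES term
    structure `es` on a grid of step `dt` (in years)."""
    n = len(es)
    kva = np.zeros(n)
    disc = np.exp(-(r + h) * dt)
    for i in range(n - 2, -1, -1):
        ec = max(es[i], kva[i + 1])   # explicit proxy for max(ES, KVA)
        kva[i] = disc * kva[i + 1] + h * ec * dt
    return kva

# example: a 10-year, linearly amortizing ES profile on a quarterly grid
es = np.linspace(100.0, 0.0, 41)
print(kva_profile(es, 0.25, r=0.02, h=0.105)[0])
```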
Bilateral trading cash flows
In this paper, we assume that the bank is engaged in bilateral trading of a derivative portfolio split into several netting sets corresponding to counterparties indexed by i=1,…,n, with default times \(\tau_{i}\) and survival indicators \(J^{i}\). The bank is also default prone, with default time τ and survival indicator \(J\).
We suppose that all these default times are positive and admit a finite intensity. In particular, defaults occur at any given \(\mathbb {G}\) predictable stopping time with zero probability, so that such events can be ignored in all computations.
Exposures at defaults
Let \(\text {MtM}^{i}_{t}\) be the mark-to-market of the i-th netting set, i.e. the trade additive risk-neutral conditional expectation of future discounted promised cash flows, ignoring counterparty risk and assuming risk-free funding. Let \(\text {VM}^{i}_{t}\) denote the corresponding variation margin, counted positively when received by the bank. Hence,
$$ P^{i}_{t} = \text{MtM}^{i}_{t} - \text{VM}^{i}_{t} $$
is the net spot exposure of the bank to the i-th netting set. In addition to the variation margin \(\text {VM}^{i}_{t}\) that flows between them, the bank and counterparty i post respective initial margins \(\text{PIM}^{i}_{t}\) and \(\text {RIM}^{i}_{t}\) in some segregated accounts.
In practice, there is a positive liquidation period, usually a few days, between the default of a party and the liquidation of its portfolio. The gap risk of slippage of MtM\(^{i}_{t}\) and of unpaid contractual cash flows during the liquidation period is the motivation for the initial margin.
A positive liquidation period is explicitly introduced in Armenti and Crépey (2017a, b) and Crépey and Song (2016) (see also Brigo and Pallavicini (2014)) and involves introducing the random variables
$$ \text{MtM}^{i}_{\tau_{i} + \delta{t}} + \delta \text{MtM}^{i}_{\tau_{i} + \delta{t}} - \text{VM}^{i}_{\tau_{i}}, $$
where δ t is the length of the liquidation period and \(\delta \text {MtM}^{i}_{\tau _{i} + \delta {t}}\) is the accrued value of all cash flows owed by the counterparty to the bank during the liquidation period.
To simplify the notation in this paper, we take the limit as δ t→0 and approximate \(\text {MtM}^{i}_{\tau _{i} + \delta {t}} + \delta \text {MtM}^{i}_{\tau _{i} + \delta {t}}\) with \(\widehat {\text {MtM}}^{i}_{\tau _{i}}\), and therefore (11) by \(Q^{i}_{\tau _{i}}=\widehat {\text {MtM}}_{\tau _{i}}^{i}- \text {VM}_{\tau _{i}}^{i}\), for a suitable \(\mathbb {G}\) optional process \(\widehat {\text {MtM}}^{i}\). A related issue is wrong-way risk, i.e. the risk of adverse dependence between the counterparty exposure and the credit risk of the parties. As illustrated in Crépey and Song (2016), this impact can also be captured in the modeling of the \(Q^{i}_{\tau _{i}}\).
We denote by R and R i the recovery rate of the bank and of counterparty i, for i=1,…,n.
The exposure of the bank to the default of counterparty i=1,…,n at time \(\tau_{i} \leq \tau\wedge T\) is
$$ \begin{aligned} \left(1-R_{i}\right) \left(Q^{i}_{\tau_{i}} - \text{RIM}^{i}_{\tau_{i}}\right)^{+}. \end{aligned} $$
The exposure of counterparty i=1,…,n to the default of the bank at time \(\tau\leq\tau_{i}\wedge T\) is
$$\begin{aligned} (1-R) \left(Q^{i}_{\tau} - \text{PIM}^{i}_{\tau }\right)^{-}. \end{aligned} $$
By symmetry, it is enough to prove (12). Let \(C^{i}=\text{VM}^{i}+\text{RIM}^{i}\) and
$$\epsilon_{i}=\left(Q^{i}_{\tau_{i}}-\text{RIM}^{i}_{\tau_{i}}\right)^{+}=\left(\widehat{\text{MtM}}^{i}_{\tau_{i}}- {C}^{i}_{\tau_{i}} \right)^{+}. $$
When counterparty i defaults, the bank receives from counterparty i
Also accounting for the unwinding of the clean hedge of netting set i at the time of liquidation of counterparty i, the loss of the bank in case of default of counterparty i appears as (assuming \(\tau_{i}\leq\tau\))
As an immediate corollary to Lemma 3.1, denoting by \(\boldsymbol\delta_{t}\) a Dirac measure at time t:
The cumulative cash flow stream \(\mathcal {C}\) of counterparty exposures satisfies, for \(0\le t\le \bar {\tau },\)
$$ \begin{aligned} d\mathcal{C}_{t}\!&=\! \sum\limits_{i}\! \left(1\,-\,R_{i}\right) \!\left(Q^{i}_{\tau_{i}} \!- \text{RIM}^{i}_{\tau_{i}}\right)^{+}\! \boldsymbol\delta_{\tau_{i}}\!(dt\!)\,-\,\!\sum\limits_{i}\! J^{i}_{\tau-} (1\,-\,R)\! \left(Q^{i}_{\tau}\! -\! \text{PIM}^{i}_{\tau }\right)^{-}\! \boldsymbol\delta_{\tau}\!(dt\!)\\ d\mathcal{C}^{\tau-}_{t}&= \sum\limits_{i} J_{\tau_{i}} (1-R_{i}) \left(Q^{i}_{\tau_{i}} - \text{RIM}^{i}_{\tau_{i}}\right)^{+} \boldsymbol\delta_{\tau_{i}}(dt) \\ \left(-\Delta_{\tau}\mathcal{C} \right)&=\sum\limits_{i} J^{i}_{\tau-}(1-R) \left(Q^{i}_{\tau} - \text{PIM}^{i}_{\tau }\right)^{-} \boldsymbol\delta_{\tau }(dt)\\ & \quad- \sum\limits_{i;\tau_{i}=\tau}(1-R_{i}) \left(Q^{i}_{\tau_{i}} - \text{RIM}^{i}_{\tau_{i}}\right)^{+} \boldsymbol\delta_{\tau_{i}}(dt). \end{aligned} $$
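A direct numerical transcription of the exposures (12)–(13) may help fix ideas; this is a sketch only, vectorized over netting sets, with illustrative array names:

```python
import numpy as np

def default_exposures(Q, RIM, PIM, R_cpty, R_bank):
    """Per-netting-set exposures of (12)-(13): the bank's loss if counterparty
    i defaults, and counterparty i's loss if the bank defaults.
    Q: gap exposures Q^i, RIM/PIM: received/posted initial margins,
    R_cpty: counterparty recoveries, R_bank: bank recovery."""
    loss_if_cpty_defaults = (1.0 - R_cpty) * np.maximum(Q - RIM, 0.0)     # (12)
    loss_if_bank_defaults = (1.0 - R_bank) * np.maximum(-(Q - PIM), 0.0)  # (13)
    return loss_if_cpty_defaults, loss_if_bank_defaults
```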
Margining and funding schemes
Variation margin typically consists of cash that is re-hypothecable, meaning that received variation margin can be reused for funding purposes, and is remunerated at OIS by the receiving party. Initial margin typically consists of liquid assets deposited in a segregated account, such as government bonds, which naturally pay coupons or otherwise accrue in value. The poster of initial margin receives no compensation, except for the natural accrual or coupons of its collateral.
The trading strategy of the bank needs funding for raising variation margin and initial margin that need to be posted as collateral. As happens in practice in the current regulatory environment, the clean hedge of the derivative portfolio of the bank is assumed to be with other financial institutions and attracts variation margin at zero threshold (i.e. is fully collateralized), so that the variation margin posted by the bank on its hedge is constantly equal to \(\sum _{i}J^{i} \text {MtM}^{i}\). Hence, the bank posts \(\sum _{i}J^{i} \text {MtM}^{i}\) as VM on the hedge and receives \(\sum _{i}J^{i} \text {VM}^{i}\) as VM on client trades, which nets to
$$\sum\limits_{i} J^{i}_{t}\text{MtM}^{i}_{t}-\sum\limits_{i }J^{i}_{t}\text{VM}^{i}_{t} = \sum\limits_{i} J^{i}_{t} P^{i}_{t}. $$
Moreover, the bank can use reserve capital as variation margin. Note that the marginal cost of capital for using capital as a funding source for variation margin is nil, because when one posts cash against variation margin, the valuation of the collateralized hedge is reset to zero and the total capital amount does not change. If, instead, the bank were to post capital as initial margin, then the bank would record a "margin receivable" entry on its balance sheet, which however cannot contribute to capital since this asset is too illiquid and impossible to unwind without unwinding all underlying derivatives. Hence, capital can only be used as VM, while it seems that IM must be borrowed entirely.
Under our continuous reset assumption (2), the amount (RC+RF) of reserve capital that can be used as VM coincides at all times with the theoretical CA value. The cash held by the bank, whether borrowed or received as variation margin, is deemed fungible across netting sets in a unique funding set. In conclusion,
$$ \begin{aligned} & (\text{VM funding needs})_{t} \,=\,\left(\sum\limits_{i} J^{i}_{t} P^{i}_{t} -\text{CA}_{t}\right)^{+},\, (\text{IM funding needs})_{t}\!=\sum\limits_{i} J^{i}_{t}\, \text{PIM}^{i}_{t}. \end{aligned} $$
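In code, the funding-needs formulas (15) are a one-liner per funding set; a minimal sketch, with the reserve capital CA as the only deduction at this stage (illustrative names):

```python
import numpy as np

def funding_needs(J, P, PIM, CA):
    """VM and IM funding needs of Eq. (15) at a given time.
    J: 0/1 survival indicators of the counterparties, P: net spot exposures
    MtM^i - VM^i, PIM: posted initial margins, CA: reserve capital."""
    vm_needs = max(float(np.dot(J, P)) - CA, 0.0)
    im_needs = float(np.dot(J, PIM))
    return vm_needs, im_needs
```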
We assume that the bank can invest at the OIS rate \(r_{t}\) and obtain unsecured funding at rate \((r_{t}+\lambda_{t})\) for funding VM and \((r_{t}+\bar {\lambda }_{t})\) for funding IM, via two bonds of different seniorities issued by the bank, with respective recoveries R and \(\bar {R}\). Given our standing valuation setup, it must hold that
$$ \begin{aligned} \lambda=(1-R)\gamma,\,\bar{\lambda}=(1-\bar{R})\gamma, \end{aligned} $$
where γ is the risk-neutral default intensity process of the bank.
We denote by \(d\mu_{t}=\gamma_{t}\,dt+dJ_{t}\) the compensated jump-to-default martingale of the bank.
The cumulative cash flow streams \(\mathcal {F}\) and \(\mathcal {M}\) of VM and IM related risky funding cash flows satisfy, for \(0\le t\le \bar {\tau },\)
$$ \begin{aligned} d\mathcal{F}_{t}&= (1-R)\left(\sum\limits_{i} J^{i}_{t-} P^{i}_{t-} -\text{CA}_{t-}\right)^{+} d\mu_{t}, d\mathcal{M}_{t}= \left(1-\bar{R}\right)\left(\sum\limits_{i} J^{i}_{t-} \text{PIM}^{i}_{t-}\right) d\mu_{t}\\ d\mathcal{F}^{\tau-}_{t}&= \left(\sum\limits_{i} J^{i}_{t} P^{i}_{t} -\text{CA}_{t}\right)^{+} {\lambda}_{t} dt,\, d\mathcal{M}^{\tau-}_{t}= \left(\sum\limits_{i} J^{i}_{t} \text{PIM}^{i}_{t}\right) \bar{\lambda}_{t} dt\\ \left(-\Delta_{\tau} \mathcal{F} \right)&= (1-R)\left(\sum\limits_{i} J^{i}_{\tau-} P^{i}_{\tau-}-\text{CA}_{\tau-}\right)^{+},\,\left(-\Delta_{\tau} \mathcal{M}\right) =\left(1-\bar{R}\right)\left(\sum\limits_{i} J^{i}_{\tau-} \text{PIM}^{i}_{\tau-}\right). \end{aligned} $$
In view of the above description, we have
$$\begin{aligned} & d\mathcal{F}^{\tau-}_{t}= (\text{VM funding needs})_{t} {\lambda}_{t} dt,\, d\mathcal{M}^{\tau-}_{t}= (\text{IM funding needs})_{t} \bar{\lambda}_{t} dt\\ &\!\left(\,-\,\Delta_{\tau} \mathcal{F}\right)\,=\, (1\,-\,R)(\text{VM funding needs})_{\tau-},\!\, \left(\,-\,\Delta_{\tau} \mathcal{M} \right)\,=\,\left(\!1\,-\,\bar{R}\right)\!(\text{IM funding needs})_{\tau-}, \end{aligned} $$
$$\begin{aligned} &d\mathcal{F}_{t}=(\text{VM funding needs})_{t-}\left({\lambda}_{t} dt + (1-R) dJ_{t}\right)\\ &d\mathcal{M}_{t}=(\text{IM funding needs})_{t-}\left(\bar{\lambda}_{t} dt + (1-\bar{R}) dJ_{t}\right). \end{aligned} $$
Hence, given (16),
$$\begin{aligned} & d\mathcal{F}_{t}\,=\, (1\,-\,R)(\text{VM funding needs})_{t-} d\mu_{t},\, d\mathcal{M}_{t}\,=\, \left(\!1\,-\,\bar{R}\right)\!\left(\text{IM funding needs}\right)_{t-} d\mu_{t}. \end{aligned} $$
In view of (15), this yields (17). □
Bilateral trading XVA formulas
We work under the technical assumption that each of the martingales
This holds, in particular, when \(\mathbb {G}\) is modeled as the progressive enlargement of a reference filtration \(\mathbb {F}\) by the bank default time τ, in a basic immersion setup where \((\mathbb {F},\mathbb {Q})\) martingales are \((\mathbb {G},\mathbb {Q})\) martingales without jump at time τ (see the comments before Section 3 in Duffie et al. (1996) or in Collin-Dufresne et al. (2004, page 1379) and see the comments following (3.22) and (H.3) or the remarks following Proposition 6.1 in Bielecki and Rutkowski (2001)). The more general case, where the immersion hypothesis is violated and the technical condition (18) does not hold, can also be considered, by dealing explicitly with the underlying enlargement of filtration issue. This is done in Albanese and Crépey (2017).
Proposition 4.1 yields a complete specification of all XVA metrics and of the loss process L required as input data in the KVA computations, in the case of a bank engaged in bilateral trade portfolios. It identifies the FTP (all-inclusive XVA add-on to the entry price) of a new trade as its incremental (CVA+FVA+MVA+KVA), the difference with the complete market formula (FTDCVA−FTDDVA) (cf. "Connection with the Duffie and Huang (1996) formula" section) being explained by the CL wealth transfer and the KVA risk premium, which are triggered by the impossibility for the bank to replicate jump-to-default exposures.
We denote by:
\(\mathcal {L}_{p},p\geq 1,\) the space of progressively measurable processes X over \([0,\bar {\tau }]\) such that \(\mathbb {E} \left [\int _{0}^{\bar {\tau }} |X_{t}|^{p} dt\right ]< +\infty,\)
\(\mathcal {S}_{2},\) the space of adapted càdlàg processes Y over \([0,\bar {\tau }]\) such that \(\mathbb {E} \left [\sup _{t \in [0, \bar {\tau }]} Y_{t}^{2}\right ] < \infty \).
All our XVA processes are sought for in \(\mathcal {S}_{2}\). In particular, we assume that the CVA equation in (4) has at most one solution in \(\mathcal {S}_{2},\) as can be established in the invariance bank default time framework of Albanese and Crépey (2017).
Assuming that r is bounded from below, that the CVA and MVA processes in (19) are in \(\mathcal {S}_{2}\), and that the processes r,λ, and \(\lambda \left (\sum _{i}J^{i} P^{i} -{CVA} -{MVA}\right)^{+}\) are in \(\mathcal {L}_{2}\):
(i) Contra-assets are given as
$$ \begin{aligned} &{}\text{CA}_{t}\,=\,\underbrace{\mathbb{E}_{t}\!\!\! \sum\limits_{\!\!\{i;\,t<\tau_{i} <T\}}\beta_{t}^{-1}\beta_{\tau_{i}} (1-R_{i}) \left(Q^{i}_{\tau_{i}} - \text{RIM}^{i}_{\tau_{i}}\right)^{+}}_{{\mathrm{{CVA}}_{t}}}\!+\underbrace{\mathbb{E}_{t}\int_{t}^{\bar{\tau}} \beta_{t}^{-1} \beta_{s} \bar{\lambda}_{s} \sum_{i} J^{i}_{s} \text{PIM}^{i}_{s} ds }_{{ \text{MVA}_{t}}}\\ &\ \ +\underbrace{\mathbb{E}_{t}\int_{t}^{\bar{\tau}} \beta_{t}^{-1} \beta_{s} \lambda_{s} \left(\sum_{i} J^{i}_{s} P^{i}_{s} - \text{CVA}_{s}-\text{MVA}_{s}-\text{FVA}_{s} \right)^{+} ds }_{{ \text{FVA}_{t}}},\, 0\le t\le \bar{\tau}, \end{aligned} $$
where FVA is the unique solution in \(\mathcal {S}_{2}\) to the backward SDE (BSDE) defined by the last line.
(ii) Contra-liabilities are given as
(iii) The value of counterparty risk to the bank as a whole is given by
i.e. we have
$$ \begin{aligned} &\underbrace{\text{CVA}+\text{FVA}+\text{MVA}}_{\text{CA}} =\\ &\underbrace{\mathrm{{FTDCVA}}-\mathrm{{FTDDVA}}}_{\text{CR}}+ \underbrace{\mathrm{{FTDDVA}}+\text{CVA}^{\text{CL}} +\text{FDA} +\text{MDA}}_{\text{CL}}, \end{aligned} $$
where the different terms are detailed in (19), (20), and (21).
(iv) The bank trading loss process L satisfies the following forward SDE on \([0,\bar {\tau }]\):
$$ \begin{aligned} L_{0}&=z \text{ (the accrued trading loss of the bank at time {0}) and, for } t\in (0,\bar{\tau}],\\ dL_{t} &=d \text{CA}_{t} + \sum_{i} J_{\tau_{i}}(1-R_{i}) \left(Q^{i}_{\tau_{i}} - \text{RIM}^{i}_{\tau_{i}}\right)^{+} \boldsymbol{\delta}_{\tau_{i}}(dt)\\& \quad + \left(\lambda_{t} \left(\sum_{i} J^{i}_{t} P^{i}_{t} -\text{CA}_{t}\right)^{+} +\bar{\lambda}_{t} \sum_{i} J^{i}_{t} \text{PIM}^{i}_{t} -r_{t}\text{CA}_{t} \right) dt. \end{aligned} $$
(v) Assuming the ensuing ES=ES t (L) process (8) in \(\mathcal {L}_{2}\), KVA is the unique solution in \(\mathcal {S}_{2}\) to the following BSDE:
$$ \begin{aligned} \text{KVA}_{t} = h \mathbb{E}_{t} \int_{t}^{\bar{\tau}} e^{-\int_{t}^{s} \left(r_{u} +h\right) du} \max\left(\text{ES}_{s}(L),\text{KVA}_{s}\right) ds,\,t\in[0,\bar{\tau}]. \end{aligned} $$
(vi) The all-inclusive XVA add-on to the entry price for a new deal, which we call funds transfer price (FTP), appears as
$$ \begin{aligned} \text{FTP}&= \underbrace{\Delta\text{CVA} +\Delta\text{FVA} +\Delta\text{MVA}}_{\mathrm{\Delta\text{CA}}} +\underbrace{\Delta\text{KVA}}_{\text{Risk~premium}}\\ &=\underbrace{\Delta\mathrm{{FTDCVA}}-\Delta\mathrm{{FTDDVA}}}_{\Delta\text{CR}}\\ &\quad + \underbrace{\Delta\mathrm{{FTDDVA}}+\Delta\text{CVA}^{\text{CL}} +\Delta\text{FDA} +\Delta\text{MDA}}_{\Delta\text{CL}} +\underbrace{\Delta\text{KVA}}_{\text{Risk~premium}}, \end{aligned} $$
computed on an incremental run-off basis, where all the underlying XVA metrics as well as the processes L to be used as input data in the economic capital and KVA computations are defined as in parts (i) through (v) relative to the portfolios with and without the new deal.
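Operationally, part (vi) means that the FTP of a tentative deal is obtained by running the whole XVA computation twice, on the run-off portfolio with and without the deal, and differencing; schematically (a sketch with an illustrative data layout and purely illustrative numbers):

```python
def funds_transfer_price(xva_with, xva_without):
    """Incremental run-off FTP of Eq. (25): the difference of the all-inclusive
    XVA add-on computed with and without the new deal. Each argument is a dict
    with keys 'CVA', 'FVA', 'MVA', 'KVA'."""
    return sum(xva_with[k] - xva_without[k] for k in ("CVA", "FVA", "MVA", "KVA"))

# purely illustrative numbers
print(round(funds_transfer_price(
    {"CVA": 260.0, "FVA": 4.1, "MVA": 0.0, "KVA": 310.0},
    {"CVA": 240.0, "FVA": 3.9, "MVA": 0.0, "KVA": 290.0}), 2))   # 40.2
```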
(i) Denoting by UCVA (for unilateral CVA) the expression stated in (19) for the CVA, we have, on {t≤τ}:
where the technical assumption (18) was used in the next-to-last equality. As a consequence,
by definition of UCVA. In view of (14), this means that the UCVA process satisfies the CVA equation in (4). Hence CVA = UCVA, by assumed uniqueness of an \(\mathcal {S}_{2}\) solution to this equation.
The MVA expression and FVA equation in (19) follow from the related equations in (4) and Lemma 3.3. In order to prove (i), it only remains to show that the FVA BSDE (last line in (19)) is well posed in \(\mathcal {S}_{2}\). Let \(X_{t}=\sum _{i}J^{i}_{t} P^{i}_{t} -\text {CVA}_{t} -\text {MVA}_{t}\). In terms of the coefficient
$$\begin{aligned} f_{t} (y) =\lambda_{t} \left(X_{t} - y\right)^{+} - r_{t} y,\, y\in\mathbb{R}, \end{aligned} $$
the FVA BSDE is rewritten as
$$ \begin{aligned} \text{FVA}_{t}=\mathbb{E}_{t} \int_{t}^{\bar{\tau}} f_{s} (\text{FVA}_{s}) ds,\, 0\le t\le \bar{\tau}. \end{aligned} $$
For any real \(y,y'\in \mathbb {R}\) and \(t\in [0,\bar {\tau }],\) we have
$${}\begin{aligned} \left(f_{t}(y) - f_{t}(y')\right) (y-y')&=- r_{t} (y-y')^2+\lambda_{t} (y-y')\left(\left(X_{t} - y\right)^{+} -\left(X_{t} - y'\right)^{+}\right)\\ &\quad\leq - r_{t} (y-y')^{2} \leq C (y-y')^{2}, \end{aligned} $$
for some constant C (having assumed r bounded from below and recalling λ≥0). Hence the BSDE coefficient f satisfies the so-called monotonicity condition. Moreover, for \(|y|\leq \bar {y}\), we have:
$$\begin{aligned} \left|f_{\cdot}(y)-f_{\cdot}(0)\right|&=\left|\lambda \left(X - y\right)^{+} - r y - \lambda X^{+} \right|\leq (\lambda+ | r |) \bar{y}. \end{aligned} $$
Hence, assuming that r,λ and \(\lambda \big (\sum _{i}J^{i} P^{i} -\text {CVA} -\text {MVA} \big)^{+}=\lambda X^{+}\) are in \(\mathcal {L}_{2},\) the following integrability conditions hold:
$$\begin{aligned} \sup_{|y|\leq \bar{y}}|f_{\cdot}(y)-f_{\cdot}(0)|\in {\mathcal{L}_{1}},\text{ for any } \bar{y}>0, \quad\text{and}\quad f_{\cdot}(0)\in {\mathcal{L}_{2}}. \end{aligned} $$
Therefore, by application of the general filtration BSDE results of Kruse and Popier (2016, Sect. 5) the FVA BSDE (27) is well posed in \(\mathcal {S}_{2}\).
(ii) Follows from the CL equation in (4) and Lemmas 3.2– 3.3.
(iii) The CR equation in (4) and Lemma 3.2 yield (21), after which (22) follows from (19), (20) and (21).
(iv) Follows from (5) and Lemmas 3.2 – 3.3.
(v) In view of (7) and (9), the KVA process satisfies the BSDE (24). This is a monotonic coefficient BSDE, which can be shown to be well posed in \(\mathcal {S}_{2}\) much like the FVA BSDE in part (i) of the proof (see also Albanese and Crépey (2017, Section 6.3)).
(vi) follows from (i) through (iv) by application of the generic formula (1), applied on an incremental run-off basis for every new trade as recalled after (1).
The regulator says quite explicitly that the bank capital (reserve capital in particular) cannot be seen diminishing as a consequence of the sole deterioration of the bank credit, all else being equal (see Albanese and Andersen (2014, Section 3.1)). In particular, regulators decided that the CVA should be computed unilaterally, as in (19), by contrast with the first-to-default CVA (FTDCVA) in (21). As seen in the above, a unilateral CVA follows naturally from accounting for the wealth transfer corresponding to the transfer of the residual reserve credit from shareholders to creditors upon default of the bank.
In Proposition 4.1, the FVA appears as the solution to a BSDE, through which it depends on all the CA components, including the FVA itself. This might seem in contradiction with our linear valuation rule for cash flows or with the additive appearance of the abstract FVA formula (4). The reconciliation between the two comes from the fact that the FVA is at the same time a value, by definition, and, by (2), the amount RF=FVA held in the reserve funding account. As RC+RF=CA is a deduction from the VM funding needs (cf. (2) and (15)), it follows that \(\mathcal {F}\) in (17) depends on the FVA.
Connection with the Duffie and Huang (1996) formula
The formula (21) for the fair valuation CR of counterparty risk (valuation from the point of view of the bank as a whole) is derived in Duffie and Huang (1996) in the limit case of a perfect market (complete counterparty risk market without trading restrictions). Proposition 4.1 (iii) extends the validity of this fair valuation formula to our incomplete market setup.
Formula (21) is symmetrical, i.e. consistent with the law of one price, in the sense that each term (FTDCVAi−FTDDVAi) in (21) corresponds to the negative of the analogous quantity considered from the point of view of counterparty i. It only involves the first-to-default CVAs and DVAs, where the counterparty default losses are only considered until the first occurrence of a default of the bank or its counterparty in the deal. This is consistent with the fact, first pointed out in Duffie and Huang (1996) and later emphasized in Bielecki and Rutkowski (2002) and Brigo and Capponi (2008), that later cash flows will not be paid.
Since the presence of collateral has a direct reducing impact on FTDCVA/DVA, this formula may give the impression that collateralization achieves a reduction in counterparty risk at no cost to either the bank or the clients. However, in our incomplete market setup, the value CR of counterparty risk from the point of view of the bank as a whole ignores the misalignment of interest between the shareholders and the creditors of a bank. Propositions 4.1 (i) and (ii) give explicit decompositions of the shareholder valuation CA of counterparty risk and of the wealth transfer CL triggered from the shareholders to the creditors by the impossibility for the bank to hedge its own jump-to-default exposure (see Albanese and Crépey (2017, Sections 2.1 and 3.4)). Accounting for the further impossibility for the bank to replicate counterparty default losses, not only these contra-liabilities (CL) but also the cost of capital (KVA) are material to shareholders and need to be reflected in entry prices on top of CR (cf. (1)).
Using economic capital as variation margin
In this section, we account for the FVA reduction provided by the possibility for a bank to post economic capital, on top of reserve capital RC+RF=CA already included in the above, as variation margin. Note that, in practice, uninvested capital of the bank can also be used for that purpose. But, since the amount of uninvested capital is unknown and could as well be zero in the future, capital is conservatively taken in FVA computations as CA+EC.
The quantity \(\text{EC}=\text{EC}_{t}(L)\) corresponds to the amount of economic capital required to cope with the loss process L (cf. (8)–(9)). Accounting for the use of EC as VM, the VM funding needs are reduced from \( \left (\sum _{i} J^{i} P^{i} -\text {CA}\right)^{+} \) to \(\left (\sum _{i} J^{i} P^{i} -{\text {EC}} (L)-\text {CA}\right)^{+} \) in (15). Lemma 3.3 is still valid provided one replaces \( \left (\sum _{i} J^{i} P^{i} -\text {CA}\right)^{+} \) with \(\left (\sum _{i} J^{i} P^{i} -{\text {EC}} (L)-\text {CA}\right)^{+}\) in (17).
As a consequence, instead of an exogenous CA value process as in (19) feeding the dynamics (23) for L, we obtain the following FBSDE system, made of a forward SDE for L coupled with a backward SDE for the FVA:
$$\begin{array}{@{}rcl@{}} L_{0}&=&z~\text{and, for } t\in (0,\bar{\tau}], \\ dL_{t} &=& {d \text{CA}_{t}}+ \sum_{i} J_{\tau_{i}}(1-R_{i}) \left(Q^{i}_{\tau_{i}} - \text{RIM}^{i}_{\tau_{i}}\right)^{+} \boldsymbol\delta_{\tau_{i}}(dt)\\ && + \left(\lambda_{t} \left(\sum_{i} J^{i}_{t} P^{i}_{t}-{\text{EC}}_{t}(L) -\text{CA}_{t}\right)^{+} +\bar{\lambda}_{t} \sum_{i} J^{i}_{t} \text{PIM}^{i}_{t} -r_{t}\text{CA}_{t} \right) dt, \end{array} $$
$$ \begin{aligned} \text{{FVA}}_{t}&=\mathbb{E}_{t}\int_{t}^{\bar{\tau}} \beta_{t}^{-1} \beta_{s} \lambda_{s} \left(\sum_{i} J^{i}_{s} P^{i}_{s} -{\text{EC}}_{s}(L)\right.\\ &\quad\left.-{\text{CVA}_{s}}-{\text{MVA}_{s}}-{\text{FVA}_{s}} {\vphantom{\sum_{i} J^{i}_{s} P^{i}_{s}}}\right)^{+} ds,\, 0\le t\le \bar{\tau} \end{aligned} $$
(whereas CVA and MVA are as in (19) and CA=CVA+FVA+MVA as usual).
Proceeding as in Crépey et al. (2017), one could show that, accounting for the use of economic capital as variation margin, Proposition 4.1 is still valid, provided one replaces \( \left (\sum _{i} J^{i} P^{i} -\text {CA}\right)^{+} \) with \(\left (\sum _{i} J^{i} P^{i} -{\text {EC}} (L)-\text {CA}\right)^{+}\) in all formulas.
Specialist lending of initial margin
If IM is funded by the bank on an unsecured basis, then \(\bar {\lambda }=\lambda \) in (19). However, instead of an unsecured IM funding scheme, one can consider a more efficient scheme where initial margin is funded through a specialist lender that lends only IM and, in case of default of the bank, receives back the portion of IM not used to cover losses.
Hence, the exposure of the specialist lender to the default of the bank is \((1-R) \sum _{i }J^{i}_{\tau } \left (\left (Q^{i}_{\tau }\right)^{-} \wedge \text {PIM}^{i}_{\tau }\right)\). Recalling the risk-neutral valuation condition λ=(1−R)γ in (16), where γ is the risk-neutral default intensity of the bank, the ensuing specialist lender MVAsl is given as:
$$\begin{aligned} \text{{MVA}}^{sl}_{t}=\mathbb{E}_{t}\int_{t}^{\bar{\tau}} \beta_{t}^{-1} \beta_{s} {\lambda_{s}}\sum_{i} J^{i}_{s} \left(\left(Q^{i}_{s}\right)^{-} \wedge \text{PIM}^{i}_{s}\right) ds \end{aligned} $$
(assuming here for simplicity \(\mathbb {G}\) predictable processes \(Q^{i}\) and \(\text{PIM}^{i}\)). By identification with the general form \(\bar {\lambda }_{t} \sum _{i }J^{i}_{t} \text {PIM}^{i}_{t}\) of instantaneous IM costs in this paper (cf. the generic MVA formula in (19)), this specialist lending scheme corresponds to
$$\bar{\lambda}_{t}=\frac{\sum_{i }J^{i}_{t} \left(\left(Q^{i}_{t}\right)^{-} \wedge \text{PIM}^{i}_{t}\right)}{\sum_{i }J^{i}_{t} \text{PIM}^{i}_{t}} {\lambda}_{t}\leq {\lambda}_{t}. $$
In fact, given the very conservative levels of initial margin prescribed by the regulation since the emergence of ISDA's standard initial margin model (SIMM) for bilateral transactions, such a blended spread \(\bar {\lambda }_{t}\) is typically much smaller than the unsecured funding spread λ. Equivalently, the blended recovery rate \(\bar {R}\) in (16), i.e.
$$\begin{aligned} \bar{R}_{t}=\left(1-\frac{\bar{\lambda}_{t}}{\lambda_{t}}\right)+\frac{\bar{\lambda}_{t}}{\lambda_{t}} R \end{aligned} $$
(noting that everything in the paper can be readily extended to a \(\mathbb {G}\) predictable recovery rate process \(\bar {R}\)), is typically much larger than the unsecured borrowing recovery rate R.
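The blended spread and blended recovery above are straightforward to compute from simulated exposures; a minimal sketch, vectorized over netting sets, with illustrative names:

```python
import numpy as np

def blended_im_funding(J, Q, PIM, lam, R):
    """Blended IM funding spread under the specialist lending scheme, and the
    corresponding blended recovery rate, per the two displays above.
    The lender's exposure per netting set is min((Q^i)^-, PIM^i)."""
    lender_exposure = float(np.sum(J * np.minimum(np.maximum(-Q, 0.0), PIM)))
    posted_im = float(np.sum(J * PIM))
    lam_bar = lam * lender_exposure / posted_im if posted_im > 0 else 0.0
    R_bar = (1.0 - lam_bar / lam) + (lam_bar / lam) * R if lam > 0 else 1.0
    return lam_bar, R_bar
```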
Note that, for the argument to be valid, the IM lender does not need to anticipate the nature of future trades, which in the case of a market maker, such as a bank, would be impossible. The argument is robust and independent of future dealings. The IM lender simply needs to know (which is public regulatory information) that the collateral posted by the bank is very conservative, no matter what trades are entered into the future.
Such an IM funding policy is not a violation of pari passu rules, just as repurchase agreements or mortgages are not. It is just a form of collateralised lending, which does not transfer wealth from senior creditors in the baseline case. In practice, specialist lenders are private equity funds. The specialist lending business is still at an early stage.
For similar ideas regarding VM, see Albanese et al. (2013). However, such funding schemes are much more difficult to implement for VM because VM is far larger and more volatile than IM.
The XVA algorithm
Under the approach of this paper, prices of individual trades are no longer computable in isolation. Instead, they can only be computed incrementally with respect to the existing endowment of the bank. This is a major innovation in mathematical finance, which so far has mainly focused on option pricing theory for individual payoffs in isolation. There are antecedents in this direction in the XVA literature, but the new KVA dimension pushes this logic to an unprecedented level. By current industry practice, XVA desks are the desks first consulted in all major trades, whose pros and cons are assessed in terms of incremental XVAs.
This portfolio view raises computational and modeling challenges. Our XVA approach can be implemented by means of nested Monte Carlo simulations for approximating the loss process L required as input data in the KVA computations. Contra-assets (and contra-liabilities if wished) are computed at the same time.
One of the goals of the numerical experiments that follow is to emphasize the impact on the FVA of the funding sources provided by reserve capital and economic capital. Accordingly, we consider the FBSDE (28)–(29) accounting for the use of EC (on top of RC+RF=CA) as VM. Let
$$ \begin{aligned} \text{{FVA}}^{(0)}_{t}=\mathbb{E}_{t}\int_{t}^{\bar{\tau}} \beta_{t}^{-1} \beta_{s} \lambda_{s} \left(\sum_{i} J^{i}_{s} P^{i}_{s} \right)^{+} ds,\end{aligned} $$
which corresponds to an FVA that accounts only for the re-hypothecation of the variation margin received on hedges, but ignores the FVA deductions reflecting the possible use of reserve and economic capital as VM. Based on nested simulated paths, we compute CVA, MVA, and FVA\(^{(0)}\) at all nodes of the primary simulation grid. We consider the following Picard iteration in the search for the solution to (28)–(29): \(L^{(0)}=z\), FVA\(^{(0)}\) as in (31), KVA\(^{(0)}=0\) and, for k≥1,
$$ {\begin{aligned} L^{(k)}_{0}&= z~\text{and, for } t\in (0,\bar{\tau}],\\ d L^{(k)}_{t} &= {d\text{CA}^{(k-1)}_{t} } - r_{t}\text{CA}^{(k-1)}_{t} dt+\sum_{i} J_{\tau_{i}}(1-R_{i}) \left(Q^{i}_{\tau_{i}} - \text{RIM}^{i}_{\tau_{i}}\right)^{+} \boldsymbol\delta_{\tau_{i}}(dt) \\ &\quad + \lambda_{t}\left(\sum_{i} J^{i}_{t} P^{i}_{t} -\max\left({\text{ES}}_{t} \left(L^{(k-1)}\right), \text{KVA}^{(k-1)}_{t}\right) - {\text{CA}^{(k-1)}_{t}}\right)^{+} dt\\ &\quad+ \bar{\lambda}_{t} \sum_{i} J^{i}_{t} \text{PIM}^{i}_{t} dt\\ \text{CA}^{(k)}_{t}&=\text{CVA}_{t}+\text{{FVA}}^{(k)}_{t}+\text{MVA}_{t}, \text{where } {\text{FVA}}^{(k)}_{t}\\ &=\mathbb{E}_{t} \int_{t}^{\bar{\tau}}\!\beta_{t}^{-1}\beta_{s} \lambda_{s}\!\left(\!\sum_{i }J^{i}_{s} P^{i}_{s} -\max\!\left(\!{\text{ES}}_{s} \!\left(\!L^{(k)}\!\right)\!, \text{KVA}^{(k-1)}_{s}\!\right) -{\text{CA}^{(k-1)}_{s}}\!\right)^{\!+}\! ds \\ \text{{KVA}}^{(k)}_{t}&= h \mathbb{E}_{t} \int_{t}^{\bar{\tau}} e^{-\int_{t}^{s} (r_{u} +h) du} \max\left({\text{ES}}_{s} \left(L^{(k)}\right), \text{KVA}^{(k-1)}_{s}\right) ds. \end{aligned}} $$
The \(\mathcal {L}_{2}\times \mathcal {S}_{2}\times \mathcal {S}_{2}\) convergence of \((L^{(k)},\text{CA}^{(k)},\text{KVA}^{(k)})\) to the solution (L,CA,KVA) of (28)–(29) and (7)–(9) can be established as in Crépey et al. (2017).
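To make the structure of one sweep of (32) concrete, here is a stripped-down sketch of the FVA\(^{(k)}\) update along a single simulated path, with the conditional expectation replaced by a plain backward accumulation (in the actual algorithm this role is played by the nested simulation); all names are illustrative:

```python
import numpy as np

def fva_update_one_path(t, beta, lam, sumJP, es_prev, kva_prev, ca_prev):
    """Backward left-point discretization of the FVA^{(k)} line of (32):
    FVA_{t_i} ≈ (beta_{t_{i+1}}/beta_{t_i}) FVA_{t_{i+1}}
                + lam_{t_i} (sum_i J^i P^i - max(ES, KVA) - CA)^+ dt."""
    n = len(t)
    fva = np.zeros(n)
    for i in range(n - 2, -1, -1):
        dt = t[i + 1] - t[i]
        ec = max(es_prev[i], kva_prev[i])
        cost = lam[i] * max(sumJP[i] - ec - ca_prev[i], 0.0)
        fva[i] = (beta[i + 1] / beta[i]) * fva[i + 1] + cost * dt
    return fva
```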
Numerically, one iterates (32) as many times as is required to reach a fixed point within a preset accuracy. In the case studies we considered, one iteration (k=1) was found sufficient. In other words, the KVA is computed based on the linear formula (7), using \(\text{ES}_{s}(L^{(1)})\) instead of \(\text{EC}_{s}(L)\) there. A refined FVA is obtained as a value \(\text {FVA}_{0} \approx \text {FVA}^{(1)}_{0}\) accounting for the use of reserve capital CA and economic capital EC as VM. A second iteration did not bring significant change, as:
In (28)–(29), the FVA feeds into \(\text{EC}_{t}(L)\) only through the FVA volatility in L, whereas EC feeds into the FVA through a capital term which is typically not FVA dominated.
In (7)–(9), in most cases, as in Fig. 2, we have that EC=ES. This equality only stops holding when the hurdle rate h is very large and the term structure of EC starts out very small and has a sharp peak in a few years, which is quite unusual for a portfolio held on a run-off basis, as considered in XVA computations, which tends to amortize in time.
It could be that particular portfolios and parameter choices would necessitate two or more iterations. We did not encounter such situations and did not try to build artificial ones.
However, going even once through (32) necessitates the conditional risk measure simulation of \(\text{ES}_{t}(L^{(1)})\). On realistically large portfolios, some approximation is required for the sake of tractability. Namely, the simulated paths of \(L^{(1)}\) are used for inferring a deterministic term structure
$$ \begin{aligned} {\text{ES}}_{(1)}(t)\approx {\text{ES}_{t}} \left(L^{(1)}\right) \end{aligned} $$
of economic capital, obtained by projecting in time instead of conditioning with respect to \(\mathfrak {G}_{t}\) in ES, i.e. taking the 97.5% unconditional expected shortfall of \(\int _{t}^{t+1}\beta _{t}^{-1} \beta _{u} dL_{u}\) (instead of the conditional expected shortfall in (8)). Simulating the full-fledged conditional expected shortfall process would involve not only nested, but doubly-nested Monte Carlo simulations, because of the conditional one-year-ahead CA\(^{(0)}\) fluctuations that are part of the conditional one-year-ahead fluctuations of the loss process \(L^{(1)}\).
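For reference, the deterministic term structure (33) can be extracted from the primary-scenario paths of L in a few lines; a sketch in which discounting inside the one-year window is omitted for brevity and all names are illustrative:

```python
import numpy as np

def es_term_structure(L_paths, times, alpha=0.975, horizon=1.0):
    """Unconditional 97.5% expected shortfall of the one-year-ahead increments
    of the simulated loss process L, as in (33).
    L_paths: (n_paths, n_times) array of simulated paths, times: grid in years."""
    n_paths, n_times = L_paths.shape
    es = np.zeros(n_times)
    for i in range(n_times):
        j = min(int(np.searchsorted(times, times[i] + horizon)), n_times - 1)
        incr = L_paths[:, j] - L_paths[:, i]
        var = np.quantile(incr, alpha)
        tail = incr[incr >= var]
        es[i] = tail.mean() if tail.size else var
    return es
```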
Note that, if a corporate holds a bank payable, it typically has a desire to close it, receive cash, and restructure the hedge with a par contract (the bank would agree to close the deal as a market maker, charging fees for the new trade). Because of this natural selection, a bank is mostly in the receivables in its derivative business with corporates. Hence, the tail-fluctuations of its loss process L are mostly driven by the counterparty default events rather than by the volatility of the underlying market exposure. As a consequence, working with a deterministic term structure approximation ES(1)(t) of economic capital should be acceptable. If, by exception, the derivative portfolio of a bank is mostly in the payables, then all XVA numbers are small and matter much less anyway.
A similar argument is sometimes used to defend a symmetric FVA (or FVAsym) approach, such as, instead of FVA t in (19):
$$\begin{aligned} &\text{FVA}^{\text{sym}}_{t}= \mathbb{E}_{t}\int_{t}^{\bar{\tau}} {\beta}_{t}^{-1} {\beta}_{s} \tilde{\lambda}_{s} \left(\sum_{i} J^{i}_{s} P^{i}_{s} \right) ds,\, 0\le t\le \bar{\tau},\end{aligned} $$
for some VM blended funding spread \(\tilde {\lambda }_{t}\) (cf. Piterbarg (2010), Burgard and Kjaer (2013), and the discussion in Andersen et al. (2016)). This explicit, linear FVAsym formula can be implemented by standard (non-nested) Monte Carlo simulations. For a suitably chosen blended spread \(\tilde {\lambda }_{t}\), the equation yields reasonable results in the case of a typical bank portfolio dominated by unsecured receivables. However, in the case of a portfolio dominated by unsecured payables, this equation could yield a negative FVA, i.e. an FVA benefit, proportional to the own credit spread of the bank, which is not acceptable from a regulatory point of view.
Asymmetric FVA is more rigorous and has been considered in Albanese and Andersen (2014), Albanese et al. (2015), Crépey (2015), Crépey and Song (2016), Brigo and Pallavicini (2014), Bielecki and Rutkowski (2015), and Bichuch et al. (2016). In this paper, we improve upon these asymmetric FVA models by accounting for the funding source provided by economic capital (cf. (29)).
To illustrate our XVA methodology, we present two XVA case studies on fixed-income and foreign-exchange portfolios. To this end we use the GARCH-type market and credit portfolio models of Albanese et al. (2011) calibrated to the relevant market data.
We use nested simulation with primary and secondary scenarios generated under the risk-neutral measure \(\mathbb {Q}\), calibrated to derivative market data from broker datasets.
All computations are run using a 4-socket server for Monte Carlo simulations, NVIDIA GPUs for algebraic calculations and Global Valuation Esther as simulation software. Using this supercomputer and GPU technology, building the models takes a few minutes, followed by a nested simulation time on the order of an hour for processing a billion scenarios on a realistic banking derivative portfolio.
We assume a hurdle rate h=10.5% and no variation or initial margins on the portfolio (but perfect variation-margining on the portfolio hedge). In particular, the MVA numbers are all equal to zero and hence not reported in the tables below. We take \(Q^{i}=P^{i}\) in all the counterparty exposures (12)–(13).
Toy portfolio results
We first consider a portfolio of ten USD currency fixed-income swaps depicted in Table 1, on the date of 11 January 2016. The nominal of each swap is \(10^{4}\). The swaps are traded with four counterparties \(i=1,\ldots,4\), with a 40% recovery rate and credit curves as in Fig. 1.
Credit curves of the bank and its four counterparties
Table 1 Toy portfolio of swaps (the nominal of each swap is \(\$10^{4}\))
We use 20,000 primary scenarios up to 30 years in the future run on 54 underlying time points with 1000 secondary scenarios starting from each primary simulation node, which amounts to a total of 20,000×54×1000= 1080 million scenarios. In this toy portfolio case the whole calculation takes roughly ten minutes to run, including two to three minutes for building the pre-calibrated market and credit models.
The corresponding XVA results are displayed in the left panel of Table 2. Since the portfolio is not collateralized, its CVA is quite high as compared with the nominal (\(10^{4}\)) of each swap. But its KVA is even higher. Note that given our deterministic term structure approximation (33) for expected shortfalls, the computation of the KVA reduces to a deterministic time-integration, which is why there is no related standard error in Table 2. The number FVA\(^{(0)}_{0}\), accounting only for re-hypothecation of variation margin received on hedges, amounts to $73.87. However, if we consider the additional funding sources due to economic capital and reserve capital, we arrive at an FVA figure of FVA\(^{(1)}_{0}=\$3.87\) only. The FTDCVA and FTDDVA metrics, whose difference corresponds to the fair valuation CR of counterparty risk (valuation of counterparty risk from the point of view of the bank as a whole, cf. "Connection with the Duffie and Huang (1996) formula" section), are also shown for comparison.
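For illustration, the deterministic time-integration of the KVA mentioned above can be sketched as follows. The discounted-integral form used here (the hurdle rate h applied to the ES term structure, discounted at r + h) is a standard linear KVA formula assumed for the sketch, since formula (7) is not restated in this section; the grid and rates are placeholders.

```python
import numpy as np

def kva_from_es(times, es, h=0.105, r=0.0):
    """Deterministic time-integration of a linear KVA formula.

    Assumes KVA_0 = h * int_0^T exp(-(r + h) s) ES(s) ds, a standard linear
    form; the paper's formula (7) may differ in details. times and es hold
    the term structure ES_(1)(t) on a deterministic grid.
    """
    times = np.asarray(times, dtype=float)
    es = np.asarray(es, dtype=float)
    discount = np.exp(-(r + h) * times)
    return h * np.trapz(discount * es, times)   # trapezoidal quadrature
```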
Table 2 Toy portfolio
The right panel of Table 2 shows the incremental XVA results when the fifth (resp. ninth) swap in Table 1 is added to the portfolio last. Note that, under an incremental run-off XVA methodology, introducing financial contracts one after the other in one or the reverse order in the portfolio at time 0 results in the same aggregated incremental FTP amounts for the bank, equal to its "portfolio FTP" (1), but in different FTPs for each given contract and counterparty.
Interestingly enough, all the incremental XVAs of Swap 9 (and also the incremental FVA of Swap 5) are negative. Hence, Swap 9, when added last, is XVA profitable to the portfolio, meaning that a price maker should be ready to enter the swap for less than its mark-to-market value, assuming it is already trading the rest of the portfolio: the corresponding FTP amounts to $−85.82, versus $202.87 in the case of Swap 5.
Large portfolio results
We now consider a representative portfolio with about 2000 counterparties and 100,000 fixed-income trades, including swaps, swaptions, FX options, inflation swaps, and CDS trades.
We use 20,000 primary scenarios up to 50 years in the future run on 100 underlying time points, with 1000 secondary scenarios starting from each primary simulation node, which amounts to a total of two billion scenarios. Using supercomputer and GPU technologies, the whole calculation takes about two hours.
Table 3 shows the XVA results for the large portfolio. The FVA is much smaller than the KVA, especially after accounting for the economic and reserve capital funding sources. The KVA amounts to $275 M, which makes it the largest of the XVA numbers. Figure 2 shows the term structure of economic capital and the term structure of the KVA obtained by a deterministic term structure approximation ES(1) as in (33) for economic capital and by the linear KVA formula (7) with ES(1) instead of EC there. Such a term structure of economic capital, with a starting hump followed by a slow decay after 2 or 3 years, is typical of an investment bank derivative portfolio assumed to be held on a run-off basis until its final maturity, where the bulk of the portfolio consists of trades with 3 to 5y maturity. In relation to the second point made after (32), note that the KVA computed by the linear formula (7) based on this term structure of economic capital is below the latter at all times.
Term structure of economic capital compared with the term structure of KVA
Table 3 XVA values for the large portfolio
The funding needs reduction achieved by EC and RC + RF = CA = CVA + FVA (in the absence of initial margin) is also shown in Fig. 3 by the FVA blended curve. This is the FVA funding curve which, whenever applied to the FVA computed neglecting the impact of economic and reserve capital, gives rise to the same term structure for the forward FVA as the calculation carried out with the CDS curve λ(t) of the bank as the funding curve but accounting for the economic and reserve capital funding sources. This blended curve is often inferred by consensus estimates based on the Markit XVA service. However, here it is computed from the ground up based on full-fledged capital projections.
FVA blended funding curve computed from the ground up based on capital projections
CA :
Contra-assets (or their valuation)
CDS :
Credit default swap
CL :
Contra-liabilities (or their valuation)
CR :
Counterparty risk (or its valuation)
CVA :
Credit valuation adjustment
C V A CL :
Contra-liability component of a unilateral CVA
DVA :
Debt valuation adjustment
EC :
Economic capital
ES :
Expected shortfall at the confidence level 97.5%
FDA :
Funding debt adjustment
FTDCVA :
First-to-default CVA
FTDDVA :
First-to-default DVA
FTP :
Funds transfer price
FVA :
Funding valuation adjustment
IM :
Initial margin (with PIM and RIM for IM posted and received by the bank)
KVA :
Capital valuation adjustment
MDA :
Margin debt adjustment
MtM :
Mark-to-market of a portfolio when all XVAs are ignored
MVA :
Margin valuation adjustment
OIS :
Overnight index swap
RC :
Reserve credit capital
RF :
Reserve funding capital
RM :
Risk margin (or KVA)
SCR :
Shareholder capital at risk
VM :
Variation margin
XVA :
Generic "X" valuation adjustment
Albanese, C, Andersen, L: Accounting for OTC derivatives: Funding adjustments and the re-hypothecation option (2014). ssrn:2482955.
Albanese, C, Andersen, L, Iabichino, S: FVA: Accounting and risk management (2015). Risk Magazine, February 64–68.
Albanese, C, Bellaj, T, Gimonet, G, Pietronero, G: Coherent global market simulations and securitization measures for counterparty credit risk. Quant Finance. 11(1), 1–20 (2011).
Albanese, C, Brigo, D, Oertel, F: Restructuring counterparty credit risk. Int. J. Theor. Appl. Finance. 16(2), 1350010 (29 pages) (2013).
Albanese, C, Crépey, S: XVA analysis from the balance sheet (2017). Working paper available at https://math.maths.univ-evry.fr/crepey. Accessed 7 June 2017.
Andersen, L, Duffie, D, Song, Y: Funding value adjustments (2016). ssrn.2746010.
Armenti, Y, Crépey, S: Central clearing valuation adjustment. SIAM J. Financial Math. 8, 274–313 (2017a).
Armenti, Y, Crépey, S: XVA Metrics for CCP optimisation (2017b). Working paper available at https://math.maths.univ-evry.fr/crepey. Accessed 13 June 2017.
Bichuch, M, Capponi, A, Sturm, S: Arbitrage-free XVA. Mathematical Finance (2016). Forthcoming (preprint version available at ssrn.2820257).
Bielecki, T, Rutkowski, M: Credit risk modelling: Intensity based approach. In: Jouini, E, Cvitanic, J, Musiela, M (eds.)Handbook in Mathematical Finance: Option Pricing, Interest Rates and Risk Management, pp. 399–457. Cambridge University Press, Cambridge (2001).
Bielecki, T, Rutkowski, M: Credit Risk: Modeling, Valuation and Hedging. Springer Finance, Berlin (2002).
Bielecki, TR, Rutkowski, M: Valuation and hedging of contracts with funding costs and collateralization. SIAM J. Financial Math. 6, 594–655 (2015).
Brigo, D, Capponi, A: Bilateral counterparty risk with application to CDSs (2008). arXiv:0812.3705, short version published later in 2010 in Risk Magazine.
Brigo, D, Pallavicini, A: Nonlinear consistent valuation of CCP cleared or CSA bilateral trades with initial margins under credit, funding and wrong-way risks. J. Financial Eng. 1, 1–60 (2014).
Burgard, C, Kjaer, M: Funding Strategies, Funding Costs. Risk Magazine, December, 82–87 (2013).
Collin-Dufresne, P, Goldstein, R, Hugonnier, J: A general formula for valuing defaultable securities. Econometrica. 72(5), 1377–1407 (2004).
Crépey, S: Bilateral counterparty risk under funding constraints. Part I: Pricing, followed by Part II: CVA. Math. Finance. 25(1), 1–50 (2015). First published online on 12 December 2012.
Crépey, S, Élie, R, Sabbagh, W: When capital is a funding source: The XVA Anticipated BSDEs (2017). Working paper available at https://math.maths.univ-evry.fr/crepey.
Crépey, S, Song, S: Counterparty risk and funding: Immersion and beyond. Finance Stochast. 20(4), 901–930 (2016).
Duffie, D, Huang, M: Swap rates and credit quality. J. Finance. 51, 921–950 (1996).
Duffie, D, Schroder, M, Skiadas, C: Recursive valuation of defaultable securities and the timing of resolution of uncertainty. Ann. Appl. Probab. 6(4), 1075–1090 (1996).
Kruse, T, Popier, A: BSDEs with monotone generator driven by Brownian and Poisson noises in a general filtration. Stochastics: Int. J. Probab. Stochast. Process. 88(4), 491–539 (2016).
Piterbarg, V: Funding beyond discounting: collateral agreements and derivatives pricing. Risk Mag. 2, 97–102 (2010).
Pykhtin, M: Model foundations of the Basel III standardised CVA charge. Risk Magazine (2012).
The research of Stéphane Crépey benefited from the support of the "Chair Markets in Transition," Fédération Bancaire Française, of the ANR project 11-LABX-0019 and from the EIF grant "Collateral management in centrally cleared trading."
IMEX, London, UK
Claudio Albanese & Simone Caenazzo
CASS School of Business, London, UK
Claudio Albanese
LaMME, Univ Evry, CNRS, Université Paris-Saclay, Evry, 91037, France
Stéphane Crépey
Simone Caenazzo
Correspondence to Stéphane Crépey.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Albanese, C., Caenazzo, S. & Crépey, S. Credit, funding, margin, and capital valuation adjustments for bilateral portfolios. Probab Uncertain Quant Risk 2, 7 (2017). https://doi.org/10.1186/s41546-017-0019-2
Received: 31 January 2017
Accepted: 29 May 2017
Counterparty risk
Credit valuation adjustment (CVA)
Cost of funding variation margin (FVA)
Cost of funding initial margin (MVA)
Cost of capital (KVA)
Mathematics Subject Classification:
Astronomy & Astrophysics, 448(3), 1235–1245 (2006)
Cited article:
The PM2000 Bordeaux proper motion catalogue
C. Ducourant, J. F. Le Campion, M. Rapaport, J. I. B. Camargo, C. Soubiran, J. P. Périe, R. Teixeira, G. Daigne, A. Triaud, Y. Réquième
DOI: https://doi.org/10.1051/0004-6361:20053220
This article has been cited by the following article(s):
HD 69686: A MYSTERIOUS HIGH VELOCITY B STAR
Wenjin Huang, D. R. Gies and M. V. McSwain
The Astrophysical Journal 703 (1) 81 (2009)
DOI: 10.1088/0004-637X/703/1/81
The Disk Population of the Chamaeleon I Star‐forming Region
K. L. Luhman, L. E. Allen, P. R. Allen, et al.
The Astrophysical Journal 675 (2) 1375 (2008)
DOI: 10.1086/527347
The Potsdam plates of the Carte du Ciel project: I. Present inventory and plate catalogue
K. Tsvetkova, M. Tsvetkov, P. Böhm, M. Steinmetz and W.R. Dick
Astronomische Nachrichten 330 (8) 878 (2009)
DOI: 10.1002/asna.200911245
New white dwarfs in the Hyades
E. Schilbach and S. Röser
Astronomy & Astrophysics 537 A129 (2012)
DISCOVERY OF A WIDE PLANETARY-MASS COMPANION TO THE YOUNG M3 STAR GU PSC
Marie-Eve Naud, Étienne Artigau, Lison Malo, et al.
The Astrophysical Journal 787 (1) 5 (2014)
DOI: 10.1088/0004-637X/787/1/5
On the nature of the purported common proper motion companions to the exoplanet host star 51 Peg
E.E. Mamajek
Proper motion and densification of the International Celestial Reference Frame in the direction of the Galactic bulge
R. Teixeira, P. A. B. Galli, P. Benevides-Soares, et al.
Astronomy & Astrophysics 534 A91 (2011)
GG Tauri A: dark shadows on the ringworld
R. Brauer, E. Pantin, E. Di Folco, et al.
DIVISION I / COMMISSION 8 / WORKING GROUP ASTROGRAPHIC CATALOGUE AND CARTE DU CIEL PLATES
Beatrice Bucciarelli, Alain Fresnau, Carlos Abad, et al.
Proceedings of the International Astronomical Union 3 (T26B) 95 (2007)
Kinematic parameters and membership lists of open clusters in the Bordeaux Carte du Ciel zone
Alberto Krone-Martins, Caroline Soubiran, Ramachrisna Teixeira and Christine Ducourant
Proceedings of the International Astronomical Union 5 (S266) 442 (2009)
Dense optical reference frames: UCAC and URAT
N. Zacharias
The CdC2000 Bordeaux Carte du Ciel catalogue (+11° ≤ δ ≤ +18°)
M. Rapaport, C. Ducourant, J. F. Le Campion, et al.
Astronomy & Astrophysics 449 (1) 435 (2006)
The implementation of binned Kernel density estimation to determine open clusters' proper motions: validation of the method
R. Priyatikanto and M. I. Arifyanto
Astrophysics and Space Science 355 (1) 161 (2015)
DOI: 10.1007/s10509-014-2137-y
Sydney observatory Galactic survey
A. Fresneau, A. E. Vaughan and R. W. Argyle
Astronomy & Astrophysics 469 (3) 1221 (2007)
Galactic kinematics with RAVE data
L. Veltz, O. Bienaymé, K. C. Freeman, et al.
An analysis of Bordeaux meridian transit circle observations of planets and satellites (1997–2007)
J. E. Arlot, G. Dourneau and J. F. Le Campion
A variability sample catalogue selected from the Sydney Observatory Galactic Survey
A. Fresneau and W. H. Osborn
THE FIRST U.S. NAVAL OBSERVATORY ROBOTIC ASTROMETRIC TELESCOPE CATALOG
N. Zacharias, C. Finch, J. Subasavage, et al.
The Astronomical Journal 150 (4) 101 (2015)
DOI: 10.1088/0004-6256/150/4/101
Galactic planetary nebulae and their central stars
F. Kerber, R. P. Mignani, R. L. Smart and A. Wicenec
Catalogue of variable stars in open cluster fields
M. Zejda, E. Paunzen, B. Baumann, Z. Mikulášek and J. Liška
Evolved star water maser cloud size determined by star size
A. M. S. Richards, S. Etoka, M. D. Gray, et al.
The influence of radio-extended structures on offsets between the optical and VLBI positions of sources in the ICRF2
J. I. B. Camargo, A. H. Andrei, M. Assafin, R. Vieira-Martins and D. N. da Silva Neto
THE THIRD US NAVAL OBSERVATORY CCD ASTROGRAPH CATALOG (UCAC3)
N. Zacharias, C. Finch, T. Girard, et al.
The Astronomical Journal 139 (6) 2184 (2010)
DOI: 10.1088/0004-6256/139/6/2184
Photometric and spectroscopic study of the intermediate-age open cluster NGC 2355
P. Donati, A. Bragaglia, E. Carretta, et al.
Monthly Notices of the Royal Astronomical Society 453 (4) 4185 (2015)
DOI: 10.1093/mnras/stv1914
The triple system KR Comae Berenices
P. Zasche and R. Uhlář
Astronomy and Astrophysics 519 A78 (2010)
Astrophysics in 2006
Virginia Trimble, Markus J. Aschwanden and Carl J. Hansen
Space Science Reviews 132 (1) 1 (2007)
Data mining for dwarf novae in SDSS, GALEX and astrometric catalogues
Patrick Wils, Boris T. Gänsicke, Andrew J. Drake and John Southworth
Monthly Notices of the Royal Astronomical Society 402 (1) 436 (2010)
Kinematic parameters and membership probabilities of open clusters in the Bordeaux PM2000 catalogue
A. Krone-Martins, C. Soubiran, C. Ducourant, R. Teixeira and J. F. Le Campion
Astronomy and Astrophysics 516 A3 (2010)
Discovery of the first wide L dwarf + giant binary system and eight other ultracool dwarfs in wide binaries
Z. H. Zhang, D. J. Pinfield, A. C. Day-Jones, et al.
Monthly Notices of the Royal Astronomical Society (2010)
A new benchmark T8-9 brown dwarf and a couple of new mid-T dwarfs from the UKIDSS DR5+ LAS★
B. Goldman, S. Marsat, T. Henning, C. Clemens and J. Greiner
Monthly Notices of the Royal Astronomical Society no (2010)
The CPMDS catalogue of common proper motion double stars in the Bordeaux Carte du Ciel zone
P. Gavras, D. Sinachopoulos, J. F. Le Campion and C. Ducourant
Clusters and mirages: cataloguing stellar aggregates in the Milky Way
T. Cantat-Gaudin and F. Anders
Why does helium have such a low van der Waals coefficient but not as low a Lennard-Jones coefficient?
Helium has the lowest polarizability of any atom, and therefore ought to have the smallest London dispersion force. Indeed, if you look at the van der Waals constant of helium, you find that it has the lowest value of $a$ by a lot. $a$ roughly corresponds to the amount of attraction between particles.
The long-distance force between neutral particles can be roughly modelled by the Lennard-Jones potential. Even though helium should have the weakest $r^{-6}$ term of any atom, if you look up the experimental values of the Lennard-Jones coefficients, you find that they're the same for helium as they are for the rest of the noble gases.
What gives? Why is its Lennard-Jones potential roughly the same when its London dispersion force is so weak? (Perhaps my source is just wrong?)
physical-chemistry thermodynamics bond statistical-mechanics
$\begingroup$ Can you post your source? When I look up the values for $\sigma$ for the LJ potential, I find helium has the smallest of everything in the table, and is always smaller than the other noble gases. Keep in mind that a small change in $\sigma$ can produce a large change in the attraction because the attraction goes as $\sigma^6$ and the repulsion as $\sigma^{12}$. $\endgroup$
$\begingroup$ Even your linked source shows that helium has a smaller value of $\sigma$. $\endgroup$
Lennard-Jones Parameters
To quote the original source of those Lennard-Jones parameter values (Gordon and Kim),
For all the systems involving atoms larger than helium, the predictions appear quite reliable... Our approach thus provides the first successful prediction of the intermolecular potentials for the rare gases (except helium)
So I would not put much faith in the He-He LJ parameters in the table you linked. The analysis was good in 1972, but even still wasn't great for helium! The authors go on to state that
Polarizing (induction) forces are not included
which, arguably, is the strongest evidence not to trust the LJ parameters too much. The Lennard-Jones (6-12) potential's strongest theoretical justification is probably that dispersive forces can be shown to approximately follow $r^{-6}$ dependence. If dispersion's not even included, then that's a big red flag since the $r^{-12}$ part has no theoretical justification. It seems they just tried to fit the LJ potential to their calculations, probably for reasons of comparison with previous results. The experimental value that they compare their calculations to is $16.5 \times 10^{-16}$ ergs, which would be 1.0 in the table instead of 3.9, and thus would continue the expected trend.
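To make the sensitivity argument concrete, here is a short script that evaluates the 6-12 potential $V(r) = 4\varepsilon[(\sigma/r)^{12} - (\sigma/r)^6]$ for helium and argon using commonly quoted textbook parameters (not the values from the table linked in the question, which this answer argues are unreliable for helium). It shows the helium well sitting roughly an order of magnitude shallower than argon's, as the dispersion argument predicts.

```python
import numpy as np

def lj_potential(r, sigma, eps):
    """Lennard-Jones 6-12 potential: 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    x = (sigma / r) ** 6
    return 4.0 * eps * (x * x - x)

# Commonly quoted textbook parameters (sigma in angstrom, eps/k_B in kelvin);
# treat them as illustrative rather than authoritative.
params = {"He": (2.556, 10.22), "Ar": (3.405, 119.8)}

for atom, (sigma, eps_k) in params.items():
    r_min = 2 ** (1 / 6) * sigma                  # location of the potential minimum
    depth = -lj_potential(r_min, sigma, eps_k)    # equals eps by construction
    print(f"{atom}: r_min = {r_min:.2f} angstrom, well depth = {depth:.1f} K")
```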
van der Waals parameters
I will just be brief and say that the quantum gases, including helium, are not amenable to the use of the classical van der Waals equation. Helium's critical temperature is 5.2 kelvin, and there is a bunch of quantum stuff going on in that regime. You just can't compare the quantum gas parameters with those of fluids that behave classically; it's quantum apples and classical oranges.
Anger Density
Which Industries Tend to Have the Most Inventory Turnover?
By Evan Tarver
The industries that tend to have the most inventory turnover are those with high volume and low margins, such as retail, grocery and clothing stores.
Inventory turnover measures the rate at which a company purchases and resells its products (or inventory) to its customers and consumers. Low inventory turnover can indicate bad management, poor purchasing practices or selling techniques, faulty decision-making, and the build-up of inferior or obsolete goods. As a result, investors usually don't like to see a low inventory turnover ratio in a company; it suggests the business is in, or headed for, trouble.
Most companies consider a turnover ratio between six and 12 to be desirable. However, it's important to realize that low and high are only relative to the company's particular sector or industry. No specific number exists to signify what constitutes a good or bad inventory turnover ratio across the board; desirable ratios vary from sector to sector (and even sub-sectors). Investors should always compare a particular company's inventory turnover to that of its sector, and even its sub-sector, before determining whether it's low or high.
Calculating Inventory Turnover
There are a couple of ways to calculate inventory turnover:
$$\text{Inventory Turnover} = \frac{\text{Sales}}{\text{Inventory}}$$

$$\text{Inventory Turnover} = \frac{\text{COGS}}{\text{Average Value of Inventory}}\,, \qquad \text{where COGS = cost of goods sold}$$
For example, using the first method: If a company has an annual inventory amount of $100,000 worth of goods and yearly sales of $1 million, its annual inventory turnover is 10. This means that over the course of the year, the company effectively replenished its inventory 10 times.
Using the second method, which many analysts prefer: If a company has an annual average inventory value of $100,000 and the cost of goods sold by that company was $850,000, its annual inventory turnover is 8.5; in other words, it has replaced all of its inventory eight and one-half times in a year.
Many analysts prefer the cost of goods method, deeming it to be more accurate because it reflects what items in inventory actually cost a company.
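Both definitions translate directly into code. The short sketch below simply recomputes the two worked examples above; the function names are illustrative.

```python
def turnover_by_sales(sales, inventory):
    """Inventory turnover as sales divided by inventory."""
    return sales / inventory

def turnover_by_cogs(cogs, average_inventory):
    """Inventory turnover as cost of goods sold divided by average inventory value."""
    return cogs / average_inventory

# Worked examples from the text above.
print(turnover_by_sales(1_000_000, 100_000))  # 10.0 -> inventory replenished 10 times a year
print(turnover_by_cogs(850_000, 100_000))     # 8.5  -> inventory replaced about 8.5 times a year
```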
Inventory Turnover Industry Example: Grocery Stores
In sectors such as the grocery store industry, it is normal to have very high inventory turnover. According to CSIMarket, an independent financial research firm, the grocery store industry had an average inventory turnover of 13.56 (using the cost of goods method) for 2018, which means the average grocery store replenishes its entire inventory over 13 times per year.
This high inventory turnover is largely due to the fact that grocery stores need to offset lower per-unit profits with higher unit sales volume. These types of low-margin industries have proportionately higher sales than inventory costs for the year.
In addition to high volume/low margin industries needing a higher inventory turnover to remain cash-flow positive, a high inventory turnover can also signal an industry as a whole is enjoying strong sales or has very efficient operations. It is also a signal to investors that the sector is a less risky prospect since companies within it replenish cash quickly and don't get stuck with goods that can become obsolete or outdated.
Agriculture & Food Security
Field efficacy of genetically modified FK 95 Bollgard II cotton for control of bollworms, Lepidoptera, in Ghana
Mumuni Abudulai ORCID: orcid.org/0000-0003-1710-83871,
Emmanuel Boachie Chamba1,
Jerry Asalma Nboyine1,
Ramson Adombilla1,
Iddrisu Yahaya1,
Ahmed Seidu1 &
Foster Kangben1
Agriculture & Food Security volume 7, Article number: 81 (2018) Cite this article
Cotton (Gossypium hirsutum L.) cultivation in Ghana is constrained by bollworms that damage squares (flower buds) and developing bolls, resulting in loss in seed cotton yield. Control of these insects is heavily dependent on insecticides that are costly and also pose health and environmental risks to users. Potential alternative control strategies have focused on using cotton genetically modified with the soil-borne bacterium Bacillus thuringiensis Berliner (Bt) that confer resistance against these pests. This study evaluated the field efficacy of the genetically modified FK 95 Bollgard II (FK 95 BG II) cotton for control of bollworms in Ghana.
Results showed that bollworm densities in the FK 95 BG II cotton were lower compared with those in the FK 37 conventional cotton. However, populations of the natural enemies, ladybird beetles Coccinella undecimpunctata L and lacewings Chrysoperla carnea [Stephens] were higher in the Bt compared with the conventional technology of pest management. On average, seed cotton yields were higher in the FK 95 BG II compared to those in the FK 37. Net profit and cost–benefit ratios also were higher for the Bt technology compared with the conventional practice, indicating that farmers would benefit more if they adopt the Bt technology of cotton pest management.
The Bt cotton technology of pest management was more effective and economical than the conventional practice of wholly relying on insecticides and was a better management option for bollworm in cotton in the savanna ecology of Ghana.
Cotton (Gossypium hirsutum L.) is an important and major cash crop in northern Ghana, where it is widely cultivated because of its adaptability to the climatic conditions of the area. It serves as a source of lint for the textile and garment industries in Ghana. Moreover, besides serving as a source of income and employment to farmers and their families, its export earns foreign exchange for the country. However, in spite of the high potential for its cultivation in Ghana, seed cotton yields on farmers' fields remain low, averaging 500 kg ha−1, compared to about 2000 kg ha−1 in other cotton-producing countries [1, 2]. The low productivity is attributed to several factors, key among them being the problem of insect pests, particularly bollworms. A substantial part of the cotton production budget is allocated to controlling insect pests [3].
The crop is attacked in the field by numerous insect pests, with the bollworm complex being the most important in Ghana [2, 3]. This complex comprises the American bollworm, Helicoverpa armigera (Hubner) (Noctuidae); spiny bollworm, Earias spp. (Nolidae); pink bollworm, Pectinophora gossypiella Saunders (Gelechiidae); Sudan bollworm, Diparopsis watersii (Rothschild) (Noctuidae); and the false codling moth, Thaumatotibia (= Cryptophlebia) leucotreta Meyrick (Tortricidae) [2, 3]. The larval stages of these insects damage plant terminals and also chew into squares (flower buds) and developing bolls, resulting in abscission of these floral parts and loss in seed cotton yield. Complete yield loss due to these insects can occur in unprotected or poorly protected fields [4].
Conventional breeding for resistance against these pests has not yielded the desired results. Consequently, they are managed mainly with insecticides in Ghana and many other cotton-growing areas in West Africa [2,3,4]. Globally, cotton is responsible for about 16–25% of all chemical insecticides used in agriculture, which is more than what is used for any other single crop [6, 7, 10]. Although most of these insecticides are effective for control, they pose health hazards to farmers who use them and also contaminate the environment. Moreover, they are expensive and their indiscriminate use can cause emergence of resistant biotypes in insect populations resulting in control failures such as those reported for pyrethroid insecticide use in parts of West Africa [5, 8, 9]. In an effort to scale down on insecticide use, amidst fears for environmental contamination and insect resistance build-up, genetic modification of plants for resistance to insect pests has been found to be a better and environmentally friendlier alternative [10]. Genetic modification with the soil-borne bacterium Bacillus thuringiensis (Bt) Berliner has been used to control insect pests in several crops [11,12,13]. Genetically modified cotton contains the Bt gene(s) that produce(s) toxins or bio-pesticides inside the plant to offer protection against insects. It has specific activity against lepidopteran insects such as the bollworm complex due to specific receptors and conditions in the caterpillar's gut that allow activation of the Bt crystal proteins [14, 15]. The objective of the present study was to evaluate the field efficacy of the genetically modified Bollgard II (BG II) cotton, FK 95 BG II for control of cotton bollworms in Ghana.
Description of the experimental area
Field studies were conducted on-station at the research farm of the Council for Scientific and Industrial Research-Savanna Agricultural Research Institute (CSIR-SARI) in Nyankpala (9°42′N, 0°92′W) and on-farm on farmers' fields at five different locations (9°23′N, 0°07′W–10°50′N, 1°58′W) in northern Ghana. The experimental area is located in the Guinea Savanna zone which is characterized by grassland vegetation interspersed with few trees. Soils of the experimental fields were generally of sandy loam texture with a pH of 4.5–5.5 and organic matter content of 0.89–0.99%. Rainfall of the area is unimodal and falls from May to October followed by a long dry period from November to April. The annual rainfall ranges from 900 mm to 1200 mm and temperature from 21 to 40 °C.
Planting material and land preparation
Seeds of the FK 95 BG II cotton which carries the genes coding for Cry1Ac and Cry2Ab, and the FK 37 conventional cotton were obtained from Monsanto, South Africa and Burkina Faso, respectively. Seeds of the two cotton varieties were planted simultaneously in comparative studies. The land was tractor-ploughed and levelled manually with the hand hoe before planting.
Experimental design and treatments
The on-station experiment consisted of four treatments (Table 1) arranged in a randomized complete block design (RCBD) with four replications. Each treatment plot had eight (8) rows 10 m long with inter- and intra-row spacing of 0.75 m and 0.40 m, respectively. Blocks and plots were separated by 5-m unplanted alleys to minimize insecticide drift to unsprayed plots. The experimental area was surrounded by a 12-m border of conventional cotton that served as refugia for dilution of resistance genes to the Bt toxins [16]. Following the standard practice, six insecticide sprays were made at 2-week intervals beginning at 35 days after planting (DAP) [17] in designated plots that were sprayed for both bollworms and sucking insects (Table 1). The conventional cotton refugia also were protected according to the standard practice of six sprays. The insecticide Tihan (spirotetramat 75 g/l + flubendiamide 100 g/l) (BCS—Crop Protection, Accra, Ghana) was applied for the first three sprays, followed by Thunder (imidacloprid 100 g/l + betacyfluthrin 45 g/l) (BCS—Crop Protection, Accra, Ghana) for the last three sprays. The application rate was 0.035 kg a.i. ha−1 for Tihan and 0.029 kg a.i. ha−1 for Thunder. For plots that were sprayed only for sucking insects, mainly cotton stainers, only two applications were made, with Tihan alternated with Thunder at a 2-week interval beginning at boll opening at 90 DAP. The compound fertilizer NPK (23-10-5) was used for basal application at 2 weeks after planting at the rate of 250 kg ha−1, while sulphate of ammonia was applied as a top dressing at 5 weeks after planting at the rate of 125 kg ha−1. Pendimethalin pre-emergence herbicide (Stomp 440 Herbicide, BASF Corp., Victoria, Australia) was applied within 2 days after planting at 1.0 kg a.i. ha−1. This was followed by hand weeding of fields at 2 and 4 weeks after planting.
Table 1 Cotton treatment and descriptions of insecticide application
The on-farm experiment consisted of two treatments, FK 95 BG II and FK 37 conventional cotton, arranged in a RCBD and replicated at five sites. At each site, the two treatments were planted adjacent to each other, each on a 0.25 ha plot. A spacing of 5-m fallow was maintained between and around plots and a 12-m border of conventional cotton surrounded the trial as refugia. The FK 37 conventional cotton was protected against insects according to the standard practice of six sprays from 35 DAP as described earlier in the on-station experiment. For the FK 95 BG II cotton, only two sprays were made and targeted against sucking insects as described earlier. All the agronomic practices were carried out as detailed for the on-station experiment.
In the on-station experiment, bollworm infestation and damage to squares and bolls were assessed on 6 randomly selected plants from the 6 inner rows of each plot. Counts were taken weekly from first square to cut-out. To develop a data package on the impact of Bt cotton on non-target organisms (NTOs) such as pollinators and predators, counts of these insects/arthropods were also made at the same time on each of the 6 plants that were assessed for bollworms in each plot. Due to logistical constraints, weekly sampling of insects was not carried out in the on-farm experiment.
At maturity, seed cotton yield in the on-station experiment was determined from the harvests from the 6 inner rows of plots less 2 m on both ends of the rows to reduce border effects. In the on-farm experiment, the entire plot was harvested and seed cotton yield recorded. Yield loss due to bollworms was calculated using the formula:
$$\% {\text{Yield loss}} = \frac{{{\text{TB}} - {\text{UB}}}}{\text{TB}} \times 100,$$
where TB is the total number of bolls on plants and UB is the number of undamaged bolls.
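As a simple numerical illustration of this formula (the boll counts below are hypothetical, not measured values from the trial):

```python
def percent_yield_loss(total_bolls, undamaged_bolls):
    """Percent yield loss = (TB - UB) / TB * 100."""
    return (total_bolls - undamaged_bolls) / total_bolls * 100.0

# Hypothetical plant carrying 40 bolls, of which 34 are undamaged.
print(percent_yield_loss(40, 34))  # 15.0 (% of bolls lost to bollworm damage)
```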
The data from the two experiments were subjected to analysis of variance separately using the SAS statistical package [18]. Count data were subjected to square-root transformation before analysis. Means were separated using Fischer's protected LSD test at P < 0.05.
Partial budget analysis
Partial budget analysis was used to assess the cost–benefit ratio of treatments. The cost–benefit ratio was calculated from the seed cotton yield of each treatment and the cost of insecticide treatments. It was used to assess the economic viability of the Bt technology compared to the conventional practice of managing bollworms in cotton. Government-approved price for seed cotton was used to determine the value of yield of Bt cotton over that of conventional control, while market prices of insecticides and insecticide spray charges were used to compute variable cost of production. These calculations were based on the assumption that the market price of Bt seed cotton was equivalent to that of conventional seed cotton. The benefit or gross margin over control was calculated using the formula below:
$${\text{Gross margin over control}} = {\text{value of yield over control}} = P_{\text{market}} \times \left( {Q_{\text{bt}} - Q_{\text{control}} } \right),$$
where \(P_{\text{market}}\) is the market price of seed cotton/kilogram, Qbt is the yield of Bt seed cotton (kg/ha), and Qcontrol is the yield of the conventional cotton control (kg/ha).
The cost–benefit ratio was calculated as the ratio of the gross margin over control to the variable cost of the insecticide treatments:
$$ {\text{Cost–benefit ratio}} = \frac{{\text{Gross margin over control}}}{{\text{Variable cost of insecticide treatments}}}. $$
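A minimal sketch of the partial budget calculation follows. The seed-cotton price and spray cost used here are hypothetical placeholders (only the yield figures echo the on-station averages reported below), and the ratio is computed under the interpretation stated above, i.e. the value of the yield gain per unit of money spent on insecticide treatment.

```python
def gross_margin_over_control(price_per_kg, yield_bt, yield_control):
    """Value of the Bt yield gain over the conventional control (per hectare)."""
    return price_per_kg * (yield_bt - yield_control)

def cost_benefit_ratio(gross_margin, variable_cost):
    """Return per unit of money invested in insecticide treatment."""
    return gross_margin / variable_cost

# Hypothetical inputs: price in GHS/kg, yields in kg/ha, spray cost in GHS/ha.
margin = gross_margin_over_control(price_per_kg=1.0, yield_bt=1415.4, yield_control=829.6)
print(margin, cost_benefit_ratio(margin, variable_cost=180.0))
```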
Effect on bollworm population densities
Bollworm population densities were lowest on the FK 95 BG II cotton sprayed for both sucking insects and bollworms or for sucking insects only and highest on the FK 37 conventional cotton sprayed for sucking insects only (Table 2). Bollworm densities in FK 95 BG II sprayed for sucking insects only were similar to those in FK 37 given insecticide protection for both sucking insect and bollworms. Averaged across spraying regimes for each cotton variety, bollworm densities were 0.06 larvae/plant for FK 95 BG II compared with 0.50 larvae/plant for FK 37 conventional cotton. The low densities of bollworms in the Bt cotton compared with the conventional cotton are consistent with reports that Bt cotton is effective for control of cotton bollworms [1, 19].
Table 2 Effect of treatments on bollworm densities, seed cotton yield and yield loss
Effect on natural enemies of insects
The different treatments in the on-station experiment significantly affected the abundance of the natural enemies, ladybird beetles Coccinella undecimpunctata L. and lacewings Chrysoperla carnea [Stephens] (Table 3). Generally, populations of these natural enemies were higher on both FK 95 BG II and FK 37 conventional plots that were sprayed twice for sucking insects compared with those that were sprayed six times for both sucking insects and bollworms. However, populations of spiders (e.g. Cheiracanthium mildei L. Koch) were not significantly different (P > 0.05) among the treatments.
Table 3 Effect of treatments on mean number of natural enemies per plant
Several workers have found no adverse effects on non-target natural enemies resulting from direct toxicity of Bt crops in the field [20,21,22,23]. Thus, the reduced populations of ladybird beetles and lacewings observed in plots that received multiple insecticide sprays could be attributed to the effect of the insecticides. Li et al. [20] observed that whereas conventionally grown cotton requires more insecticide treatments for bollworm control that generally are toxic to both pests and non-target arthropods, Bt cotton fields often have significantly more non-target arthropods than conventionally grown cotton fields. In a separate study, however, Abudulai et al. [2] did not find any negative effect of the insecticides used in this study to natural enemies.
Effect on seed cotton yield
Seed cotton yield was significantly (P < 0.05) affected by cultivar and spraying regime. In the on-station experiment, the highest yield was recorded in FK 95 BG II sprayed for both sucking insects and bollworms, while the lowest yield was recorded in FK 37 conventional cotton sprayed for sucking insects only (Table 2). Yields in FK 95 BG II sprayed for sucking insects but unsprayed for bollworms were comparable to those in the FK 37 conventional cotton protected against both sucking insects and bollworms. Yields in the FK 95 BG II were more than double those of the FK 37 conventional cotton when both plots were sprayed for sucking insects only. Seed cotton yield was 1415.4 kg ha−1 for the FK 95 BG II cotton compared with 829.6 kg ha−1 for the FK 37 conventional cotton, when averaged across spraying regimes for each cotton variety. This represented a 41% yield increase in the FK 95 BG II Bt cotton over the yield in the FK 37 conventional cotton. The results from the on-farm experiment also showed that yield was significantly greater in the FK 95 BG II Bt cotton compared with the FK 37 conventional cotton (Fig. 1). Yield increased by 19% in the FK 95 BG II over that of the FK 37 cotton. Gouse et al. [10] reported average yield increase above 50% with Bt cotton compared with conventional cotton for smallholders in South Africa. The yield increases observed with the Bt technology in the current study further demonstrated the superiority of the technology to the current conventional practice of wholly relying on insecticide sprays for bollworm control [24, 25].
Comparative efficacy (mean ± SE) of Bt technology and conventional management practice of bollworms on seed cotton yield in northern Ghana. Plots in the Bt technology were sprayed twice for control of sucking insects only, while plots in the conventional practice were sprayed six times for control of both sucking insects and bollworms
As expected, yield loss was highest (P < 0.05) in FK 37 sprayed for sucking insects only and lowest in FK 95 BG II sprayed for both sucking insects and bollworms, which was not lower than FK 95 BG II protected against sucking insects only (Table 2). The results also showed that yield was negatively correlated (r = − 0.85170; P < 0.0001) with bollworm densities, while a positive correlation was measured between bollworm densities and yield loss (r = 0.84387; P < 0.0001). Similar observations were made in a previous study [2], which demonstrates further the importance of bollworms infestations in limiting seed cotton yield.
Cost–benefit ratio of treatments
The on-station results showed a higher net profit for the Bt technology than the conventional practice of relying wholly on insecticide protection (Table 4). The cost–benefit ratio was also higher with the Bt technology than the conventional practice, indicating a higher return to investment with the Bt technology than with the conventional practice. For example, a cost–benefit ratio of 1:3.32 for the Bt technology with two sprays showed that a Gh¢1.00 (US$0.22) investment in the Bt technology yielded a return of Gh¢3.32 (US$0.74) compared to the ratio of 1:0.31 for the conventional practice with six sprays which yielded a return of Gh¢0.31 (US$0.07). Similarly, the on-farm results showed that the increased yield of 247.67 kg in the Bt over that of the conventional practice resulted in a net profit of Gh¢81.00 (US$18.00) and a cost–benefit ratio of 1:0.60 (data not shown). These results are consistent with the report that the Bt cotton technology increases profits of farmers [17, 26]. Farmers would therefore benefit more in terms of increased yields and returns to their investment when they adopt the Bt technology to manage bollworms compared with the conventional management with insecticides. Moreover, the associated decreased cost as a result of reduced number of sprays with the Bt cotton has an added advantage of minimizing the risk of pesticide poisoning to farmers [27] and also can compensate for higher cost, if any of Bt cotton seeds.
Table 4 Partial budget analysis for effect of Bt cotton on insecticide application frequency on cotton bollworm
The study showed that seed cotton yields on average were higher with the Bt cotton technology compared with the conventional practice of wholly relying on insecticide sprays for managing bollworms. The positive yield increases translated into higher net profits and cost–benefit ratio for the Bt cotton technology compared with the conventional practice. The findings are significant when the benefits in terms of the increased income that would accrue to farmers from the use of the Bt cotton technology are considered. The reduced number of insecticide sprays from six to two with the Bt technology also reduces the risks to farmers from insecticide exposure and poisoning.
CSIR:
Council for Scientific and Industrial Research
SARI:
Savanna Agricultural Research Institute
Bt :
Bacillus thuringiensis
BG:
Bollgard
RCBD:
Randomized complete block design
DAP:
Days after planting
NTO:
Non-target organisms
Hillocks RJ. Is there a role for Bt cotton in IPM for smallholders in Africa? Int J Pest Manag. 2005;51:131–41.
Abudulai M, Seini SS, Nboyine J, Seidu A, Ibrahim Y Jr. Field efficacy of some insecticides for control of bollworms and impact on non-target beneficial arthropods in cotton. Expl Agric. 2017. https://doi.org/10.1017/s0014479717000072.
Abudulai M, Abatania L, Salifu AB. Farmers' knowledge and perceptions of cotton insect pests and their control practices in Ghana. J Sci Technol. 2006;26:39–46.
Michel B, Togola M, Téréta I, Traoré NN. Cotton pest management in Mali: issues and recent progress. Cah Agric. 2000;9:109–15.
Martin T, Ochou OG, Hala NF, Vassal JM, Vassayre M. Pyrethroid resistance in the cotton bollworm, Helicoverpa armigera (Hübner) in West Africa. Pest Manag Sci. 2000;56:549–54.
EJF. The deadly chemicals in cotton. Environmental justice foundation in collaboration with pesticide action network. London, UK, ISBN No. 1-904523-10-2, 2007. https://ejfoundation.org//resources/downloads/the_deadly_chemicals_in_cotton.pdf
Naranjo SE. Impacts of Bt transgenic cotton on integrated pest management. J Agric Food Chem. 2011;59:5842–51.
Vaissayre M, Martin T, Vassal JM, Silvie P. Pyrethroid resistance monitoring program for the cotton bollworm in West Africa. 2000. http://www.cnpa.embrapa.br/produtos/algodao/publicacoes/cba3/algo080.pdf. Accessed 17 Nov 2017.
Martin T, Chandre F, Ochou OG, Vaissayre M, Fournier D. Pyrethroid resistance mechanisms in the cotton bollworm Helicoverpa armigera (Lepidoptera: Noctuidae) from West Africa. Pestic Biochem Physiol. 2002;74:17–26.
Gouse M, Kirsten JF, van der Walt WJ. Bt cotton and Bt maize: An evaluation of direct and indirect impact on the cotton and maize farming sectors in South Africa. Pretoria: Department of Agriculture; 2008. p. 99.
Chakraborty J, Sen S, Ghosh P, Sengupta A, Basu D, Das S. Homologous promoter derived constitutive and chloroplast targeted expression of synthetic cry1Ac in transgenic chickpea confers resistance against Helicoverpa armigera. Plant Cell Tiss Organ Cult. 2016;125:521–35.
Kaur A, Sharma M, Sharma C, Kaur H, Kaur N, Sharma S, et al. Pod borer resistant transgenic pigeon pea (Cajanus cajan L.) expressing cry1Ac transgene generated through simplified Agrobacterium transformation of pricked embryo axes. Plant Cell Tiss Organ Cult. 2016;127:717–27.
Mabubu JI, Nawaz M, Hua H. Advances of transgenic Bt-crops in insect pest management: an overview. J Entomol Zool Stud. 2016;4:48–52.
Glare TR, O'Callaghan M. Bacillus thuringiensis: biology, ecology and safety. New York: Wiley; 2000.
Keshavareddy G, Kumar ARV. Bacillus thuringiensis. In: Omkar O, editor. Ecofriendly pest management for food security. Elsevier: Amsterdam; 2016. p. 443–73.
Wu K, Feng H, Guo Y. Evaluation of maize as a refuge for management of resistance to Bt cotton by Helicoverpa armigera (Hubner) in the Yellow River cotton-farming region of China. Crop Prot. 2004;23:523–30.
Traoré H, Héma SAO, Traoré K. Bt cotton in Burkina Faso demonstrates that political will is key for biotechnology to benefit commercial agriculture in Africa. In: Wambugu F, Kamanga D, editors. Biotechnology in Africa, emergence, initiatives and future. Springer, New York, 2014. p. 291.
SAS Institute. SAS user's guide. 9th ed. Cary: SAS Institute; 1998.
Krattiger AF. Insect resistance in crops: a case study of Bacillus thuringiensis (Bt) and its transfer to developing countries. ISAAA Brief No. 2, International Service for the Acquisition of Agri-Biotech Applications. 1996. p. 51.
Li Y, Romeis J, Wang P, Peng Y, Shelton AM. A comprehensive assessment of the effects of Bt cotton on Coleomegilla maculata demonstrates no detrimental effects by Cry1Ac and Cry2Ab. PLoS ONE. 2011;6(7):e22185. https://doi.org/10.1371/journal.pone.0022185.
Moar WJ, Eubanks M, Freeman B, Turnipseed S, Ruberson J, Head G. Effects of Bt cotton on biological control agents in the Southeastern United States. In: Proceedings of the first international symposium on biological control of arthropods; 2002. Honolulu, USA.
Pehu E, Ragasa C. Agricultural biotechnology transgenics in agriculture and their implications for developing countries, Background paper for the World Development Report; 2008, Washington, D. C. The World Bank, 2007.
Romeis J, Meissle M, Bigler F. Transgenic crops expressing Bacillus thuringiensis toxins and biological control. Nat Biotechnol. 2006;24(1):63–71.
Wu K, Guo Y, Lv N, Greenplate JT, Deaton R. Efficacy of transgenic cotton containing a Cry1Ac gene from Bacillus thuringiensis against Helicoverpa armigera (Lepidoptera: Noctuidae) in northern China. J Econ Entomol. 2003;96(4):1322–8.
Héma SAO, Some HN, Traoré O, Greenplate J, Abdennadher M. Efficacy of transgenic cotton plant containing the Cry1Ac and Cry2Ab genes of Bacillus thuringiensis against Helicoverpa armigera and Syllepte derogata in cotton cultivation in Burkina Faso. Crop Prot. 2009;28:205–14.
Kathage J, Qaim M. Economic impacts and impact dynamics of Bt (Bacillus thuringiensis) cotton in India. PNAS. 2012;109(29):11652–6.
Kouser S, Qaim M. Impact of Bt cotton on pesticide poisoning in smallholder agriculture: a panel data analysis. Ecol Econ. 2011;70:2105–13.
MA and EBC conceived and designed the study. JAN, RA, AS and FK collected the data. MA, JAN and EBC analysed the data and drafted the manuscript. MA, JAN, IY and EBC contributed to the critical revision of the manuscript for important intellectual content. All authors read and approved the final manuscript.
The authors acknowledge Monsanto Company for providing seeds of FK 95 Bollgard II cotton and funds for the study. The authors specially thank Fred Anaman, James Yaw Kwabena, Mohammed Hafiz Alhassan, Rebecca Kaba and Soweiba Abdulai of the Entomology Section of CSIR-SARI for technical support.
The authors declare that there are no competing interests regarding the publication of this paper.
The data sets used and/or analysed in the preparation of the manuscript can be made available to anyone who desires to see it from the corresponding author on request.
Ethical approval and consent from our organization and farmer participants were sought and granted for the conduct of the study and to participate in research paper writing and submission to any relevant journal.
This research was supported through funds provided by Monsanto Company, USA.
CSIR-Savanna Agricultural Research Institute, P. O. Box 52, Tamale, Ghana
Mumuni Abudulai, Emmanuel Boachie Chamba, Jerry Asalma Nboyine, Ramson Adombilla, Iddrisu Yahaya, Ahmed Seidu & Foster Kangben
Correspondence to Mumuni Abudulai.
Abudulai, M., Boachie Chamba, E., Asalma Nboyine, J. et al. Field efficacy of genetically modified FK 95 Bollgard II cotton for control of bollworms, Lepidoptera, in Ghana. Agric & Food Secur 7, 81 (2018) doi:10.1186/s40066-018-0232-y
DOI: https://doi.org/10.1186/s40066-018-0232-y
Gossypium hirsutum L
Bollworms
Yield loss
Insecticide control
Bacillus thuringiensis (Bt)
Performance analysis of power-splitting relaying protocol in SWIPT based cooperative NOMA systems
Huu Q. Tran ORCID: orcid.org/0000-0001-7636-43781,2,
Ca V. Phan1 &
Quoc-Tuan Vien3
EURASIP Journal on Wireless Communications and Networking volume 2021, Article number: 110 (2021) Cite this article
This paper investigates relay-assisted simultaneous wireless information and power transfer (SWIPT) for the downlink of cellular systems. Cooperative non-orthogonal multiple access (C-NOMA) is employed along with a power-splitting protocol to enable both energy harvesting (EH) and information processing (IP). A downlink model consisting of a base station (BS) and two users is considered, in which the near user (NU) is selected as a relay to forward the received signal from the BS to the far user (FU). Maximum ratio combining is then employed at the FU to combine the signals received from the BS and the NU. Closed-form expressions of outage probability, throughput, ergodic rate and energy efficiency (EE) are first derived for the SWIPT based C-NOMA, considering the scenarios both with and without a direct link between the BS and the FU. The impacts of EH time, EH efficiency, power-splitting ratio, source data rate and distance between the nodes on the performance are then investigated. The simulation results show that C-NOMA with a direct link outperforms C-NOMA without a direct link. Moreover, the performance of C-NOMA with a direct link is also higher than that of OMA. Specifically, (1) the outage probability for C-NOMA in both the direct and relaying link cases is always lower than that for OMA; (2) the outage probability, throughput and ergodic rate vary with \(\beta\); (3) the EE of both users is obtained in the SNR range from \(-10\) to 5 dB and decreases linearly as the SNR increases. Numerical results are provided to verify the findings.
Non-orthogonal multiple access (NOMA) has recently been shown to be one of the potential candidates for 5G-and-beyond wireless networks, helping to overcome the limitations of current technologies in terms of energy efficiency, latency and user fairness [1,2,3]. One of the critical features of NOMA techniques is that multiple users are permitted to use the same resources in the time, frequency and/or code domain [4]. This means that a strong user, i.e. the NU, is given a lower power allocation factor than a weak user, i.e. the FU, to ensure user fairness [1, 5,6,7]. Two key techniques applied in NOMA are superposition coding (SC) [2] and successive interference cancellation (SIC) [1, 2]. As an extended version of NOMA, cooperative NOMA (C-NOMA) [8, 9] exploits a user with better channel conditions, namely a relaying user, to help forward the information to another user with poor channel conditions. Therefore, it can increase the coverage region of the BS and improve the performance of NOMA systems.
Radio frequency (RF) based energy harvesting (EH) can help solve energy constraint issues in mobile devices, wireless sensors and relay nodes of wireless communication networks [10, 11]. At relay nodes, energy harvesting is normally performed in the first phase of the signal transmission time block. The harvested energy is dedicated to (i) consumption at the relay and (ii) forwarding the decoded information to the destination.
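As a brief numerical illustration of the power-splitting EH model that underlies such protocols (the symbols below follow the standard PSR formulation, with a splitting ratio, conversion efficiency, source power and channel gain, rather than this paper's specific system model):

```python
def psr_relay_energy(P_s, h_gain, rho, eta, T):
    """Standard power-splitting EH model at a relay (illustrative).

    A fraction rho of the received signal power is harvested during the first
    half of the block (duration T/2); the remaining 1 - rho is used for
    information processing. Returns the harvested energy and the transmit
    power available for forwarding in the second half of the block.
    """
    harvested = eta * rho * P_s * h_gain * (T / 2)   # E_h = eta*rho*P_s*|h|^2*(T/2)
    relay_power = harvested / (T / 2)                # spent forwarding over T/2
    return harvested, relay_power

print(psr_relay_energy(P_s=1.0, h_gain=0.5, rho=0.3, eta=0.7, T=1.0))
```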
The combination of simultaneous wireless information and power transfer (SWIPT) and C-NOMA in 5G systems has been shown to provide better energy efficiency and a larger coverage area than OMA [7, 12]. Moreover, by forwarding the information to far users, relay-based SWIPT C-NOMA can improve the integrity and reliability of the data delivered to weak users [13]. The power-splitting (PSR) and time-switching (TSR) protocols are exploited at SWIPT-based relaying nodes to harvest energy and process information [5, 6, 14, 15]. In [16], the sum throughput of users in a SWIPT based C-NOMA system was studied, and exact and approximate closed-form expressions of the outage probability were obtained. In [17], two SWIPT-based protocols, namely CNOMA-SWIPT-PS and CNOMA-SWIPT-TS, were proposed, and their effectiveness was demonstrated over OMA and over the work in [18]. In [19], a SWIPT based C-NOMA system was investigated; a joint design of the power allocation coefficients and the PS factor was proposed to improve the system performance, and analytical expressions for the outage probabilities of the near and far users were derived. In [20], a PSR based SWIPT protocol for C-NOMA was studied; compared with the protocol in [21], it considerably reduces the outage probability of the strong users and increases the system throughput. In [22], the outage probability and throughput of the proposed TSR protocol were shown to be superior to those of the conventional TSR protocol.
There are two main data-forwarding schemes in relay-assisted C-NOMA: decode-and-forward (DF) and amplify-and-forward (AF) [1]. Furthermore, in relay-based C-NOMA, far users normally receive only the signal forwarded from the relay nodes [23,24,25,26,27], because obstacles block the direct propagation path [5, 6, 28]. In system models without such obstacles, however, far users can receive signals from both the relay and the BS; this is referred to as relay-based C-NOMA with direct links [25, 29,30,31]. In [29], a dynamic DF based C-NOMA scheme for downlink transmission was proposed, and its outage probability was derived by applying point-process theory. In [32], three cooperative relaying schemes were proposed for a DF based C-NOMA system; their performance was superior to cooperative DF relaying without direct links and to multiple-user superposition transmission without relaying. In [33], a DF relay-aided C-NOMA system with a direct link between the BS and the weak user was studied. In [34], a cooperative device-to-device system with NOMA, in which the BS can communicate simultaneously with all users, was considered; two decoding strategies, namely a single-signal decoding scheme and a maximum ratio combining (MRC) decoding scheme, were proposed, and the numerical results showed that the ergodic sum rate and the outage probability are better than those of conventional NOMA schemes. The authors in [35] proposed a protocol that permits the BS to adaptively switch between direct and indirect modes in a two-user C-NOMA system; the analytical results demonstrated that the proposed protocol outperforms the conventional C-NOMA protocol. In [36], the outage performance of a dual-hop DF based SWIPT NOMA system with a direct link was presented.
The use of relays to forward information from sources to destinations and to harvest RF energy has been investigated in current technologies such as OFDMA and SWIPT/WPT [37,38,39]. In [37], a relay selection scheme, namely OFDMA relaying selection, was proposed for OFDM multihop cooperative networks with L relays and M hops (\(M,\,L\ge 2\)); the end-to-end outage performance of the proposed approach was evaluated and compared with that of the OFDM relaying selection approach. In [38], a relay selection scheme was investigated in a two-hop relay-assisted multi-user OFDMA network with K fixed relays and L users (\(2\le L\le K\)), where the end-nodes exploit the SWIPT mechanism based on the power-splitting (PS) technique; the relay selection jointly optimizes the PS ratio of the end nodes as well as the relay, carrier and power assignment so as to maximize the system sum-rate under harvested-energy and transmit-power constraints. In [39], a survey of SWIPT- and WPT-assisted energy-harvesting techniques was presented, providing a detailed description of various potential emerging technologies for fifth generation (5G) communications with SWIPT/WPT.
In this paper, we investigate a wireless communication system model that ensures user fairness through power allocation and harvests RF energy from the source, namely a C-NOMA based system model. We combine SWIPT and C-NOMA in our system model to study its performance in terms of outage probability, throughput, ergodic rate and energy efficiency. The investigated model consists of one base station and two users, where one user acts as a relaying user and the other is a FU. The BS simultaneously broadcasts the superposition-coded signals to both users, so the FU also receives the signal from the BS. Following the NOMA mechanism, the FU with poor channel conditions is allocated more power than the NU with strong channel conditions. Moreover, SIC is performed at the NU, which acts as the relaying user. After receiving the transmitted signal, the relaying user decodes the FU\(^{'}\)s signal and its own signal using SIC. The decoded signal of the FU at the relaying user is then forwarded to the FU using the DF protocol. The relaying user employs the PSR protocol in its communication process. For signal processing at the relay node, delay-limited transmission (DLT) and delay-tolerant transmission (DTT) modes can be exploited [14]. The DLT mode refers to block-wise decoding of the received signal at the destination node, whereas the DTT mode refers to storing the received data blocks in the buffer of the destination node prior to decoding. The key contributions of this paper are summarized as follows:
Closed-form expressions of the performance metrics, i.e., the outage probability, throughput, ergodic rate and EE, are derived for the PSR protocol in the DLT and DTT modes with a direct link. This performance is compared with that of C-NOMA without a direct link and with OMA. The simulation results show that C-NOMA with a direct link achieves better performance than both C-NOMA without a direct link and OMA.
The impacts of the above-mentioned parameters on the direct-link performance are evaluated via numerical simulations. These results provide a basis for choosing suitable parameter values so that the system model achieves a good trade-off among the performance metrics and between the users.
The rest of the paper is organized as follows. Section 2 presents the proposed system model and assumptions in detail. Section 3 analyzes the performance metrics, including the outage probability, throughput, ergodic rate and EE. Section 4 discusses the simulation results. Finally, Sect. 5 gives the main conclusions.
In this section, we investigate the combination of the C-NOMA based system model and the PSR based SWIPT technique. From the system model, we analyze and derive the performance metrics in terms of outage probability, throughput, ergodic rate and energy efficiency, taking into account both the direct and relaying links among the source, relay and destination. We then use Monte Carlo simulations to verify the analytical results. Furthermore, the performance is compared between the C-NOMA and OMA schemes, and between the direct-link and relaying-link cases, to clarify which scheme is superior.
Figure 1 describes the system model with one source S and two users \(D_{1}\) and \(D_{2}\). Both users receive the transmitted signal from the source S. Because \(D_{2}\) is far from S, \(D_{1}\) also helps S forward the information to \(D_{2}\). According to [40], the BS first broadcasts a threshold signal to all users in its coverage. Each user feeds back one bit, i.e., 0 or 1, to the BS indicating its received signal strength, obtained by comparing its own signal with the threshold signal; the threshold value therefore needs to be chosen carefully. In [41], optimal thresholds for systems with different power constraints were also developed. After receiving the feedback bits from the users, the BS decides which user is the strong or weak user and sends this information to the users. In this model, the energy is harvested from the RF signal received at the relay. The harvested energy is stored in the battery, which is a finite energy source: a part of this energy is used for the operation of the relay, while the remaining part is kept in the battery, whose capacity is assumed to be finite. Specifically, to maintain its operation, \(D_1\) harvests energy from S by employing the PSR protocol, while the DF scheme is employed to decode and forward the information from S to \(D_2\). This system model can also be applied to wireless sensor and mobile networks in which the nodes and/or users operate in urban environments with tall buildings, e.g., some system models in [42,43,44,45,46,47].
Energy harvesting and data transmission protocol
Figure 2 shows the power-splitting (PS) protocol for EH and IP. Specifically, in the first T/2, S sends data to both \(D_{1}\) and \(D_{2}\), and \(D_{1}\) harvests energy from S using the fraction \(\beta {P_S}\) of the received signal power. Because the data is already sent to \(D_{2}\) in the first T/2, it is not necessary to resend the same information to \(D_{2}\) in the second time slot, unless S wants to send new information, which is beyond the scope of this work. In other words, in the second time slot \(D_{2}\) only decodes the information forwarded from \(D_{1}\), which was processed with the remaining fraction \(\left( {1-\beta }\right) {P_S}\) of the signal power, and combines it via MRC with the signal received from S in the first time slot.
Table 1 lists the definitions of the parameters used in the model and throughout the paper.
Table 1 Symbol definition
Energy harvesting at \(D_{1}\)
We note that the amount of energy harvested from the RF signal is small; the main power source of the relay is its battery, in which the harvested energy is stored. Following the same approach as in [48], in this paper we assume that \(D_{1}\) uses all of the harvested energy to relay the detected message of \(D_{2}\), and that this harvested energy is sufficient for data transmission and processing.
The observation at \(D_{1}\) is determined based on SC as follows
$$\begin{aligned} {y_{{D_1}}} = {h_1}(\sqrt{{\alpha _1}{P_S}} {x_1} + \sqrt{{\alpha _2}{P_S}} {x_2}) + {n_{{D_1}}}, \end{aligned}$$
It is assumed that \(E[{x_1^2}] = E[{x_2^2}] = 1\) and, without loss of generality, \({\alpha _2}>{\alpha _1}>0\) with \({\alpha _1}+{\alpha _2}=1\).
As shown in Fig. 2, \(D_1\) only harvests energy from S during the first T/2. The harvested energy at \(D_1\) is thus computed by
$$\begin{aligned} {E_H} = \beta \eta {\left| {{h_1}} \right| ^2}\rho \left( {T/2} \right) , \end{aligned}$$
From (2), we note that the energy harvester may operate in its non-linear region. Several works have investigated the non-linearity of practical energy-harvesting circuits [49, 50]. To keep this issue within the scope of our study, several EH circuits can be placed in parallel to yield a sufficiently large linear conversion region [51, 52].
The harvested energy is consumed at \(D_1\) to forward the decoded data to \(D_2\). The transmit power at \(D_1\), obtained from the harvested energy \(E_H\), is given by
$$\begin{aligned} {P_r} = \frac{{{E_H}}}{{\left( {T/2} \right) }} = \frac{{\beta \eta {{\left| {{h_1}} \right| }^2}\rho \left( {T/2} \right) }}{{\left( {T/2} \right) }} = \beta \eta {\left| {{h_1}} \right| ^2}\rho . \end{aligned}$$
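As a quick numerical illustration of Eqs. (2) and (3), the following short Python sketch (not part of the authors' Matlab implementation) computes the harvested energy and the resulting relay transmit power under the PSR protocol; all numerical values are illustrative assumptions rather than the settings of Table 2.

# Minimal sketch of Eqs. (2)-(3): energy harvested at D1 during the first T/2
# and the corresponding relay transmit power P_r (the T/2 factor cancels).
beta = 0.7        # power-splitting ratio (assumed)
eta = 0.8         # energy-conversion efficiency (assumed)
h1_sq = 0.5       # one realization of the channel gain |h1|^2 (assumed)
rho = 10.0        # normalized source power / transmit SNR (assumed)
T = 1.0           # duration of one transmission block

E_H = beta * eta * h1_sq * rho * (T / 2)   # Eq. (2)
P_r = E_H / (T / 2)                        # Eq. (3)
assert abs(P_r - beta * eta * h1_sq * rho) < 1e-12
print(E_H, P_r)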
Equations (2) and (3) correspond to a linear energy-harvesting circuit. As mentioned above, however, the energy harvester may operate in a non-linear region. Several recent studies have nevertheless adopted the linear harvester architecture [43, 44]. Therefore, to address the non-linearity of practical EH circuits while retaining the linear model, a sufficiently large linear conversion region can be obtained by placing several EH circuits in parallel.
Information processing at \(D_1\) and \(D_2\)
In fact, the NOMA scheme can only be enabled when the strong and weak users are identified. Hence, the channel state information (CSI) of the links from S to \(D_{1}\) and from \(D_{1}\) to \(D_{2}\) is critical. Different approaches exist for sharing this CSI [53, 54], a typical one being the use of pilot sequences.
Based on the NOMA principle, \(D_{2}\) is allocated more power than \(D_{1}\). By applying SIC [14], \(D_{1}\) decodes both the signal \(x_{2}\) and its own signal \(x_{1}\); the SIC is assumed to be perfect. From (1), the received signal-to-interference-plus-noise ratio (SINR) at \(D_{1}\) for detecting \(x_{2}\) of \(D_{2}\) is given by
$$\begin{aligned} {\gamma _{2,{D_1}}} = \frac{{{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _2}\rho }}{{{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _1}\rho + 1}}. \end{aligned}$$
After SIC, no interference remains in the received signal at \(D_1\). Therefore, the received SNR at \(D_{1}\) for detecting its own message \(x_{1}\) is given by
$$\begin{aligned} {\gamma _{1,{D_1}}} = {\psi _I}{\left| {{h_1}} \right| ^2}{\alpha _1}\rho . \end{aligned}$$
Over direct link, the signal at \(D_{2}\) is given by
$$\begin{aligned} {y_{{1,D_2}}} = {h_0}(\sqrt{{\alpha _1}{P_S}} {x_1} + \sqrt{{\alpha _2}{P_S}} {x_2}) + {n_{{D_2}}}. \end{aligned}$$
Therefore, the received SINR at \(D_{2}\) to detect \(x_2\) for the direct link is given by
$$\begin{aligned} {\gamma _{1,{D_2}}} = \frac{{{{\left| {{h_0}} \right| }^2}{\alpha _2}\rho }}{{{{\left| {{h_0}} \right| }^2}{\alpha _1}\rho + 1}}. \end{aligned}$$
Over the relaying link, the decoded signal \(x_2\) at \(D_1\) is forwarded to \(D_{2}\). Thus, the received signal at \(D_{2}\) can be expressed as
$$\begin{aligned} {y_{{2,D_2}}} = \left( {\sqrt{{P_r}} {x_2}} \right) {h_2} + n_{D_2}. \end{aligned}$$
Substituting Eq. (3) into Eq. (8), we obtain
$$\begin{aligned} {y_{{2,D_2}}} = \left( {\sqrt{\beta \eta \rho } } \right) \left| {{h_1}} \right| {h_2}{x_2} + {n_{{D_2}}}. \end{aligned}$$
The received SNR at \(D_2\) over the relaying link is thus expressed by
$$\begin{aligned} {\gamma _{2,{D_2}}} = {\left| {{h_2}} \right| ^2}{\left| {{h_1}} \right| ^2}{\psi _E}\rho . \end{aligned}$$
At \(D_{2}\), the signals from both relaying and direct links are combined by employing the MRC mechanism. The combined SINR can be obtained by
$$\begin{aligned} \gamma _{{D_2}}^{MRC} = {\left| {{h_1}} \right| ^2}{\left| {{h_2}} \right| ^2}\psi _{E}\rho + \frac{{{{\left| {{h_0}} \right| }^2}{\alpha _2}\rho }}{{{{\left| {{h_0}} \right| }^2}{\alpha _1}\rho + 1}}. \end{aligned}$$
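To make the link-level quantities concrete, the following Python sketch draws one Rayleigh-fading realization and evaluates the SINRs/SNRs in Eqs. (4), (5), (7), (10) and (11). It is only an illustration: the parameter values, the channel variances, and the identifications \(\psi_I = 1-\beta\) and \(\psi_E = \beta\eta\) (suggested by the PS protocol of Fig. 2) are assumptions of ours.

import numpy as np

rng = np.random.default_rng(1)
alpha1, alpha2 = 0.2, 0.8            # NOMA power-allocation coefficients (assumed)
beta, eta, rho = 0.7, 0.8, 10.0      # PS ratio, EH efficiency, transmit SNR (assumed)
psi_I, psi_E = 1.0 - beta, beta * eta
Omega0, Omega1, Omega2 = 0.5, 1.0, 1.0   # mean channel gains (assumed)
h0_sq = rng.exponential(Omega0)      # S -> D2 (direct link)
h1_sq = rng.exponential(Omega1)      # S -> D1
h2_sq = rng.exponential(Omega2)      # D1 -> D2

gamma_2_D1 = psi_I * h1_sq * alpha2 * rho / (psi_I * h1_sq * alpha1 * rho + 1)  # Eq. (4)
gamma_1_D1 = psi_I * h1_sq * alpha1 * rho                                        # Eq. (5)
gamma_1_D2 = h0_sq * alpha2 * rho / (h0_sq * alpha1 * rho + 1)                   # Eq. (7)
gamma_2_D2 = h1_sq * h2_sq * psi_E * rho                                         # Eq. (10)
gamma_MRC = gamma_2_D2 + gamma_1_D2                                              # Eq. (11)
print(gamma_2_D1, gamma_1_D1, gamma_MRC)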
This section presents the analysis of the performance of the system model in which closed-form expressions of the outage probability, throughput, ergodic rate and EE are determined in DTT and DLT modes.
Outage performance
Outage probability at \(D_{1}\)
User \(D_1\) is not in outage when it can decode both signals \(x_1\) and \(x_2\) received from the BS. The outage probability at \(D_1\) is thus obtained by
$$\begin{aligned} P_{D_1} = 1 - \Pr \left( {{\gamma _{2,{D_1}}}> \gamma _{t{h_2}},{} {\gamma _{1,{D_1}}} > \gamma _{t{h_1}}} \right) , \end{aligned}$$
where, \(\gamma _{t{h_1}} = {2^{2{R_1}}} - 1\) and \(\gamma _{t{h_2}} = {2^{2{R_2}}} - 1\) represent the threshold SNRs at \(D_{1}\) for detecting signals \(x_1\) and \(x_2\), respectively.
Theorem 1
The outage probability at \(D_{1}\) is given by
$$\begin{aligned} P_{D1} = 1 - {e^{ - \frac{{{\theta _1}}}{{{\Omega _1}}}}}, \end{aligned}$$
where, \({\theta _1} = \max ({\tau _1},{\nu _1}),{\tau _1} = \frac{{\gamma _{th2}}}{{\rho {\psi _I}({\alpha _2} - {\alpha _1}\gamma _{th2})}}\) and \({\nu _1} = \frac{{\gamma _{th1}}}{{{\alpha _1}{\psi _I}\rho }}\) with \({\alpha _2} > {\alpha _1}\gamma _{th2}.\)
From (12), the outage probability at \(D_{1}\) can be determined by
$$\begin{aligned} {P_{{D_1}}}&= {} 1\!-\!Pr\left( {\frac{{{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _2}\rho }}{{{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _1}\rho \!+\!1}}\!>\!{\gamma _{t{h_2}}},{} {} {} {} \psi _I{{\left| {{h_1}} \right| }^2}{\alpha _1}\rho \!>\! {\gamma _{t{h_1}}}} \right) \nonumber \\&= {} 1 - Pr\left( {{{\left| {{h_1}} \right| }^2}> \,\frac{{{\gamma _{t{h_2}}}}}{{\rho {\psi _I}\left( {{\alpha _2} - {\alpha _1}{\gamma _{t{h_2}}}} \right) }},\,\,{{\left| {{h_1}} \right| }^2} > \,\frac{{{\gamma _{t{h_1}}}}}{{{\alpha _1}\rho {\psi _I}}}} \right) \nonumber \\&= {} 1 - Pr({\left| {{h_1}} \right| ^2} \ge {\theta _1})\nonumber \\&= {} 1 - \int _{{\theta _1}}^\infty {{f_{{{\left| {{h_1}} \right| }^2}}}(x)dx} \end{aligned}$$
Applying the following equation
$$\begin{aligned} {f_{{h_i}}}(x) = \frac{1}{{{\Omega _i}}}\exp ( - \frac{x}{{{\Omega _i}}}),{} {} {} {} i \in \{ SD_{1},D_{1}D_{2}\mathrm{{\} }} \end{aligned}$$
Eq. (14) can be obtained as follows
$$\begin{aligned} {P_{{D_1}}}&= {} 1 - \int _{{\theta _{1}}}^\infty {\frac{1}{{{\Omega _1}}}{e^{\frac{{ - x}}{{{\Omega _1}}}}}dx}\nonumber \\&= {} 1 - {e^{ - \,\,\frac{{{\theta _{1}}}}{{{\Omega _1}}}}} \end{aligned}$$
The proof is completed. \(\square\)
Corollary 1
From (15), the outage probability at \(D_{1}\) for high SNR \(\rho \rightarrow \infty\) is expressed by
$$\begin{aligned} P_{D_1}^\infty = \frac{{{\theta _1}}}{{{\Omega _1}}} \end{aligned}$$
From (12), when \(\rho \rightarrow \infty\), the outage probability at \(D_{1}\) with \(1 - {e^{-x}} \approx x\) is given by
$$\begin{aligned} P_{D_1}^{\infty }&= {} 1 - {P_r}\left( {{{\left| {{h_1}} \right| }^2} \ge \theta _1} \right) \nonumber \\&= {} 1 - {e^{ - \frac{{\theta _1}}{{\Omega {\,_1}}}}}\nonumber \\&= {} \frac{{{\theta _1}}}{{{\Omega _1}}} \end{aligned}$$
Based on (15) and the condition \({\alpha _2} > {\alpha _1}\gamma _{th2}\), \(P_{D_1}\) depends on \(\tau _{1}\) and on \(\Omega _{1}\), the mean of the channel gain \(|h_1|^2\). The shorter the distance between S and \(D_1\), the larger \(\Omega _{1}\) and the lower \(P_{D_1}\), i.e., the better the transmission quality, and vice versa.
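A small Python sketch of Theorem 1 and Corollary 1 may help to visualize this behaviour: it evaluates the exact outage probability of \(D_1\) and its high-SNR approximation. The parameter values and the identification \(\psi_I = 1-\beta\) are our assumptions, not the settings of Table 2.

import numpy as np

def outage_D1(rho, alpha1=0.2, alpha2=0.8, R1=1.0, R2=1.0, beta=0.7, Omega1=1.0):
    # Closed-form outage at D1 (Theorem 1) and its high-SNR limit (Corollary 1).
    psi_I = 1.0 - beta
    g1 = 2 ** (2 * R1) - 1                   # threshold SNR for x1
    g2 = 2 ** (2 * R2) - 1                   # threshold SNR for x2
    if alpha2 <= alpha1 * g2:                # Theorem 1 requires alpha2 > alpha1*gamma_th2
        return 1.0, 1.0
    tau1 = g2 / (rho * psi_I * (alpha2 - alpha1 * g2))
    nu1 = g1 / (alpha1 * psi_I * rho)
    theta1 = max(tau1, nu1)
    exact = 1.0 - np.exp(-theta1 / Omega1)   # Eq. (13)
    high_snr = theta1 / Omega1               # Eq. (16)
    return exact, high_snr

print(outage_D1(rho=10 ** (20 / 10)))        # evaluated at a 20 dB transmit SNR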
Outage probability at \(D_{2}\) for no direct link
User \(D_2\) is in outage when \(D_1\) cannot detect \(x_2\), or when \(D_2\) cannot recover the information forwarded from \(D_1\). Hence, the outage probability at \(D_2\) is decomposed as in (18). By calculating \(J_2\) and \(J_3\), the outage probability without the direct link is determined as follows.
$$\begin{aligned} \begin{array}{l} P_{{D_2},nodir} = \underbrace{\Pr \left( {{\gamma _{2,{D_1}}}< \gamma _{t{h_2}}^{HD}} \right) }_{{J_2}} + \underbrace{\Pr \left( {{\gamma _{2,{D_2}}} < \gamma _{t{h_2}}^{HD},{\gamma _{2,{D_1}}} > \gamma _{t{h_2}}^{HD}} \right) }_{{J_3}}, \end{array} \end{aligned}$$
The outage probability at \(D_{2}\) can be obtained by
$$\begin{aligned} P_{{D_2},nodir} = 1 - {e^{ - \frac{{{\tau _1}}}{{{\Omega _1}}}}}+ \int \limits _{{\tau _1}}^\infty {\left( {1 - {{\mathop {e}\nolimits } ^{ - \frac{{\gamma _{t{h_2}}}}{{x{\psi _E}\rho {\Omega _2}}}}}} \right) \frac{1}{{{\Omega _1}}}\exp \left( {\frac{{- x}}{{{\Omega _1}}}} \right) } dx. \end{aligned}$$
Considering the Rayleigh fading channel, \(J_2\) can be given by
$$\begin{aligned} {J_2} = 1 - \exp \left( {\frac{{ - {\tau _1}}}{{{\Omega _1}}}} \right) . \end{aligned}$$
and \(J_3\) can be expressed as (see(21)).
$$\begin{aligned} \begin{array}{l} {J_3}=\Pr \left( {{{\left| {{h_2}} \right| }^2}{{\left| {{h_1}} \right| }^2}{\psi _E}\rho<\gamma _{t{h_2}},\frac{{{{\left| {{h_1}} \right| }^2}{\psi _I}{\alpha _2}\rho }}{{{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _1}\rho +1}}>\gamma _{t{h_2}}} \right) \\ \\ \quad \,=\left\{ \begin{array}{l} \Pr \left( {{{\left| {{h_2}} \right| }^2}<\frac{{\gamma _{t{h_2}}}}{{{{\left| {{h_1}} \right| }^2}{\psi _E}\rho }}, {{\left| {{h_1}} \right| }^2}>\frac{{\gamma _{t{h_2}}}}{{{\psi _I}\rho \left( {{\alpha _2}-{\alpha _1}\gamma _{t{h_2}}} \right) }}} \right) ,{\alpha _2}> {\alpha _1}\gamma _{t{h_2}}\\ \\ 0,\,\,{\alpha _2} \le {a_1}\gamma _{t{h_2}} \end{array} \right. \\ \\ \quad \,= \int \limits _{\frac{{\gamma _{t{h_2}}}}{{{\psi _I}\rho \left( {{\alpha _2} - {\alpha _1}\gamma _{t{h_2}}} \right) }}}^\infty {\int \limits _0^{\frac{{\gamma _{t{h_2}}}}{{x{\psi _E}\rho }}} {{f_{{{\left| {{h_1}} \right| }^2}}}(x){f_{{{\left| {{h_2}} \right| }^2}}}(y)dxdy} }= \int \limits _{{\tau _1}}^\infty {\frac{1}{{{\Omega _{1}}}}\left[ {1 - \exp \left( {\frac{{ - \gamma _{t{h_2}}}}{{x{\psi _E}\rho {\Omega _{2}}}}} \right) } \right] \exp \left( {\frac{{ - x}}{{{\Omega _{1}}}}} \right) } dx. \end{array} \end{aligned}$$
The outage probability at \(D_2\) is given by
$$\begin{aligned} P_{{D_2},nodir} = \,{J_2}\, + \,{J_3}.\, \end{aligned}$$
\(\square\)
The outage probability at \(D_{2}\) for high SNR can be determined as (see(23)), where \({K_1}(.)\) is the first order modified Bessel function of the second kind [55, Eq.(3.324.1)].
$$\begin{aligned} P_{{D_2},nodir}^{\infty }&= {} \Pr \left( {\frac{{{\alpha _2}}}{{{\alpha _1}}}< \gamma _{t{h_2}}} \right) + \Pr \left( {{{\left| {{h_2}} \right| }^2}< \frac{{\gamma _{t{h_2}}}}{{{\psi _E}\rho {{\left| {{h_1}} \right| }^2}}},\frac{{{\alpha _2}}}{{{\alpha _1}}}> \gamma _{t{h_2}}} \right) \nonumber \\&= {} \Pr \left( {{{\left| {{h_2}} \right| }^2} < \frac{{\gamma _{t{h_2}}}}{{{\psi _E}\rho {{\left| {{h_1}} \right| }^2}}},\frac{{{\alpha _2}}}{{{\alpha _1}}} > \gamma _{t{h_2}}} \right) \nonumber \\&= {} \int \limits _0^\infty {\left[ {1 - \exp \left( {\frac{{ - \gamma _{t{h_2}}}}{{{\psi _E}\rho {\Omega _2}x}}} \right) } \right] } \frac{1}{{{\Omega _1}}}\exp \left( {\frac{{ - x}}{{{\Omega _1}}}} \right) dx \nonumber \\&= {} 1 - 2\sqrt{\frac{{\gamma _{t{h_2}}}}{{{\psi _E}\rho {\Omega _1}{\Omega _2}}}} {K_1}\left( {2\sqrt{\frac{{\gamma _{t{h_2}}}}{{{\psi _E}\rho {\Omega _1}{\Omega _2}}}} } \right) . \end{aligned}$$
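The high-SNR floor in (23) is easy to evaluate numerically; the sketch below (our illustration, not the authors' code) uses the modified Bessel function \(K_1\) from SciPy, with \(\psi_E = \beta\eta\) and illustrative parameter values.

import numpy as np
from scipy.special import kv

def outage_D2_nodir_highsnr(rho, R2=1.0, beta=0.7, eta=0.8, Omega1=1.0, Omega2=1.0):
    # High-SNR outage at D2 without a direct link, Eq. (23).
    psi_E = beta * eta
    g2 = 2 ** (2 * R2) - 1
    z = 2.0 * np.sqrt(g2 / (psi_E * rho * Omega1 * Omega2))
    return 1.0 - z * kv(1, z)    # kv(1, .) is the modified Bessel function K_1

print(outage_D2_nodir_highsnr(rho=10 ** (30 / 10)))   # at 30 dB transmit SNR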
Outage probability at \(D_{2}\) for User Relaying with Direct Link
An outage occurs at \(D_{2}\) either when \(x_{2}\) is detected at \(D_{1}\) but the combined SINR after MRC is smaller than the target SNR, or when neither \(D_{1}\) nor \(D_{2}\) can detect \(x_{2}\). The outage probability at \(D_{2}\) is therefore given by (see(24))
$$\begin{aligned} {P_{{D_2},dir}} = \underbrace{{P_r} (\gamma _{{D_2}}^{MRC}< {\gamma _{th{{}_2}}})}_{{J_4}}\underbrace{{P_r}\left( {{\gamma _{2,{D_1}}} > {\gamma _{th{{}_2}}}} \right) }_{{J_5}} + {} \underbrace{{P_r}({\gamma _{2,{D_1}}}< {\gamma _{th{{}_2}}},{\gamma _{1,{D_2}}} < {\gamma _{th{{}_2}}})}_{{J_6}}, \end{aligned}$$
The outage probability at \(D_{2}\) can be given by (see(25))
$$\begin{aligned} P_{{D_2},dir}&= {} \int _0^\infty {\int _0^{\psi _{I}\,{\tau _1}} {\frac{1}{{{\Omega _0}{\Omega _1}}}\left( {1-{e^{-\frac{{\gamma _{t{h_2}}}}{{x\psi _{E}\rho {\Omega _2}}}+\frac{{y{\alpha _2}}}{{x\psi _{E}{\Omega _2}\left( {y{\alpha _1}\rho +1} \right) }}}}} \right) {e^{-\frac{x}{{\Omega {_1}}}\!-\!\frac{y}{{{\Omega _0}}}}}dxdy}}\times {e^{ - \frac{{{\tau _1}}}{{{\Omega _1}}}}}\,\nonumber \\&+ \left( {1 - {e^{ - \frac{{\tau _1}}{{{\Omega _1}}}}}} \right) \left( {1 - {e^{ - \,\,\frac{{{\tau _1}\,\psi _{I}}}{{{\Omega _0}}}}}} \right) \end{aligned}$$
From (24), the outage probability at \(D_{2}\) is determined by (see(26)), (see(27)), (see(28))
$$\begin{aligned} {J_4}&= {} \Pr \left( {{{\left| {{h_2}} \right| }^2}< \frac{{\gamma _{t{h_2}}}}{{{{\left| {{h_1}} \right| }^2}\psi _{E}\rho }} - \frac{{{{\left| {{h_0}} \right| }^2}{\alpha _2}}}{{{{\left| {{h_1}} \right| }^2}\psi _{E}\left( {{{\left| {{h_0}} \right| }^2}{\alpha _1}\rho + 1} \right) }},{{\left| {{h_0}} \right| }^2} < {\tau _1}\,\psi _{I}} \right) \nonumber \\&= {} \int _0^\infty {\int _0^{\psi _{I}\,{\tau _1}} {\int \limits _0^{\frac{{\gamma _{t{h_2}}}}{{x\,\psi _{E}\rho }}\,\, - \,\,\frac{{y{\alpha _2}}}{{x\,\psi _{E}\left( {y\,{\alpha _1}\rho + 1} \right) }}} {{f_{{{\left| {h{\,_1}} \right| }^2}}}\left( x \right) {f_{{{\left| {{h_0}} \right| }^2}}}\left( y \right) {f_{{{\left| {h{\,_2}} \right| }^2}}}\left( z \right) dxdydz} } }\nonumber \\&= {} \int _0^\infty {\int _0^{\psi _{I}\,{\tau _1}} {\frac{1}{{{\Omega _0}{\Omega _1}}}\left( {1 - {e^{ - \,\,\frac{{\gamma _{t{h_2}}}}{{x\,\psi _{E}\rho \,\Omega {\,_2}}}\,\, + \,\,\frac{{y{\alpha _2}}}{{x\,\psi _{E}\Omega {\,_2}\left( {y\,{\alpha _1}\rho + 1} \right) }}}}} \right) {e^{ - \,\,\frac{x}{{\Omega {\,_1}}}\,\, - \,\,\frac{y}{{{\Omega _0}}}}}dxdy} } \end{aligned}$$
$$\begin{aligned} {J_5}&= {} {\left| {{h_1}} \right| ^2} > {\tau _1}=\int _{\tau _1}^\infty {{f_{{{\left| {h{\,_1}} \right| }^2}}}\left( x \right) dx}= {e^{ - \,\,\frac{{\tau _1}}{{\Omega {\,_1}}}}} \end{aligned}$$
$$\begin{aligned} {J_6}&= {} \Pr \left( {{{\left| {{h_0}} \right| }^2}< \psi _{I}{} {} {\tau _1},{{\left| {{h_1}} \right| }^2} < {\tau _1}} \right) = \int _0^{{\tau _1}} {\int _0^{\psi _{I}{} {\tau _1}} {{f_{{{\left| {h{{}_1}} \right| }^2}}}\left( x \right) } } {f_{{{\left| {{h_{{} 0}}} \right| }^2}}}\left( y \right) dxdy\;\;\;{} {} {} {} {} {}\nonumber \\&= {} \left( {1 - {e^{ - \frac{{\psi _{I}{} {\tau _1}}}{{{\Omega _0}}}}}} \right) \left( {1 - {e^{ - \frac{{{\tau _1}}}{{{\Omega _1}}}}}} \right) \end{aligned}$$
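The direct-link outage in (24)-(28) involves one non-elementary double integral (\(J_4\)); a straightforward way to evaluate it is numerical quadrature. The Python sketch below (an illustration under assumed parameter values, with \(\psi_I = 1-\beta\) and \(\psi_E = \beta\eta\)) reproduces \(P_{D_2,dir} = J_4 J_5 + J_6\) using scipy.integrate.dblquad; the semi-infinite range of \(|h_1|^2\) is truncated at a large value.

import numpy as np
from scipy.integrate import dblquad

alpha1, alpha2, R2 = 0.2, 0.8, 1.0
beta, eta, rho = 0.7, 0.8, 10 ** (20 / 10)
psi_I, psi_E = 1.0 - beta, beta * eta
Omega0, Omega1, Omega2 = 0.5, 1.0, 1.0
g2 = 2 ** (2 * R2) - 1
tau1 = g2 / (rho * psi_I * (alpha2 - alpha1 * g2))

def j4_integrand(y, x):
    # Integrand of J4 in Eq. (26); x plays the role of |h1|^2 and y of |h0|^2.
    arg = g2 / (x * psi_E * rho * Omega2) - y * alpha2 / (
        x * psi_E * Omega2 * (y * alpha1 * rho + 1))
    arg = max(arg, 0.0)                      # the CDF of |h2|^2 is zero for negative arguments
    return (1.0 - np.exp(-arg)) * np.exp(-x / Omega1 - y / Omega0) / (Omega0 * Omega1)

J4, _ = dblquad(j4_integrand, 1e-9, 50.0, 0.0, psi_I * tau1)   # x in (0, 50] ~ (0, inf)
J5 = np.exp(-tau1 / Omega1)                                    # Eq. (27)
J6 = (1 - np.exp(-psi_I * tau1 / Omega0)) * (1 - np.exp(-tau1 / Omega1))  # Eq. (28)
print(J4 * J5 + J6)                                            # Eq. (24)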
Throughput for DLT mode
User relaying without direct link
With a fixed target rate R, the amount of information delivered by the source is limited by the outage probability over the wireless fading channels. Therefore, the throughput of the system is determined by
$$\begin{aligned} \tau _{t,nodir} = \left( {1 - P_{{D_1}}} \right) {R_1} + \left( {1 - P_{{D_2},nodir}} \right) {R_2}, \end{aligned}$$
where \(P_{D_1}\) and \(P_{{D_2},nodir}\) can be achieved from (15) and (19), respectively.
User relaying with direct link
The throughput of the system is given by
$$\begin{aligned} \tau _{t,dir} = \left( {1 - P_{{D_1}}} \right) {R_1} + \left( {1 - P_{{D_2},dir}} \right) {R_2}, \end{aligned}$$
where \(P_{{D_1}}\) and \(P_{{D_2},dir}\) can be achieved from (15) and (25), respectively.
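The two DLT throughput expressions above reduce to a simple weighted sum of the target rates; the sketch below only illustrates this bookkeeping, with placeholder outage values that are assumptions rather than computed results.

# Minimal sketch of the DLT-mode throughput: each user's target rate weighted by
# its success probability. P_D1, P_D2_nodir and P_D2_dir would come from the
# closed forms / integrals above; the numbers here are placeholders (assumed).
R1, R2 = 1.0, 1.0
P_D1, P_D2_nodir, P_D2_dir = 0.10, 0.25, 0.12

tau_t_nodir = (1 - P_D1) * R1 + (1 - P_D2_nodir) * R2   # without direct link
tau_t_dir = (1 - P_D1) * R1 + (1 - P_D2_dir) * R2       # with direct link
print(tau_t_nodir, tau_t_dir)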
Ergodic rate for DTT mode
Ergodic rate at \(D_{1}\)
The achievable rate at \(D_{1}\) where \(D_{1}\) can detect \(x_{2}\) is given by
$$\begin{aligned} {R_{{D_1}}} = \frac{1}{2}{\log _2}\left( {1 + {\gamma _{{D_1}}}} \right) . \end{aligned}$$
The ergodic rate at \(D_{1}\) is determined by
$$\begin{aligned} R_{{D_1}} = \frac{{ - \exp \left( {\frac{1}{{{\psi _I}{\alpha _1}\rho {\Omega _1}}}} \right) }}{{2\ln 2}}Ei\left( {\frac{{ - 1}}{{{\psi _I}{\alpha _1}\rho {\Omega _1}}}} \right) , \end{aligned}$$
where Ei(.) indicates the exponential integral function [55, Eq.(3.354.4)].
See Appendix 1. \(\square\)
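Equation (32) can be checked directly against a Monte Carlo average of \(\frac{1}{2}\log_2(1+\psi_I|h_1|^2\alpha_1\rho)\); the Python sketch below does so using the exponential integral Ei from SciPy. The parameter values and \(\psi_I = 1-\beta\) are assumptions for illustration only.

import numpy as np
from scipy.special import expi

def ergodic_rate_D1(rho, alpha1=0.2, beta=0.7, Omega1=1.0):
    # Closed-form ergodic rate at D1, Eq. (32).
    psi_I = 1.0 - beta
    a = 1.0 / (psi_I * alpha1 * rho * Omega1)
    return -np.exp(a) * expi(-a) / (2 * np.log(2))

rng = np.random.default_rng(0)
rho, alpha1, beta, Omega1 = 10 ** (20 / 10), 0.2, 0.7, 1.0
h1_sq = rng.exponential(Omega1, 200_000)
mc = np.mean(0.5 * np.log2(1 + (1 - beta) * h1_sq * alpha1 * rho))
print(ergodic_rate_D1(rho), mc)    # the two values should agree closely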
Ergodic rate at \(D_{2}\) for User Relaying Without Direct Link
Since \(x_2\) needs to be detected at both \(D_1\) and \(D_2\), the achievable rate at \(D_{2}\) is given by
$$\begin{aligned} {R_{{D_2},nodir}}{} = {} \frac{1}{2}{\log _2}\left( {1 + \min \left( {{\gamma _{2,{D_1}}},{\gamma _{2,{D_2}}}} \right) } \right) . \end{aligned}$$
The ergodic rate at \(D_{2}\) is given by
$$\begin{aligned} R_{{D_2},nodir} = \frac{1}{{2\ln 2}}\int \limits _0^{\frac{{{\alpha _2}}}{{{\alpha _1}}}} {\left[ {\frac{{{e^{ - \frac{x}{{{\psi _I}\rho \left( {{\alpha _2} - {\alpha _1}x} \right) {\Omega _1}}}}}}}{{1 + x}}} \right. } \left. {+ \frac{{\int _{\frac{x}{{{\psi _I}\rho \left( {{\alpha _2} - {\alpha _1}x} \right) }}}^\infty {\frac{1}{{{\Omega _1}}}\left( {1 - {e^{ - \frac{x}{{y\rho {\psi _E}{\Omega _2}}}}}} \right) {e^{ - \frac{y}{{{\Omega _1}}}}}dy} }}{{1 + x}}} \right] dx. \end{aligned}$$
See Appendix 2 . \(\square\)
The asymptotic ergodic rate at \(D_{2}\) in the high-SNR region \(\rho \rightarrow \infty\) is obtained by
$$\begin{aligned} R_{{D_2},nodir}^{\infty }=\frac{1}{{2\ln 2}}\int \limits _0^\infty {\frac{{1 - {F_X}(x)}}{{1 + x}}dx}. \end{aligned}$$
From the analytical result in (35), this expression can be evaluated as
$$\begin{aligned} R_{{D_2},nodir}^{\infty }\!=\!\frac{1}{{2\ln 2}}\!\int \limits _0^{\frac{{{\alpha _2}}}{{{\alpha _1}}}} {\frac{{\!2\sqrt{\!\frac{x}{{{\psi _E}\rho \,{\Omega _1}{\Omega _2}}}} \,{K_1}\left( {\!2\sqrt{\!\frac{x}{{{\psi _E}\rho \,{\Omega _1}{\Omega _2}}}} } \right) }}{{1 + x}}dx}. \end{aligned}$$
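The single integral in (36) is well behaved and can be evaluated with standard quadrature; the sketch below (illustrative parameter values, \(\psi_E = \beta\eta\)) computes it with SciPy.

import numpy as np
from scipy.special import kv
from scipy.integrate import quad

alpha1, alpha2 = 0.2, 0.8
beta, eta = 0.7, 0.8
psi_E = beta * eta
rho, Omega1, Omega2 = 10 ** (30 / 10), 1.0, 1.0

def integrand(x):
    # Integrand of Eq. (36); z*K1(z) -> 1 as z -> 0, so the integrand stays finite.
    z = 2.0 * np.sqrt(x / (psi_E * rho * Omega1 * Omega2))
    return z * kv(1, z) / (1.0 + x)

val, _ = quad(integrand, 0.0, alpha2 / alpha1)
print(val / (2 * np.log(2)))       # asymptotic ergodic rate of D2, bits/s/Hz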
Ergodic rate at \(D_{2}\) for user relaying with direct link
$$\begin{aligned} {R_{{D_2},dir}} = E\left[ {\frac{1}{2}{{\log }_2}\left( {1 + \min \left( {{\gamma _{2,{D_1}}},\gamma _{{D_2}}^{MRC}} \right) } \right) } \right] . \end{aligned}$$
From (37), the ergodic rate at \(D_{2}\) can be computed by (see(38))
$$\begin{aligned} R_{{D_2},dir}\!=\!\frac{1}{{2\ln 2}}\int \limits _0^{\frac{{{\alpha _2}}}{{{\alpha _1}}}} {\left[ {\frac{{{e^{\!-\!\frac{x}{{\psi _{I}\rho \left( {{\alpha _2} - {\alpha _1}x} \right) {\Omega _1}}}}}}}{{1\!+\!x}}} \right. } \left. {\!-\!\frac{{\int _0^\infty {\int _{\frac{x}{{\psi _{I}\rho \left( {{\alpha _2}\!-\!{\alpha _1}x} \right) }}}^\infty {\frac{1}{{{\Omega _1}{\Omega _0}}}\left( {1\!-\!{e^{\!-\!\frac{{x\left( {y{\alpha _1}\rho \!+\!1} \right) \!+\!y{\alpha _2}\rho }}{{z\rho \psi _{E}{\Omega _2}\left( {y{\alpha _1}\rho \!+\!1} \right) }}}}} \right) {e^{\!-\!\frac{y}{{{\Omega _0}}}\!-\!\frac{z}{{{\Omega _1}}}}}dydz} } }}{{1\!+\!x}}} \right] dx. \end{aligned}$$
The asymptotic ergodic rate at \(D_{2}\) in the high-SNR region \(\rho \rightarrow \infty\) is given by
$$\begin{aligned} R_{{D_2},dir}^{\infty } = \frac{1}{{2\ln 2}}\int \limits _0^\infty {\frac{{1 - {F_X}\left( x \right) }}{{1 + x}}dx} \end{aligned}$$
From (39), this expression can be evaluated as
$$\begin{aligned} R_{{D_2},dir}^{\infty } = \frac{1}{{2\ln 2}}\int \limits _0^{\frac{{{\alpha _2}}}{{{\alpha _1}}}} {\frac{{2\sqrt{\frac{x}{{{\psi _E}\rho \,{\Omega _1}{\Omega _2}}}} \,\!{K_1}\!\left( {2\sqrt{\frac{x}{{{\psi _E}\rho \,{\Omega _1}{\Omega _2}}}} } \right) }}{{1 + x}}dx} \end{aligned}$$
Ergodic rate of the system for user relaying without direct link
The ergodic rate of the system is determined by
$$\begin{aligned} \tau _{r,nodir} = R_{{D_1}} + R_{{D_2},nodir}, \end{aligned}$$
where \(R_{{D_1}}\) and \(R_{{D_2},nodir}\) can be obtained from (32) and (34), respectively.
Ergodic rate of the system for user relaying with direct link
The ergodic rate of the system is thus expressed by
$$\begin{aligned} \tau _{r,dir} = R_{{D_1}} + R_{{D_2},dir}, \end{aligned}$$
where \(R_{{D_1}}\) and \(R_{{D_2},dir}\) can be obtained from (32) and (38), respectively.
The EE is defined as the ratio of the total data rate to the total power consumed in the entire network, i.e., \(\mathrm{{EE}} \buildrel \Delta \over = \frac{R}{{{P_S} + {P_r}}}\). The energy efficiency of the user-relaying system can thus be given as
$$\begin{aligned} E{E_\phi } = \frac{{2\tau _\phi ^{HD}}}{{\rho \left( {1 + {\psi _E}{\Omega _1}} \right) }}, \end{aligned}$$
where \(\phi \in \left\{ {t,r} \right\}\) denotes the system energy efficiency in the DLT mode and the DTT mode, respectively.
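The EE expression above is a one-line computation once the DLT throughput or DTT ergodic rate is known, since the average relay power under the PSR protocol is \(E[P_r]=\psi_E\Omega_1\rho\). The sketch below uses assumed placeholder values for illustration.

# Minimal sketch of the energy-efficiency metric: total rate over total consumed
# power P_S + E[P_r]. The throughput value tau is a placeholder assumption.
beta, eta, Omega1 = 0.7, 0.8, 1.0
psi_E = beta * eta
rho = 10 ** (0 / 10)        # 0 dB normalized source power
tau = 1.5                   # DLT throughput or DTT ergodic rate, bits/s/Hz (assumed)

EE = 2 * tau / (rho * (1 + psi_E * Omega1))
print(EE)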
Simulation parameters and scenarios
Table 2 Simulation Parameters
In this section, Monte Carlo simulations are performed in Matlab to verify the derived analytical results. The simulation parameters of the system model are listed in Table 2. In addition, the conventional OMA scheme is used as a counterpart for comparison; its operation is as follows. During the first phase of the time block, the information \(x_{1}\) is transmitted to \(D_{1}\) by S. During the second phase, the information \(x_{2}\) is sent to \(D_{1}\) by S. Finally, during the third phase, \(D_{1}\) decodes \(x_{2}\) and forwards it to \(D_{2}\).
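Although the original simulations were written in Matlab, the same verification is easy to reproduce; the following Python sketch compares the simulated outage probability of \(D_1\) with the closed form of Theorem 1 over a range of SNRs. All parameter values are illustrative assumptions, not necessarily those of Table 2.

import numpy as np

rng = np.random.default_rng(2)
alpha1, alpha2, R1, R2 = 0.2, 0.8, 1.0, 1.0
beta, Omega1 = 0.7, 1.0
psi_I = 1.0 - beta
g1, g2 = 2 ** (2 * R1) - 1, 2 ** (2 * R2) - 1

for snr_db in (0, 10, 20, 30):
    rho = 10 ** (snr_db / 10)
    h1_sq = rng.exponential(Omega1, 500_000)
    gam_2 = psi_I * h1_sq * alpha2 * rho / (psi_I * h1_sq * alpha1 * rho + 1)  # Eq. (4)
    gam_1 = psi_I * h1_sq * alpha1 * rho                                        # Eq. (5)
    sim = np.mean(~((gam_2 > g2) & (gam_1 > g1)))                               # Eq. (12)
    tau1 = g2 / (rho * psi_I * (alpha2 - alpha1 * g2))
    nu1 = g1 / (alpha1 * psi_I * rho)
    ana = 1.0 - np.exp(-max(tau1, nu1) / Omega1)                                # Theorem 1
    print(snr_db, round(sim, 4), round(ana, 4))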
Outage probability versus SNR and \(\beta\)
Outage probability of two users versus transmitting SNR in cases of no direct link and direct link
Figure 3 describes the relation between the outage probability and the SNR, from −10 to 40 dB, for \(\beta =0.7\). At low SNR, User 1 obtains a higher outage probability than User 2. As the SNR increases, the outage probability of both users decreases approximately linearly on the logarithmic scale, and the gap between the two curves widens, so that User 2 has a higher outage probability than User 1 in the high-SNR regime. Compared with OMA in the high-SNR regime, the outage probability of User 1 for C-NOMA without a direct link is lower; equivalently, the outage probability of User 1 for OMA is higher than that for C-NOMA at high SNR. Moreover, User 2 with C-NOMA achieves a lower outage probability than with OMA in both the no-direct-link and direct-link cases. Comparing the direct-link and no-direct-link cases, the C-NOMA scheme with a direct link achieves a lower outage probability than the C-NOMA scheme without a direct link; the corresponding curves are the red-marked and blue lines in the figure. Clearly, the outage probability for C-NOMA with a direct link is the lowest compared with both C-NOMA without a direct link and OMA. The reason is that, when a direct link exists between the BS and User 2, the information received at User 2 comprises both the signal sent by the BS and the signal forwarded by the relaying User 1, so the fraction of dropped packets at User 2 is lower than in the no-direct-link case. These features can also be explained from Eqs. (13), (14), (16), (18), (19), (23), (24) and (25). Hence, the C-NOMA scheme with a direct link has a lower outage probability than C-NOMA without a direct link and than OMA.
Outage probability of two users versus \(\beta\) in case of no direct link and direct link
Similarly, Fig. 4 shows the outage probability versus \(\beta\), where \(\beta\) ranges from 0 to 1. It is observed that the outage probability of User 2 for the C-NOMA scheme with a direct link is the lowest among all the curves, including those of C-NOMA without a direct link and OMA for both users. Comparing the outage probability of User 1 for C-NOMA and OMA, the C-NOMA curve is always below the OMA curve, so User 1 performs better with C-NOMA than with OMA. Moreover, the outage probability with and without a direct link differs for each user. This can be seen from the red-marked and green-marked curves in the figure, which correspond to OMA with and without a direct link: the outage probability of OMA with a direct link is much lower than that of OMA without one. Besides, the outage probability of User 2 for OMA with a direct link is lower than that for C-NOMA without a direct link, which means that OMA with a direct link is better than C-NOMA without one. However, the outage probability of User 2 for C-NOMA with a direct link is lower than that for OMA with a direct link. Therefore, we can conclude that C-NOMA with a direct link outperforms both C-NOMA without a direct link and OMA. These observations can be explained in the same way as for Fig. 3, based on Eqs. (12), (13), (16), (18), (19), (23), (24) and (25). In addition, when \(\beta\) lies between 0.1 and 0.65, the outage probability of User 2 for C-NOMA with a direct link is considerably lower than the others; it decreases gradually for \(\beta\) from 0.65 to approximately 0.9 and increases gradually for \(\beta\) from 0 to 0.1. As a result, the gap between the curve of User 2 for C-NOMA with a direct link and the other curves is largest for \(\beta\) between 0.1 and 0.65, so \(\beta\) can be chosen in this range to obtain a better outage probability for the system model.
Throughput and ergodic versus SNR and \(\beta\)
The throughput of two users versus \(\beta\) in cases of no direct link and direct link
Figure 5 describes the throughput versus \(\beta\), for \(\beta\) from 0 to 1. The figure shows that the throughput of User 1 for C-NOMA is considerably higher than that for OMA. However, this throughput decreases quickly as \(\beta\) approaches 1: as the power-splitting ratio \(\beta\) increases, less received power is left for information processing at User 1, so its throughput drops. Furthermore, the throughput of User 2 for C-NOMA with a direct link is the highest among the User 2 curves for C-NOMA and OMA. The blue-marked line of User 2 for C-NOMA with a direct link is higher and approximately constant compared with the curves for C-NOMA without a direct link and for OMA (the green-marked and magenta-marked lines, respectively). Besides, the throughput of User 2 in these three cases is almost constant for \(\beta\) in the range of about 0.1 to 0.8, which means that a suitable value of \(\beta\) can be chosen to satisfy the trade-off between the throughputs of User 1 and User 2 for the system model. Additionally, the figure shows that the throughput of User 2 for C-NOMA without a direct link is lower than that for C-NOMA with a direct link. Thus, the C-NOMA scheme with a direct link is superior to C-NOMA without a direct link and to OMA.
The ergodic rate of two users versus \(\beta\) in cases of no direct link and direct link
Figure 6 plots the ergodic rate as a function of \(\beta\), for \(\beta\) from 0 to 1. The figure shows that the ergodic rate of User 1 for C-NOMA is considerably higher than that for OMA; however, these curves degrade quickly as \(\beta\) approaches 1, for the same reason as in Fig. 5, which can also be explained from Eqs. (32), (34), (36) and (38). Besides, the ergodic rate of User 2 for C-NOMA with a direct link is the highest among the User 2 curves for C-NOMA and OMA; specifically, it is about four times higher than that for OMA. Furthermore, the ergodic rate of User 2 is almost constant for \(\beta\) in the range of about 0.1 to 0.9, so a suitable value of \(\beta\) can be chosen to satisfy the trade-off between the ergodic rates of User 1 and User 2 for the system model. From Eqs. (32), (34), (36) and (38), the ergodic rate of User 2 depends on \(\beta\) less strongly than that of User 1. Thus, the C-NOMA scheme with a direct link outperforms C-NOMA without a direct link and OMA.
Energy efficiency of two users for the PSR protocol in cases of without direct link and direct link
Figure 7 illustrates the energy efficiency as a function of the SNR from \(-10\) to 40 dB. The EE of C-NOMA with a direct link is much higher than that of C-NOMA without a direct link and of OMA. In particular, in the SNR range from \(-10\) to 5 dB the energy efficiency is large but decreases approximately linearly as the SNR increases. For SNR from 5 to 40 dB, the energy efficiency of C-NOMA with and without a direct link and of OMA asymptotically decreases towards 0. Therefore, the C-NOMA scheme with a direct link provides the best EE among the compared schemes in the low-SNR regime.
The simulation results are therefore in accordance with the theoretical calculations.
In this paper, an EH scheme for SWIPT based C-NOMA has been proposed and analyzed. Closed-form expressions of the performance metrics were derived. The analytical results showed that C-NOMA with a direct link obtains a lower outage probability than OMA and than C-NOMA without a direct link. The numerical results also showed that C-NOMA with a direct link provides higher throughput and ergodic rate than OMA. Besides, C-NOMA with a direct link achieves a higher EE than C-NOMA without a direct link and OMA in the low-SNR region. The proposed system model can be applied to C-NOMA based wireless sensor networks, where relaying sensor nodes harvest RF energy from BSs to maintain their operation and to forward information to other sensor nodes. Furthermore, the system can be extended by using multiple antennas, or by combining relay selection with multiple antennas, to further enhance its performance.
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
Abbreviations
SWIPT:
Simultaneous wireless information and power transfer
C-NOMA:
Cooperative non-orthogonal multiple access
EH:
Energy harvesting
NU:
Near user
FU:
Far user
OP:
Outage probability
EE:
Energy efficiency
RF:
Radio frequency
PSR:
Power-splitting relaying
DF:
Decode-and-forward
SC:
Superposition coding
SIC:
Successive interference cancellation
OMA:
Orthogonal multiple access
TSR:
Time-switching relaying
DLT:
Delay-limited transmission
DTT:
Delay-tolerant transmission
MRC:
Maximum ratio combining
D. Wan, M. Wen, F. Ji, Yu. Hua, F. Chen, Non-orthogonal multiple access for cooperative communications: Challenges, opportunities, and trends. IEEE Wirel. Commun. 25(2), 109–117 (2018)
M. Liaqat, K.A. Noordin, T.A. Latef, K. Dimyati, Power-domain non orthogonal multiple access (PD-NOMA) in cooperative networks: an overview. Wirel. Netw. 26(1), 181–203 (2020)
H.Q. Tran, P.Q. Truong, C.V. Phan, Q.-T. Vien, On the energy efficiency of NOMA for wireless backhaul in multi-tier heterogeneous CRAN, in Proceedings SigTelCom, pp. 229–234 (2017)
Z. Ding, X. Lei, G.K. Karagiannidis, R. Schober, J. Yuan, V.K. Bhargava, A survey on non-orthogonal multiple access for 5G networks: Research challenges and future trends. IEEE J. Sel. Areas Commun. 35(10), 2181–2195 (2017)
H.Q. Tran, T.T. Nguyen, C.V. Phan, Q.T. Vien, Power-splitting relaying protocol for wireless energy harvesting and information processing in NOMA systems. IET Commun. 13(14), 2132–2140 (2019)
H.Q. Tran, C.V. Phan, Q.T. Vien, Power splitting versus time switching based cooperative relaying protocols for SWIPT in NOMA systems. Phys. Commun. 41, (2020)
H. Huang, M. Zhu, Energy efficiency maximization design for full-duplex cooperative NOMA systems with SWIPT. IEEE Access 7, 20442–20451 (2019)
Z. Ding, M. Peng, H.V. Poor, Cooperative non-orthogonal multiple access in 5G systems. IEEE Commun. Lett. 19(8), 1462–1465 (2015)
Y. Liu, T. Wu, X. Deng, X. Zhang, F. Gao, G. Wang, Outage performance analysis for SWIPT-based cooperative non-orthogonal multiple access systems. IEEE Commun. Lett. 23(9), 1501–1505 (2019)
Y. Guo, Y. Li, Y. Li, W. Cheng, H. Zhang, SWIPT assisted NOMA for coordinated direct and relay transmission, in 2018 IEEE/CIC International Conference on Communications in China (ICCC), pp. 111–115 (2018)
X. Li, J., Li, Y. Liu, Z. Ding, A. Nallanathan, Outage performance of cooperative NOMA networks with hardware impairments, in 2018 IEEE Global Communications Conference (GLOBECOM), pp. 1–6 (2018)
D. Wan, M. Wen, F. Ji, Y. Liu, Yu. Huang, Cooperative NOMA systems with partial channel state information over Nakagami-m fading channels. IEEE Trans. Commun. 66(3), 947–958 (2018)
N.T. Do, D.B. da Costa, T.Q. Duong, B. An, Transmit antenna selection schemes for MISO-NOMA cooperative downlink transmissions with hybrid SWIPT protocol, in 2017 IEEE International Conference on Communications (ICC), pp. 1–6 (2017)
A.A. Nasir, Z. Xiangyun, D. Salman, A.K. Rodney, Relaying protocols for wireless energy harvesting and information processing. IEEE Trans. Wirel. Commun. 12(7), 3622–3636 (2013)
S.K. Zaidi, S.F. Hasan, X. Gui, Time switching based relaying for coordinated transmission using NOMA, in 2018 Eleventh International Conference on Mobile Computing and Ubiquitous Network (ICMU), pp. 1–5 (2018)
T.N. Do, B. An, Optimal sum-throughput analysis for downlink cooperative SWIPT NOMA systems, in 2018 2nd International Conference on Recent Advances in Signal Processing, Telecommunications and Computing (SigTelCom), pp. 85–90 (2018)
M. Kader, M.B. Uddin, A. Islam, S.Y. Shin, Cooperative non-orthogonal multiple access with SWIPT over Nakagami-m fading channels. Trans. Emerg. Telecommun. Technol. 30(5), (2019)
N. Jain, V.A. Bohara, Energy harvesting and spectrum sharing protocol for wireless sensor networks. IEEE Wirel. Commun. Lett. 4(6), 697–700 (2015)
Y. Liu, H. Ding, J. Shen, R. Xiao, H. Yang, Outage performance analysis for SWIPT-based cooperative non-orthogonal multiple access systems. IEEE Commun. Lett. 23(9), 1501–1505 (2019)
Y. Ye, Y. Li, D. Wang, G. Lu, Power splitting protocol design for the cooperative NOMA with SWIPT, in 2017 IEEE International Conference on Communications (ICC), pp. 1–5 (2017)
Y. Liu, Z. Ding, M. Elkacshlan, H.V. Poor, Cooperative non orthogonal multiple access with simultaneous wireless information and power transfer. IEEE J. Sel. Areas Commun. 34(4), 938–953 (2016)
A.A. Al-habob, A.M. Salhab, S.A. Zummo, A novel time-switching relaying protocol for multi-user relay networks with SWIPT. Arab. J. Sci. Eng. 44(3), 2253–2263 (2019)
N.T. Do, D. Costa, T.Q. Duong, B. An, A BNBF user selection scheme for NOMA-based cooperative relaying systems with SWIPT. IEEE Commun. Lett. 21(3), 664–667 (2017)
Y. Xu, G. Wang, L. Zheng, S. Jia, Performance of NOMA-based coordinated direct and relay transmission using dynamic scheme. IET Commun. 12(18), 2231–2242 (2018)
X. Yue, Y. Liu, S. Kang, A. Nallanathan, Z. Ding, Exploiting full/half-duplex user relaying in NOMA systems. IEEE Trans. Commun. 66(2), 560–575 (2017)
J. Zhang, X. Tao, W. Huici, X. Zhang, Performance analysis of user pairing in cooperative NOMA networks. IEEE Access 6, 74288–74302 (2018)
W. Duan, J. Ju, J. Hou, Q. Sun, X.-Q. Jiang, G. Zhang, Effective resource utilization schemes for decode-and-forward relay networks with NOMA. IEEE Access 7, 51466–51474 (2019)
C. Da, D. Benevides, M.D. Yacoub, Outage performance of two hop AF relaying systems with co-channel interferers over Nakagami-m fading. IEEE Commun. Lett. 15(9), 980–982 (2011)
Y. Zhou, V. W. Wong, R. Schober, Performance analysis of cooperative NOMA with dynamic decode-and-forward relaying, in IEEE Global Communications Conference, pp. 1–6 (2017)
Z. Zhang, Z. Ma, M. Xiao, Z. Ding, P. Fan, Full-duplex device-to-device-aided cooperative nonorthogonal multiple access. IEEE Trans. Veh. Technol. 66(5), 4467–4471 (2016)
D. Deng, Yu. Minghui, J. Xia, Z. Na, J. Zhao, Q. Yang, Wireless powered cooperative communications with direct links over correlated channels. Phys. Commun. 28, 147–153 (2018)
H. Liu, Z. Ding, K.J. Kim, K.S. Kwak, H. Vincent Poor, Decode-and-forward relaying for cooperative NOMA systems with direct links. IEEE Trans. Wirel. Commun. 17(12), 8077–8093 (2018)
X. Yue, Y. Liu, S. Kang, A. Nallanathan, Z. Ding, Outage performance of full/half-duplex user relaying in NOMA systems, in 2017 IEEE International Conference on Communications (ICC), pp. 1–6 (2017)
W. Duan, J. Ju, Q. Sun, Y. Ji, Z. Wang, J. Choi, G. Zhang, Capacity enhanced cooperative D2D systems over Rayleigh fading channels with NOMA. arXiv preprint arXiv:1810.06837 (2018)
G. Li, D. Mishra, H. Jiang, Cooperative NOMA with incremental relaying: performance analysis and optimization. IEEE Trans. Veh. Technol. 67(11), 11291–11295 (2018)
Y. Ye, Y. Li, F. Zhou, N. Al-Dhahir, H. Zhang, Power splitting-based swipt with dual-hop DF relaying in the presence of a direct link. IEEE Syst. J. 99, 1–5 (2018)
L. Dai, B. Gui, L. J. Cimini, Selective relaying in OFDM multihop cooperative networks, in IEEE Wireless Communications and Networking Conference, pp. 963–968 (2007)
S. Gautam, E. Lagunas, S. Chatzinotas, B. Ottersten, Relay selection and resource allocation for SWIPT in multi-user OFDMA systems. IEEE Trans. Wirel. Commun. 18(5), 2493–2508 (2019)
T.D.P. Perera, D.N.K. Jayakody, S.K. Sharma, S. Chatzinotas, J. Li, Simultaneous wireless information and power transfer (SWIPT): recent advances and future challenges. IEEE Commun. Surv. Tutor. 20(1), 264–302 (2017)
P. Xu, Y. Yuan, Z. Ding, X. Dai, R. Schober, On the outage performance of non-orthogonal multiple access with 1-bit feedback. IEEE Trans. Wirel. Commun. 15(10), 6716–6730 (2016)
M. Basharat, M. Naeem, W. Ejaz, A.M. Khattak, A. Anpalagan, O. Alfandi, H.S. Kim, Non-orthogonal radio resource management for RF energy harvested 5G networks. IEEE Access 7, 46550–46561 (2019)
D. Pradhan, K.C. Priyanka, RF-energy harvesting (RF-EH) for sustainable ultra dense green network (SUDGN) in 5G green communication (2020)
S. Kusaladharma, C. Tellambura, Energy harvesting aided by random motion; a stochastic geometry based approach. IEEE Trans. Green Commun. Netw. (2020)
R.A. Abd-Alhameed, I. Elfergani, J. Rodriguez, Recent technical developments in energy-efficient 5G mobile cells: present and future (2020)
B.C. Nguyen, T.M. Hoang, X.N. Pham, P.T. Tran, Performance analysis of energy harvesting-based full-duplex decode-and-forward vehicle-to-vehicle relay networks with nonorthogonal multiple access. Wirel. Commun. Mobile Comput. (2019)
I. Budhiraja, N. Kumar, S. Tyagi, S. Tanwar, M. Guizani, SWIPT-enabled D2D communication underlaying NOMA-based cellular networks in imperfect CSI. IEEE Trans. Veh. Technol. (2021)
C. Guo, L. Zhao, C. Feng, Z. Ding, H. Chen, Energy harvesting enabled NOMA systems with full-duplex relaying. IEEE Trans. Veh. Technol. 68(7), 7179–7183 (2019)
E. Boshkovska, D.W.K. Ng, N. Zlatanov, R. Schober, Practical non-linear energy harvesting model and resource allocation for SWIPT systems. IEEE Commun. Lett. 19(12), 2082–2085 (2015)
B. Clerckx, R. Zhang, R. Schober, D.W.K. Ng, D.I. Kim, H.V. Poor, Fundamentals of wireless information and power transfer: from RF energy harvester models to signal and system designs. IEEE J. Sel. Areas Commun. 37(1), 4–33 (2019)
J.M. Kang, I.M. Kim, D.I. Kim, Joint Tx power allocation and Rx power splitting for SWIPT system with multiple nonlinear energy harvesting circuit. IEEE Wirel. Commun. Lett. 8(1), 53–56 (2019)
G. Ma, J. Xu, Y. Zeng, M. Moghadam, A generic receiver architecture for MIMO wireless power transfer with non-linear energy harvesting. IEEE Signal Proc. Lett. 26(2), 312–316 (2019)
Q. Sun, S. Han, I. Chin-Lin, Z. Pan, On the ergodic capacity of MIMO NOMA systems. IEEE Wirel. Commun. Lett. 4(4), 405–408 (2015)
F. Fang, H. Zhang, J. Cheng, S. Roy, V.C. Leung, Joint user scheduling and power allocation optimization for energy-efficient NOMA systems with imperfect CSI. IEEE J. Sel. Areas Commun. 35(12), 2874–2885 (2017)
I.S. Gradshteyn, I.M. Ryzhik, Table of integrals, series and products, 7th edn. (Academic, San Diego, 2007)
This study was self-funded by the authors.
Ho Chi Minh City University of Technology and Education, 01 Vo Van Ngan, 700000, Ho Chi Minh City, Vietnam
Huu Q. Tran & Ca V. Phan
Industrial University of Ho Chi Minh City, 12 Nguyen Van Bao, 700000, Ho Chi Minh City, Vietnam
Huu Q. Tran
Middlesex University, The Burroughs, London NW4 4BT, United Kingdom
Quoc-Tuan Vien
Ca V. Phan
HQT: Conceptualization, methodology, software, formal analysis, investigation. HQT: Data curation, Writing-Original draft preparation. HQT, CVP and Q-TV validation, resources. HQT, CVP and Q-TV: Writing-Reviewing and Editing. CVP and Q-TV supervision. All authors read and approved the final manuscript.
Correspondence to Huu Q. Tran.
Appendix 1: Proof of (32)
In this appendix, the proof of (32) is presented. To achieve this closed-form expression, the ergodic rate of \(D_{1}\) for HD NOMA can be expressed as
$$\begin{aligned} R_{{D_1}}&= \frac{1}{2}\mathrm{E}\left[ {{{\log }_2}\left( {1 + {\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _1}\rho } \right) } \right] \nonumber \\&= \frac{1}{{2\ln 2}}\int _0^\infty {\frac{{1 - {F_X}\left( x \right) }}{{1 + x}}dx}, \end{aligned}$$
The cumulative distribution function (CDF) of X is computed as follows
$$\begin{aligned} {F_X}\left( x \right)&= \Pr \left( {{{\left| {{h_1}} \right| }^2} < \frac{x}{{{\psi _I}{\alpha _1}\rho }}} \right) \nonumber \\&= \int _0^{\frac{{x}}{{{\psi _I}{\alpha _1}\rho }}} {\frac{1}{{{\Omega _1}}}{e^{ - \frac{y}{{{\Omega _1}}}}}dy}\nonumber \\&= 1 - {e^{ - \frac{x}{{{\psi _I}{\alpha _1}\rho {\Omega _1}}}}}, \end{aligned}$$
Substituting the CDF above into the integral expression for \(R_{{D_1}}\), the ergodic rate of \({D_1}\) can be derived as
$$\begin{aligned} R_{{D_1}}&= \frac{1}{2}\frac{1}{{\ln 2}}\int _0^\infty {\frac{1}{{1 + x}}{e^{ - \frac{x}{{{\psi _I}{\alpha _1}\rho {\Omega _1}}}}}dx}\\ R_{{D_1}}&= \frac{{ - \exp \left( {\frac{1}{{{\psi _I}{\alpha _1}\rho {\Omega _1}}}} \right) }}{{2\ln 2}}Ei\left( {\frac{{ - 1}}{{{\psi _I}{\alpha _1}\rho {\Omega _1}}}} \right) \end{aligned}$$
We can derive (32). The proof is completed.
Appendix 2: Proof of (34)
In this appendix, the proof starts by giving the ergodic rate of \(D_{2}\) as follows
$$\begin{aligned} {R_{{D_2},nodir}}&= {} E\left[ {\frac{1}{2}{{\log }_2}\left( {1 + \underbrace{\min \left( {{\gamma _{2,{D_1}}},{\gamma _{2,{D_2}}}} \right) }_{{J_1}}} \right) } \right] \\ {J_1}&= {} \underbrace{\min \left( {\frac{{{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _2}\rho }}{{{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _1}\rho + 1}},{{\left| {{h_2}} \right| }^2}{{\left| {{h_1}} \right| }^2}{\psi _E}\rho } \right) }_X \end{aligned}$$
The CDF of X is calculated as follows (see(44))
$$\begin{aligned} {F_X}\left( x \right)&= {} \underbrace{\Pr \left( {\frac{{{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _2}\rho }}{{{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _1}\rho + 1}}< {{\left| {{h_2}} \right| }^2}{{\left| {{h_1}} \right| }^2}{\psi _E}\rho ,\frac{{{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _2}\rho }}{{{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _1}\rho + 1}}< x} \right) }_{{I_3}}\nonumber \\&+\underbrace{\Pr \left( {\frac{{{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _2}\rho }}{{{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _1}\rho + 1}} > {{\left| {{h_2}} \right| }^2}{{\left| {{h_1}} \right| }^2}{\psi _E}\rho ,{{\left| {{h_2}} \right| }^2}{{\left| {{h_1}} \right| }^2}{\psi _E}\rho < x} \right) }_{{I_4}} \end{aligned}$$
\(I_{3}\) and \(I_{4}\) are given by (see(45)) and (see(46))
$$\begin{aligned} {I_3} &= Pr \left( {\frac{{{\psi _I}{\alpha _2}}}{{\left( {{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _1}\rho + 1} \right) {\psi _E}}}< {{\left| {{h_2}} \right| }^2},{{\left| {{h_1}} \right| }^2} < \frac{x}{{{\psi _I}\rho \left( {{\alpha _2} - x{\alpha _1}} \right) }},\frac{{{\alpha _2}}}{{{\alpha _1}}} - x > 0} \right) \\ & = U\left( {\frac{{{\alpha _2}}}{{{\alpha _1}}} - x}\right) \times \int _0^{\frac{x}{{{\psi _I}\rho \left( {{\alpha _2} - {\alpha _1}x} \right) }}} {\int _{\frac{{{\psi _I}{\alpha _2}}}{{\left( {{\psi _I}y{\alpha _1}\rho + 1} \right) {\psi _E}}}}^\infty {{f_{{{\left| {{h_1}} \right| }^2}}}\left( y \right) {f_{{{\left| {{h_2}} \right| }^2}}}\left( z \right) dydz} }\\ &= U\left( {\frac{{{\alpha _2}}}{{{\alpha _1}}} - x} \right) \int \limits _0^{\frac{x}{{{\psi _I}\rho \left( {{\alpha _2} - {\alpha _1}x} \right) }}} {\exp \left( {\frac{{ - {\psi _I}{\alpha _2}}}{{\left( {{\psi _I}y{\alpha _1}\rho + 1} \right) {\psi _E}{\Omega _2}}}} \right) \frac{1}{{{\Omega _1}}}\exp \left( {\frac{{ - y}}{{{\Omega _1}}}} \right) dy}\\ &= U\left( {\frac{{{\alpha _2}}}{{{\alpha _1}}} - x} \right) \int _0^{\frac{x}{{{\psi _I}\rho \left( {{\alpha _2} - {\alpha _1}x} \right) }}} {\frac{1}{{{\Omega _1}}}{e^{ - \frac{{{\psi _I}{\alpha _2}}}{{\left( {{\psi _I}y{\alpha _1}\rho + 1} \right) {\psi _E}{\Omega _2}}} - \frac{y}{{{\Omega _1}}}}}dy} \end{aligned}$$
$$\begin{aligned} {I_4}&=\Pr \left( {\frac{{{{\left| {{h_1}} \right| }^2}{\psi _I}{\alpha _2}\rho }}{{{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _1}\rho +1}}>{{\left| {{h_2}} \right| }^2}{{\left| {{h_1}} \right| }^2}{\psi _E}\rho ,{{\left| {{h_2}} \right| }^2}{{\left| {{h_1}} \right| }^2}{\psi _E}\rho<x} \right) \\ & =\Pr \left( {{{\left| {{h_2}} \right| }^2}<\frac{{{\psi _I}{\alpha _2}}}{{\left( {{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _1}\rho +1} \right) {\psi _E}}},{{\left| {{h_2}} \right| }^2}<\frac{x}{{{{\left| {{h_1}} \right| }^2}{\psi _E}\rho }},\frac{{{\alpha _2}}}{{{\alpha _1}}}-x>0} \right) \\ &= U\left( {\frac{{{\alpha _2}}}{{{\alpha _1}}} - x} \right) \Pr \left( {{{\left| {{h_2}} \right| }^2}< \frac{{{\psi _I}{\alpha _2}}}{{\left( {{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _1}\rho + 1} \right) {\psi _E}}},{{\left| {{h_2}} \right| }^2}< \frac{x}{{{{\left| {{h_1}} \right| }^2}{\psi _E}\rho }}} \right) \\ {I_4} &= U\left( {\frac{{{\alpha _2}}}{{{\alpha _1}}} - x} \right) \times \left[ {_{\underbrace{\Pr \left( {{{\left| {{h_1}} \right| }^2}> \frac{x}{{{\psi _I}\rho \left( {{\alpha _2} - x{\alpha _1}} \right) }},{{\left| {{h_2}} \right| }^2}< \frac{x}{{{{\left| {{h_1}} \right| }^2}{\psi _E}\rho }}} \right) }_{{I_{42}}}}^{\underbrace{\Pr \left( {{{\left| {{h_1}} \right| }^2}< \frac{x}{{{\psi _I}\rho \left( {{\alpha _2} - x{\alpha _1}} \right) }},{{\left| {{h_2}} \right| }^2}< \frac{{{\psi _I}{\alpha _2}}}{{\left( {{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _1}\rho + 1} \right) {\psi _E}}}} \right) }_{{I_{41}}}}} \right. \\ {I_4} &= U\left( {\frac{{{\alpha _2}}}{{{\alpha _1}}} - x} \right) \times \left[ {\underbrace{\Pr \left( {{{\left| {{h_1}} \right| }^2}< \frac{x}{{{\psi _I}\rho \left( {{\alpha _2} - {\alpha _1}x} \right) }},{{\left| {{h_2}} \right| }^2}< \frac{{{\psi _I}{\alpha _2}}}{{\left( {{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _1}\rho + 1} \right) {\psi _E}}}} \right) }_{{I_{41}}}} \right. \\ &\qquad+\left. { \underbrace{\,\Pr \left( {{{\left| {{h_1}} \right| }^2} > \frac{x}{{{\psi _I}\rho \left( {{\alpha _2} - {\alpha _1}x} \right) }},{{\left| {{h_2}} \right| }^2} < \frac{x}{{{{\left| {{h_1}} \right| }^2}{\psi _E}\rho }}} \right) }_{{I_{42}}}} \right] \end{aligned}$$
where \({I_{41}}\) and \({I_{42}}\) are computed as in (47) and (48), respectively:
$$\begin{aligned} {I_{41}}&= {} \int _0^{\frac{x}{{{\psi _I}\rho \left( {{\alpha _2} - {\alpha _1}x} \right) }}} {\int _0^{\frac{{{\psi _I}{\alpha _2}}}{{\left( {{\psi _I}y{\alpha _1}\rho + 1} \right) {\psi _E}}}} {{f_{{{\left| {{h_1}} \right| }^2}}}\left( y \right) {f_{{{\left| {{h_2}} \right| }^2}}}\left( z \right) dz\,dy} } \nonumber \\&= {} \int _0^{\frac{x}{{{\psi _I}\rho \left( {{\alpha _2} - {\alpha _1}x} \right) }}} {\frac{1}{{{\Omega _1}}}\left( {1 - {e^{ - \frac{{{\psi _I}{\alpha _2}}}{{\left( {{\psi _I}y{\alpha _1}\rho + 1} \right) {\psi _E}{\Omega _2}}}}}} \right) {e^{ - \frac{y}{{{\Omega _1}}}}}dy}, \end{aligned}$$
$$\begin{aligned} {I_{42}} = \int _{\frac{x}{{{\psi _I}\rho \left( {{\alpha _2} - {\alpha _1}x} \right) }}}^\infty {\int _0^{\frac{x}{{y{\psi _E}\rho }}} {{f_{{{\left| {{h_1}} \right| }^2}}}\left( y \right) {f_{{{\left| {{h_2}} \right| }^2}}}\left( z \right) dz\,dy} } = \int _{\frac{x}{{{\psi _I}\rho \left( {{\alpha _2} - {\alpha _1}x} \right) }}}^\infty {\frac{1}{{{\Omega _1}}}\left( {1 - {e^{ - \frac{x}{{y\rho {\psi _E}{\Omega _2}}}}}} \right) {e^{ - \frac{y}{{{\Omega _1}}}}}dy}, \end{aligned}$$
where \(U(x)\) is the unit step function defined as
$$\begin{aligned} U\left( x \right) = \begin{cases} 0,\,\,&x < 0\\ 1,\,\,&x > 0 \end{cases} \end{aligned}$$
From (47) and (48), we have (46). Substituting (45) and (46) into (44), the CDF of X is given by (49).
$$\begin{aligned} {F_X}\left( x \right)&= {} U\left( {\frac{{{\alpha _2}}}{{{\alpha _1}}} - x} \right) \left[ {1 - {e^{ - \frac{x}{{{\psi _I}\rho \left( {{\alpha _2} - {\alpha _1}x} \right) {\Omega _1}}}}}} \right. \nonumber \\&\left. + \int _{\frac{x}{{{\psi _I}\rho \left( {{\alpha _2} - {\alpha _1}x} \right) }}}^\infty {\frac{1}{{{\Omega _1}}}\left( {1 - {e^{ - \frac{x}{{y\rho {\psi _E}{\Omega _2}}}}}} \right) {e^{ - \frac{y}{{{\Omega _1}}}}}dy} \right] , \end{aligned}$$
Substituting (49) into (33), we obtain (34).
The proof is completed.
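Before moving on, (49) can be sanity-checked numerically. The sketch below (not part of the original derivation) evaluates the remaining one-dimensional integral in (49) by quadrature and compares the result with a Monte Carlo estimate of \(\Pr \left( \min \left( {\gamma _{2,{D_1}}},{\gamma _{2,{D_2}}} \right) < x \right)\), drawing \(|h_1|^2\) and \(|h_2|^2\) as exponential variates with means \(\Omega _1\) and \(\Omega _2\). All parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)

# Illustrative parameter values (assumptions, not taken from the paper)
a1, a2 = 0.2, 0.8          # power-allocation coefficients alpha_1, alpha_2
rho = 10.0                 # transmit SNR
psi_I, psi_E = 0.7, 0.3    # information / energy power-splitting factors psi_I, psi_E
Om1, Om2 = 1.0, 1.0        # mean channel gains Omega_1, Omega_2

def F_X_analytic(x):
    """CDF of X from (49); the unit step U(a2/a1 - x) restricts x to (0, a2/a1)."""
    if x <= 0.0:
        return 0.0
    if x >= a2 / a1:
        return 1.0
    y0 = x / (psi_I * rho * (a2 - a1 * x))
    term1 = 1.0 - np.exp(-y0 / Om1)
    integrand = lambda y: (1.0 / Om1) * (1.0 - np.exp(-x / (y * rho * psi_E * Om2))) * np.exp(-y / Om1)
    term2, _ = quad(integrand, y0, np.inf)
    return term1 + term2

def F_X_monte_carlo(x, n=200_000):
    """Empirical CDF of X = min(gamma_{2,D1}, gamma_{2,D2})."""
    h1 = rng.exponential(Om1, n)   # |h_1|^2 ~ exponential with mean Omega_1
    h2 = rng.exponential(Om2, n)   # |h_2|^2 ~ exponential with mean Omega_2
    g1 = psi_I * h1 * a2 * rho / (psi_I * h1 * a1 * rho + 1.0)
    g2 = h1 * h2 * psi_E * rho
    return np.mean(np.minimum(g1, g2) < x)

for x in (0.5, 1.0, 2.0, 3.5):
    print(f"x = {x}: (49) = {F_X_analytic(x):.4f}, Monte Carlo = {F_X_monte_carlo(x):.4f}")
```

If (49) is correct, the two values should agree up to Monte Carlo error at every tested point.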
Appendix 3: Proof of Remark 1
The proof starts by writing the ergodic rate of \(D_{2}\) in the high-SNR regime as follows
$$\begin{aligned} R_{{D_2},nodir}^\infty&= E\left[ {\frac{1}{2}\log \left( {1 + \min \left( {{\gamma _{2,{D_1}}},{\gamma _{2,{D_2}}}} \right) } \right) } \right] \nonumber \\&=\frac{1}{{2\ln 2}}\int \limits _0^\infty {\frac{{1 - {F_X}\left( x \right) }}{{1 + x}}dx},\nonumber \\ {I_5}&= \underbrace{\min \left( {\frac{{{\alpha _2}}}{{{\alpha _1}}},{{\left| {{h_2}} \right| }^2}{{\left| {{h_1}} \right| }^2}{\psi _E}\rho } \right) }_X \end{aligned}$$
The CDF of X is computed as follows
$$\begin{aligned} {F_X}\left( x \right)&= {} \Pr \left( {\frac{{{\alpha _2}}}{{{\alpha _1}}}< {{\left| {{h_2}} \right| }^2}{{\left| {{h_1}} \right| }^2}{\psi _E}\rho ,\frac{{{\alpha _2}}}{{{\alpha _1}}}< x} \right) \nonumber \\&+ \Pr \left( {\frac{{{\alpha _2}}}{{{\alpha _1}}}> {{\left| {{h_2}} \right| }^2}{{\left| {{h_1}} \right| }^2}{\psi _E}\rho ,{{\left| {{h_2}} \right| }^2}{{\left| {{h_1}} \right| }^2}{\psi _E}\rho< x,\frac{{{\alpha _2}}}{{{\alpha _1}}}> x} \right) \nonumber \\&= {} \Pr \left( {\frac{{{\alpha _2}}}{{{\alpha _1}}}> {{\left| {{h_2}} \right| }^2}{{\left| {{h_1}} \right| }^2}{\psi _E}\rho ,{{\left| {{h_2}} \right| }^2}{{\left| {{h_1}} \right| }^2}{\psi _E}\rho< x,\frac{{{\alpha _2}}}{{{\alpha _1}}}> x} \right) \nonumber \\&= {} U\left( {\frac{{{\alpha _2}}}{{{\alpha _1}}} - x} \right) \Pr \left( {{{\left| {{h_1}} \right| }^2}< \frac{x}{{{\psi _E}\rho {{\left| {{h_2}} \right| }^2}}},{{\left| {{h_1}} \right| }^2}< \frac{{{\alpha _2}}}{{{\alpha _1}{\psi _E}\rho {{\left| {{h_2}} \right| }^2}}}} \right) \nonumber \\&= {} U\left( {\frac{{{\alpha _2}}}{{{\alpha _1}}} - x} \right) \Pr \left( {\frac{{{\alpha _2}}}{{{\alpha _1}}}> x,{{\left| {{h_1}} \right| }^2}< \frac{x}{{{\psi _E}\rho {{\left| {{h_2}} \right| }^2}}}} \right) \nonumber \\&+ \,U\left( {\frac{{{\alpha _2}}}{{{\alpha _1}}} - x} \right) \Pr \left( {\frac{{{\alpha _2}}}{{{\alpha _1}}}< x,{{\left| {{h_1}} \right| }^2}< \frac{{{\alpha _2}}}{{{\alpha _1}{\psi _E}\rho {{\left| {{h_2}} \right| }^2}}}} \right) \nonumber \\&= {} U\left( {\frac{{{\alpha _2}}}{{{\alpha _1}}} - x} \right) \Pr \left( {\frac{{{\alpha _2}}}{{{\alpha _1}}} > x,{{\left| {{h_1}} \right| }^2} < \frac{x}{{{\psi _E}\rho {{\left| {{h_2}} \right| }^2}}}} \right) \nonumber \\&= {} U\left( {\frac{{{\alpha _2}}}{{{\alpha _1}}} - x} \right) \int _0^\infty {\int _0^{\frac{x}{{{\psi _E}\rho y}}} {{f_{{{\left| {{h_2}} \right| }^2}}}\left( y \right) {f_{{{\left| {{h_1}} \right| }^2}}}\left( z \right) dz\,dy} }\nonumber \\&= {} U\left( {\frac{{{\alpha _2}}}{{{\alpha _1}}} - x} \right) \int _0^\infty {\frac{1}{{{\Omega _2}}}\left( {1 - {e^{ - \frac{x}{{{\psi _E}\rho \,{\Omega _1}y}}}}} \right) {e^{ - \frac{y}{{{\Omega _2}}}}}dy}\nonumber \\&= {} U\left( {\frac{{{\alpha _2}}}{{{\alpha _1}}} - x} \right) \left( {1 - 2\sqrt{\frac{x}{{{\psi _E}\rho \,{\Omega _1}{\Omega _2}}}} \,{K_1}\left( {2\sqrt{\frac{x}{{{\psi _E}\rho \,{\Omega _1}{\Omega _2}}}} } \right) } \right) , \end{aligned}$$
Substituting (51) into (50), we can obtain \(R_{{D_2},nodir}^{\infty }\).
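The last equality in (51) uses the standard identity \(\int _0^\infty {e^{ - a/y - y/b}}\,dy = 2\sqrt{ab} \,{K_1}\left( {2\sqrt{a/b} } \right)\). The sketch below (not part of the original derivation) checks this closed form against direct quadrature and then evaluates the high-SNR rate in (50) numerically; since \(X \le \alpha _2 / \alpha _1\), the factor \(1 - F_X(x)\) vanishes for \(x \ge \alpha _2 / \alpha _1\), so the rate integral reduces to a finite range. All parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv  # modified Bessel function of the second kind K_nu

# Illustrative parameter values (assumptions, not taken from the paper)
a1, a2 = 0.2, 0.8
rho, psi_E = 10.0, 0.3
Om1, Om2 = 1.0, 1.0

def cdf_closed_form(x):
    """Pr(|h1|^2 |h2|^2 psi_E rho < x) via the Bessel-function expression in (51)."""
    arg = 2.0 * np.sqrt(x / (psi_E * rho * Om1 * Om2))
    return 1.0 - arg * kv(1, arg)

def cdf_by_quadrature(x):
    """The same probability via the one-dimensional integral preceding the last step of (51)."""
    integrand = lambda y: (1.0 / Om2) * (1.0 - np.exp(-x / (psi_E * rho * Om1 * y))) * np.exp(-y / Om2)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

print(cdf_closed_form(1.0), cdf_by_quadrature(1.0))   # the two values should agree

# High-SNR ergodic rate (50): since X <= a2/a1, 1 - F_X(x) vanishes for x >= a2/a1,
# so the integral over (0, infinity) reduces to (0, a2/a1).
rate_integrand = lambda x: (1.0 - cdf_closed_form(x)) / (1.0 + x)
R_inf, _ = quad(rate_integrand, 0.0, a2 / a1)
R_inf /= 2.0 * np.log(2.0)
print("high-SNR ergodic rate of D2 without direct link:", R_inf)
```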
In this appendix, the proof starts by writing the ergodic rate at \({D_2}\) for the direct link as follows
$$\begin{aligned} {R_{{D_2},dir}}&= E\left[ {\frac{1}{2}\log \left( {1 + \min \left( {{\gamma _{2,{D_1}}},\gamma _{{D_2}}^{MRC}} \right) } \right) } \right] \nonumber \\&=\frac{1}{{2\ln 2}}\int \limits _0^\infty {\frac{{1 - {F_X}\left( x \right) }}{{1 + x}}dx},\nonumber \\ {I_6}\!&=\!\underbrace{\min \left( {\frac{{{\psi _I}{{\left| {{h_1}}\right| }^2}{\alpha _2}\rho }}{{{\psi _I}{{\left| {{h_1}}\right| }^2}{\alpha _1}\rho \!+\!1}},{{\left| {{h_2}}\right| }^2}{{\left| {{h_1}} \right| }^2}{\psi _E}\rho \!+\!\frac{{{{\left| {{h_0}}\right| }^2}{\alpha _2}\rho }}{{{{\left| {{h_0}}\right| }^2}{\alpha _1}\rho \!+\!1}}}\right) }_X \end{aligned}$$
The CDF of X is calculated as follows (see (53)).
$$\begin{aligned} {F_X}\left( x \right) = 1 - \underbrace{\Pr \left( {\frac{{{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _2}\rho }}{{{\psi _I}{{\left| {{h_1}} \right| }^2}{\alpha _1}\rho + 1}}> x,{{\left| {{h_2}} \right| }^2}{{\left| {{h_1}} \right| }^2}{\psi _E}\rho + \frac{{{{\left| {{h_0}} \right| }^2}{\alpha _2}\rho }}{{{{\left| {{h_0}} \right| }^2}{\alpha _1}\rho + 1}} > x} \right) }_{{I_{61}}} \end{aligned}$$
\({I_{61}}\) is computed as in (54).
$$\begin{aligned} {I_{61}}&= {} U\left( {\frac{{{\alpha _2}}}{{{\alpha _1}}} - x} \right) \times \Pr \left( {{{\left| {{h_1}} \right| }^2}> \frac{x}{{\left( {{\alpha _2} - {\alpha _1}x} \right) {\psi _I}\rho }},{{\left| {{h_2}} \right| }^2} > \frac{x}{{{{\left| {{h_1}} \right| }^2}{\psi _E}\rho }} - \frac{{{{\left| {{h_0}} \right| }^2}{\alpha _2}}}{{\left( {{{\left| {{h_0}} \right| }^2}{\alpha _1}\rho + 1} \right) {{\left| {{h_1}} \right| }^2}{\psi _E}}}} \right) \nonumber \\&= {} U\left( {\frac{{{\alpha _2}}}{{{\alpha _1}}} - x} \right) \times \int _0^\infty {\int _{\frac{x}{{{\psi _I}\rho \left( {{\alpha _2} - {\alpha _1}x} \right) }}}^\infty {\int _{\frac{x}{{z{\psi _E}\rho }} - \frac{{y{\alpha _2}}}{{\left( {y{\alpha _1}\rho + 1} \right) z{\psi _E}}}}^\infty {{f_{{{\left| {{h_0}} \right| }^2}}}\left( y \right) {f_{{{\left| {{h_1}} \right| }^2}}}\left( z \right) {f_{{{\left| {{h_2}} \right| }^2}}}\left( u \right) du\,dz\,dy} } } \nonumber \\&= {} U\left( {\frac{{{\alpha _2}}}{{{\alpha _1}}} - x} \right) \int _0^\infty {\int _{\frac{x}{{{\psi _I}\rho \left( {{\alpha _2} - {\alpha _1}x} \right) }}}^\infty {\frac{1}{{{\Omega _1}{\Omega _0}}}{e^{ - \frac{x}{{z{\psi _E}\rho \,{\Omega _2}}} - \frac{{y{\alpha _2}}}{{\left( {y{\alpha _1}\rho + 1} \right) z{\psi _E}{\Omega _2}}} - \frac{y}{{{\Omega _0}}} - \frac{z}{{{\Omega _1}}}}}dz} } dy \end{aligned}$$
Substituting (54) into (53), we obtain (55).
$$\begin{aligned} {F_X}\left( x \right)&= {} 1 - U\left( {\frac{{{\alpha _2}}}{{{\alpha _1}}} - x} \right) \nonumber \\&\times \left( {\int _0^\infty {\int _{\frac{x}{{{\psi _I}\rho \left( {{\alpha _2} - {\alpha _1}x} \right) }}}^\infty {\frac{1}{{{\Omega _1}{\Omega _0}}}{e^{ - \frac{x}{{z{\psi _E}\rho \,{\Omega _2}}} - \frac{{y{\alpha _2}}}{{\left( {y{\alpha _1}\rho + 1} \right) z{\psi _E}{\Omega _2}}} - \frac{y}{{{\Omega _0}}} - \frac{z}{{{\Omega _1}}}}}dz} } dy} \right) \end{aligned}$$
Substituting (55) into (52), we obtain \(R_{{D_2},dir}\).
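The double integral in (55) is left in integral form, but it can be evaluated numerically together with the ergodic rate in (52). The following sketch (not part of the original derivation, with illustrative parameter values) does this by nested quadrature; as before, \(1 - F_X(x)\) vanishes for \(x \ge \alpha _2 / \alpha _1\), so the rate integral runs over a finite range.

```python
import numpy as np
from scipy.integrate import quad, dblquad

# Illustrative parameter values (assumptions, not taken from the paper)
a1, a2 = 0.2, 0.8
rho = 10.0
psi_I, psi_E = 0.7, 0.3
Om0, Om1, Om2 = 0.5, 1.0, 1.0   # mean gains of |h_0|^2, |h_1|^2, |h_2|^2

def one_minus_F(x):
    """1 - F_X(x) from (55); the step function U(a2/a1 - x) equals 1 on (0, a2/a1)."""
    if x <= 0.0 or x >= a2 / a1:
        return 0.0
    z0 = x / (psi_I * rho * (a2 - a1 * x))
    def integrand(z, y):   # inner variable z = |h_1|^2, outer variable y = |h_0|^2
        expo = (-x / (z * psi_E * rho * Om2)
                - y * a2 / ((y * a1 * rho + 1.0) * z * psi_E * Om2)
                - y / Om0 - z / Om1)
        return np.exp(expo) / (Om0 * Om1)
    val, _ = dblquad(integrand, 0.0, np.inf, lambda y: z0, lambda y: np.inf)
    return val

# Ergodic rate (52): 1 - F_X(x) vanishes for x >= a2/a1, so integrate over (0, a2/a1).
R_dir, _ = quad(lambda x: one_minus_F(x) / (1.0 + x), 0.0, a2 / a1, limit=100)
R_dir /= 2.0 * np.log(2.0)
print("ergodic rate of D2 with direct link:", R_dir)
```

The nested quadrature is slow but keeps the sketch faithful to the form of (55); a Monte Carlo estimate over \(|h_0|^2\), \(|h_1|^2\), \(|h_2|^2\) could serve as an independent cross-check.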
Tran, H.Q., Phan, C.V. & Vien, QT. Performance analysis of power-splitting relaying protocol in SWIPT based cooperative NOMA systems. J Wireless Com Network 2021, 110 (2021). https://doi.org/10.1186/s13638-021-01981-9
Accepted: 06 April 2021
Keywords: Non-orthogonal multiple access (NOMA), Energy harvesting (EH), Information processing (IP), Radio-frequency (RF), Power-splitting relaying (PSR), Decode-and-forward (DF)