text
stringlengths 100
500k
| subset
stringclasses 4
values |
---|---|
OSA Publishing > Optics Express > Volume 28 > Issue 16 > Page 23266
James Leger, Editor-in-Chief
Feature Issues
64Gb/s PAM4 and 160Gb/s 16QAM modulation reception using a low-voltage Si-Ge waveguide-integrated APD
Jin Zhang, Bill Ping-Piu Kuo, and Stojan Radic
Jin Zhang,* Bill Ping-Piu Kuo, and Stojan Radic
University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
*Corresponding author: [email protected]
Jin Zhang https://orcid.org/0000-0002-4901-4636
J Zhang
B Kuo
S Radic
pp. 23266-23273
•https://doi.org/10.1364/OE.396979
Jin Zhang, Bill Ping-Piu Kuo, and Stojan Radic, "64Gb/s PAM4 and 160Gb/s 16QAM modulation reception using a low-voltage Si-Ge waveguide-integrated APD," Opt. Express 28, 23266-23273 (2020)
64 Gb/s low-voltage waveguide SiGe avalanche photodiodes with distributed Bragg reflectors (PRJ)
25 Gbps low-voltage waveguide Si–Ge avalanche photodiode (OPTICA)
High performance waveguide-coupled Ge-on-Si linear mode avalanche photodiodes (OE)
Phase recovery
Power spectral density
Quadrature amplitude modulation
Variable optical attenuators
Original Manuscript: May 6, 2020
Revised Manuscript: July 10, 2020
Manuscript Accepted: July 15, 2020
Design and characterization
PAM-4 channel reception
16-QAM coherent channel reception
We demonstrate waveguide-integrated silicon-germanium avalanche photodiodes with a maximum responsivity of 15.2 A/W at 16x avalanche gain, and 33 GHz bandwidth. Intensity-modulation-direct-detection (IMDD) and coherent channel reception test demonstrated the APD's performance with higher-order formats, allowing 32 Gbaud PAM-4 and 40 Gbaud 16QAM channel reception without any digital signal processing conventionally used for receiver impairments mitigation.
© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
The sustained growth in intra- and inter-data center traffic has led to widespread deployment of high spectral efficiency modulation formats, such as multi-level pulse-amplitude modulation (PAM) and quadrature amplitude modulation (QAM). This development is motivated, among other requirements, by the need to scale throughputs while circumventing conventional power-consumption penalty associated with baudrate scaling, imposed by CMOS dynamic power [1]. The use of multi-level formats, however, requires additional link power budget margin, provided by the combination of increased laser power on the transmitter side, reduced loss budget, or enhanced receiver sensitivity by elevating gain of the transimpedance amplifiers (TIA). Such link design approach consequently dictates higher cost and power consumption per data unit throughput, and lower link reach than binary channels. Such compromises impose barriers in system designs that are bound by strict power consumption envelope while demanding highest possible throughput: typical examples are represented by in-package interconnect designs for next generation switches and network interfaces.
Avalanche photodiodes (APDs) offer an alternative approach for advanced modulation format link designs in short-reach scenarios where optical power is limited. With internal gain generated by photocarriers-initiated impact ionization, APDs provide responsivity beyond quantum-limited external efficiency and enhance the sensitivity of thermal-noise limited receivers [2,3], thereby allowing for lower optical power into the photonic circuit, and reduced or even suppressed transimpedance gain requirement. Furthermore, advances made in low-defect Ge epitaxial growth on Si techniques allow integration of low dark-current APDs directly coupled with silicon photonic circuits, thus enabling low-cost manufacturing of APD-enabled systems using commercial silicon foundry processes.
A majority of demonstrations on waveguide-integrated Si-Ge APDs are based on separate-absorption-charge-multiplication (SACM) structure to reduce excess noise by confining carrier multiplication in Si [3–10]. The SACM approach, however, requires epitaxial silicon growth for field control and multiplication layer formation [3–9], which is incompatible with standard photonics processes optimized for dual-polarization operation [11]. Another demonstration type [10], while requires no epitaxial silicon growth, shows limited bandwidth (10 GHz) due to reduced drift velocity resulted from the weak electric field in the Ge absorption layer and silicon charge layer. On the other hand, while Ge APDs are generally understood to suffer from high multiplication noise in germanium, these devices can achieve high gain using carrier multiplication close to avalanche breakdown [12], and noise could be effectively suppressed by manipulating the electric field and multiplication distribution [13–15]. Demonstrations with lateral PIN [14] and vertical PIN [15] structures all show improved sensitivity performances, however, at limited data rates of 10 Gbps.
Recognizing this limitation, we investigate another lateral PIN-based Ge-on-Si APDs fabricated in a standard foundry process capable of delivering high bandwidth and gain for multi-level PAM and QAM channel reception. The fabricated APD exhibits high primary responsivity of 0.95 A/W at 1550 nm, a 3-dB bandwidth of 33 GHz, and a multiplication gain of 16 at −20 dBm input power before reaching junction breakdown at 12.5V reverse bias. The APD is capable of receiving 64 Gbps (32 Gbaud) PAM−4 channel and 40 Gbaud 16QAM channel without receiver equalization. To the best of our knowledge, this is the first demonstration of Si-Ge APD for coherent detection.
2. Design and characterization
The Si-Ge waveguide APDs were fabricated at commercial foundry using 0.18 um silicon photonics process without Si epitaxial growth. The schematic of the designed APD with lateral PIN junction structure is shown in Fig. 1(a). A pair of shallow implants (p/n) with 1018 cm−3 peak concentration were used to form a p-i-n junction in silicon with 500 nm intrinsic width. The silicon p-i-n junction was optimized to generate strong electric field in Ge (shown in Fig. 1(d)) for initiating impact ionization, while simultaneously minimizing free carrier optical absorption by highly doped contact implants (p++ / n++). A 500-nm thick Ge optical absorption layer was subsequently deposited by selective epitaxial growth on the doped silicon slab.
Fig. 1. (a) Schematic view of the designed APD; (b) microscope image of the fabricated APD device; (c) simulated optical absorption distribution; (d) simulated electric field distribution, and (e) simulated impact generation distribution.
Numerical model showed that the electric field (Fig. 1(d)) was concentrated to a 200-nm thick layer within the Ge-Si junction. Despite the observation that the peak electric field was located in the Si layer, cross-sectional integration of impact ionization generation rate showed that the carrier multiplication primarily occurred in Ge (96.5%), as Ge possesses an order of magnitude higher ionization coefficient within the operating field strength at 0.2–0.4 MV/cm [16], [17]. Although the near-unity ratio between hole and electron ionization coefficient of Ge would have resulted in high excess noise [17], the localized electric field near Si-Ge boundary had largely confined impact ionization to within 200 nm of the implanted Si regions, which could be attributed to the reduced excess noise factor seen in the experimental characterization due to dead-space effect [18], [19] . The APD (microscope image in Fig. 1(b)) was coupled to silicon waveguide via adiabatic taper, allowing efficient evanescent coupling of the input light to the 1 µm×50 µm Ge crystal. Subsequent dual-layer aluminum metallization provides electrical connection to exposed pads for characterization.
The dark / illuminated I-V characteristics at the room temperature (22°C), as well as the corresponding gain of the APD, are shown in Fig. 2(a). A continuous-wave laser at 1550 nm wavelength was coupled into the APD device via on-chip vertical grating couplers. The fiber-to-chip coupling loss, at 7 dB, was de-embedded from all optical power measurements. The in-waveguide power was −20 dBm. The dark current was 24 nA at < 4 V reverse bias, and increased to 100 µA at 12.5 V, beyond which the diode underwent junction breakdown. The primary responsivity at unity gain (bias < 2 V) was 0.95 A/W and a maximum gain of 16 was achieved before breakdown.
Fig. 2. (a) Dark and photo current versus reverse bias voltage characteristics at 1550 nm and −20 dBm input (left) and extracted multiplication gain (right); (b) frequency responses at various bias conditions; (c) 3-dB bandwidth versus gain.
The frequency response at various bias voltages was measured with a calibrated electro-optic modulator and vector network analyzer (VNA), as shown in Fig. 2(b). The 3-dB opto-electrical (O/E) bandwidth at various multiplication gains is summarized in Fig. 2(c). The bandwidth increased with increasing bias voltage until reaching a maximum bandwidth of 33 GHz at 9.5 V bias. Further increase in reverse bias reduced bandwidth to 22 GHz at 12.5 V, due to increased ionization build-up time [4].
The noise characteristics of the APD were extracted from dark/light shot-noise measurements with −20 dBm in-waveguide optical power. The noise power spectral density (PSD) of the output current was measured using a low-noise amplifier and electrical signal analyzer. The measured noise PSD was compared against Eq. (1), which accounts for shot noise due to primary photocarrier generation and subsequent avalanche multiplication, as well as system thermal noise and laser relative intensity noise (RIN) [20]:
(1)$${N_t} = 2q({I_D} + {I_L}){M^2}F{R_L}\Delta f + {N_{thermal}} + {N_{RIN}}$$
In Eq. (1), q denotes the electron charge, ID and IL are the dark and photo current at unit gain, M is the corresponding gain, F is the excess noise factor, RL is the system impedance, and Δf is the bandwidth. Nthermal and NRIN are the thermal noise and laser RIN. To demonstrate that the design was able to leverage dead-space effect to reduce excess noise generation in Ge, the extracted excess noise factor F versus gain M was compared against theoretical values of McIntyre's theory in terms of the ratio of the hole-electron ionization coefficients keff: [21]
(2)$$F = M{k_{eff}} + (2 - {\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 M}} \right.}\!\lower0.7ex\hbox{$M$}})(1 - {k_{eff}})$$
The measured excess noise factor F (Fig. 3) showed that the effective ionization ratio keff was bounded within 0.15–0.25, far below the ionization ratio of bulk Ge (≈0.9) [17]. The excess noise reduction is attributed to the localization of the ionization (Fig. 1(e)).
Fig. 3. Measured excess noise factor against gain with an input optical power of −20dBm.
The linearity of the photoconductive response of the APD was evaluated by characterizing the total harmonic distortion (THD). A Mach-Zehnder modulator driven by a single-frequency signal was used to generate a stimulus at 5 GHz. The THD of the stimulus, measured using a reference photodiode (Discovery Semiconductor DSC-10H), was less than 0.7% at an input power up to −2 dBm. The THD of the APD, as shown in Fig. 4, shows that the device provided sufficiently linear response (< 2%) for PAM-4 and 16-QAM modulations, even at high gain and input power regimes.
Fig. 4. Measured total harmonic distortion versus coupled input power under various bias voltages.
3. PAM-4 channel reception
The performance of the APD for TIA-less PAM-4 channel reception was characterized using the setup shown in Fig. 5. The PAM-4 channel was generated by a 64 Gsamples/s digital-to-analog converter (DAC) with 16-GHz 3-dB bandwidth and was electrically amplified to drive a LiNbO3 Mach-Zehnder modulator (MZM) with a 3-dB bandwidth of 30 GHz. A 1549.3 nm external cavity laser with RIN < −140 dBc/Hz was used for optical carrier supply. The output of the MZM was amplified by an erbium-doped fiber amplifier (EDFA) and attenuated by a variable optical attenuator (VOA) for controlling incident power into the APD. Frequency response of the transmitter (DAC, modulator driver and modulator) as well as nonlinear distortions intrinsic to MZM and driver were measured using a reference 40-GHz photodetector (Discovery Semiconductor DSC-10H), and subsequently pre-compensated for in the digitized samples uploaded to the DAC. The photocurrent output of the APD was received by an equivalent-time oscilloscope with 50 GHz bandwidth via a GSG probe and a bias-tee, through which the desired reverse bias was applied. On-chip 50Ω shunt resistors across signal and ground pads of each photodiode were used to reduce RF mismatch loss in the RF probe and coaxial cables, thereby improving frequency response. However, the on-chip shunt termination inevitable decreased the amount of photocurrent reaching the oscilloscope, as half of the photocurrent was diverged to ground.
Fig. 5. Setup for PAM-4 channel reception (PC: polarization controller; OSC: oscilloscope).
The PAM4 eye diagrams were captured at −3 dBm incident power to the APD under test, which corresponded to −9.2 dBm inner optical modulation amplitude (OMA). Figure 6 plots the Q-factor and BER results of the received eye against reverse bias from −3 V to −12 V, which corresponds to avalanche gain (M) of 1 to 2.6 at the −3 dBm input power level. The multiplication gain provided a maximum 1.7 dB increase in Q-factor over the thermal-noise limited value at M = 1. Absence of eye level distortion and inter-symbol interference (ISI) in the recorded eye diagram suggested that the APD response remained linear over the bandwidth of the channel (∼ 26 GHz).
Fig. 6. 64Gbs PAM4 signal detection: eye diagrams at reverse bias voltages of 3 V (a), 11 V (b), and 12 V (c) separately; (d) Q factor and BER results versus bias voltage.
4. 16-QAM coherent channel reception
Coherent transmission further enhances spectral efficiency by allowing higher cardinality modulations than intensity-modulation-direct-detection (IMDD) systems at the same SNR, thereby providing scalable path towards Tb/s/wavelength capacity. Leveraging internal gain provided by APDs, coherent receiver with APDs can improve the sensitivity with a lower optical power or electrical gain than traditional coherent receivers [22], thereby reducing power consumption of such systems.
The performance of the proposed APD in coherent channel detection was characterized using APDs integrated with silicon optical hybrid based on 4 × 4 multimode interferometric (MMI) coupler, as shown in Fig. 7. The test channel was created from an external cavity laser at 1549.3 nm with 5 kHz linewidth. A 16-QAM modulation at variable baud-rate was imprinted onto the carrier using a nested-MZM with > 30 GHz bandwidth, driven by quad-channel DAC with 64 Gsamples/s sampling rate. The data pattern was shaped by a raised-cosine filter with a roll-off factor of 0.1. Pre-compensation of the data pattern was further applied in order to remove the frequency response and nonlinear distortion of the transmitter (nested-MZM + modulator driver + DAC) and that of the real-time oscilloscope (RTO) serving as the digitizer. The frequency response and nonlinear distortion of the transmitter and RTO was characterized using a 40-GHz wide reference coherent receiver constructed out of discrete InP photodetectors (Finisar BPDV2120R) and optical hybrid. To allow control of channel optical signal-to-noise ratio (OSNR), a filtered amplified spontaneous emission (ASE) noise source with variable output power was coupled with the 16-QAM channel before entering the receiver, and the OSNR was monitored prior to the device under test. The channel was further filtered to 0.6 nm bandwidth and amplified before being coupled to the signal port of the APD-integrated coherent receiver. The LO was derived from the same laser source powering the test channel via a 50/50 coupler, and subsequently amplified to compensate for fiber-to-chip coupling loss. The photocurrents from the four APDs of the integrated coherent receiver were coupled to the input ports of the real-time oscilloscope (Tektronix DPO72004A), which captured the photocurrent (via the internal 50 Ω load) at 50 Gsamples/s. Channel deskew and carrier phase recovery were performed offline to recover the transmitted data. Subsequent characterizations were performed at an averaged signal / LO power of −9 dBm / −7 dBm respectively into each APD, of which the fiber-to-chip coupling loss of 7 dB and hybrid excess loss of 1 dB were taken into account.
Fig. 7. Setup for coherent detection. Inset shows the microscope image of the APD coherent receiver.
Figure 8 depicts the sensitivity of the APD-integrated coherent receiver in terms of Q2 versus OSNR at 0.1 nm for a 40-Gbaud 16-QAM channel when biased at −3 V and −11 V (i.e., M = 1 and 2.2 respectively). The internal gain of the APDs provided a 2-dB improvement in Q-factor over the thermal-noise limited receiver at M = 1, and allowed reaching BER = 4.5×10−3 (Q2 = 8.8 dB) at 14.5 dB OSNR which corresponds to error-free reception with the staircase hard decision forward error-correction coding (HDFEC) [23]. Near-theoretic sensitivity and distortion-free constellation further suggest that no discernible nonlinear distortion (compression and bandwidth modulation due to output current) was present in this coherent receiver.
Fig. 8. The Q-factor results for 40 Gbaud 16QAM detection with various OSNR values under reverse bias of −3 V and −11 V. Insets show selected constellation diagrams.
We presented a waveguide-integrated Si-Ge APD fabricated in standard silicon photonics foundry process. A high primary responsivity of 0.95 A/W, a gain of 16, and a bandwidth of 26–33 GHz allowed signal reception of 32 Gbaud PAM-4 channel and 40 Gbaud 16QAM channel. The reported p-i-n design enables massive, low-cost integration of avalanche devices in standard silicon photonics platform without extra Si epitaxy steps.
Defense Advanced Research Projects Agency (DARPA).
1. D. Thomson, A. Zilkie, J. E. Bowers, T. Komljenovic, G. T. Reed, L. Vivien, D. Marris-Morini, E. Cassan, L. Virot, J.-M. Fédéli, J.-M. Hartmann, J. H. Schmid, D.-X. Xu, F. Boeuf, P. O'Brien, G. Z. Mashanovich, and M. Z. Nedeljkovic, "Roadmap on silicon photonics," J. Opt. 18(7), 073003 (2016). [CrossRef]
2. Payman Zarkesh-Ha, Robert Efroymson, Earl Fuller, Joe C. Campbell, and Majeed M. Hayat, "5.2 dB Sensitivity Enhancement in 25Gbps APD-Based Optical Receiver Using Dynamic Biasing," Optical Fiber Communication Conference. Optical Society of America, 2020.
3. Z. Huang, C. Li, D. Liang, K. Yu, C. Santori, M. Fiorentino, W. Sorin, S. Palermo, and R. G. Beausoleil, "25 Gbps low-voltage waveguide Si–Ge avalanche photodiode," Optica 3(8), 793–798 (2016). [CrossRef]
4. Y. Kang, H. D. Liu, M. Morse, M. J. Paniccia, M. Zadka, S. Litski, G. Sarid, A. Pauchard, Y. H. Kuo, H. W. Chen, W. S. Zaoui, J. E. Bowers, A. Beling, D. C. McIntosh, X. Zheng, and J. C. Campbell, "Monolithic germanium/silicon avalanche photodiodes with 340 GHz gain–bandwidth product," Nat. Photonics 3(1), 59–63 (2009). [CrossRef]
5. M. Huang, Su. Li, C. Pengfei, H. Guanghui, S. Tzung-I, W. Chen, C.-y. Hong, and D. Pan, "Germanium on silicon avalanche photodiode," IEEE J. Sel. Top. Quantum Electron. 24(2), 1–11 (2018). [CrossRef]
6. Sungbong Park, Yann Malinge, Olufemi Dosunmu, Gregory Lovell, Seth Slavin, Kelly Magruder, Yimin Kang, and Ansheng Liu, "50-Gbps receiver subsystem using Ge/Si avalanche photodiode and integrated bypass capacitor," 2019 Optical Fiber Communications Conference and Exhibition (OFC). IEEE, 2019.
7. Binhao Wang, Zhihong Huang, Xiaoge Zeng, Di Liang, Marco Fiorentino, Wayne V. Sorin, and Raymond G. Beausoleil, "50 Gb/s PAM4 low-voltage Si-Ge avalanche photodiode," CLEO: Science and Innovations. Optical Society of America, 2019.
8. Zhihong Huang, Binhao Wang, Yuan Yuan, Di Liang, Marco Fiorentino, and Raymond G. Beausoleil, "64Gbps PAM4 Modulation for a Low Energy Si-Ge Waveguide APD with Distributed Bragg Reflectors," Optical Fiber Communication Conference. Optical Society of America, 2020.
9. Yuan Yuan, Zhihong Huang, Binhao Wang, Wayne Sorin, Di Liang, Joe C. Campbell, and Raymond G. Beausoleil, "Superior Temperature Performance of Si-Ge Waveguide Avalanche Photodiodes at 64Gbps PAM4 Operation," 2020 Optical Fiber Communications Conference and Exhibition (OFC). IEEE, 2020.
10. N. J. D. Martinez, C. T. Derose, R. W. Brock, A. L. Starbuck, A. T. Pomerene, A. L. Lentine, D. C. Trotter, and P. S. Davids, "High performance waveguide-coupled Ge-on-Si linear mode avalanche photodiodes," Opt. Express 24(17), 19072–19081 (2016). [CrossRef]
11. A. E.-J. Lim, J. Song, Q. Fang, C. Li, X. Tu, N. Duan, K. K. Chen, R. P.-C. T. Tern, and T. Y. Liow, "Review of silicon photonics foundry efforts," IEEE J. Sel. Top. Quantum Electron. 20(4), 405–416 (2014). [CrossRef]
12. H. Melchior and W. T. Lynch, "Signal and noise response of high speed germanium avalanche photodiodes," IEEE Trans. Electron Devices ED-13(12), 829–838 (1966). [CrossRef]
13. S. Assefa, F. Xia, and Y. A. Vlasov, "Reinventing germanium avalanche photodetector for nanophotonic on-chip optical interconnects," Nature 464(7285), 80–84 (2010). [CrossRef]
14. L. Virot, P. Crozat, J.-M.. Fédéli, J.-M. Hartmann, D. Marris-Morini, E. Cassan, F. Boeuf, and L. Vivien, "Germanium avalanche receiver for low power interconnects," Nat. Commun. 5(1), 4957 (2014). [CrossRef]
15. H. T. Chen, J. Verbist, P. Verheyen, P. D. Heyn, G. Lepage, J. D. Coster, P. Absil, X. Yin, J. Bauwelinck, J. Van Campenhout, and G. Roelkens, "High sensitivity 10Gb/s Si photonic receiver based on a low-voltage waveguide-coupled Ge avalanche photodetector," Opt. Express 23(2), 815–822 (2015). [CrossRef]
16. R. Van Overstraeten and H. De Man, "Measurement of the ionization rates in diffused silicon pn junctions," Solid-State Electron. 13(5), 583–608 (1970). [CrossRef]
17. D. R. Decker and C. N. Dunn, "Determination of germanium ionization coefficients from small-signal IMPATT diode characteristics," IEEE Trans. Electron Devices 17(4), 290–299 (1970). [CrossRef]
18. M. M. Hayat, W. L. Sargeant, and B. E. Saleh, "Effect of dead space on gain and noise in Si and GaAs avalanche photodiodes," IEEE J. Quantum Electron. 28(5), 1360–1365 (1992). [CrossRef]
19. M. M. Hayat, O.-H. Kwon, S. Wang, J. C. Campbell, B. E. A. Saleh, and M. C. Teich, "Boundary effects on multiplication noise in thin heterostructure avalanche photodiodes: theory and experiment [Al/sub 0.6/Ga/sub 0.4/As/GaAs]," IEEE Trans. Electron Devices 49(12), 2114–2123 (2002). [CrossRef]
20. M. Teich, K. Matsuo, and B. Saleh, "Excess noise factors for conventional and superlattice avalanche photodiodes and photomultiplier tubes," IEEE J. Quantum Electron. 22(8), 1184–1193 (1986). [CrossRef]
21. R. J. McIntyre, "Multiplication noise in uniform avalanche diodes," IEEE Trans. Electron Devices ED-13(1), 164–168 (1966). [CrossRef]
22. K. Wen, Y. Zhao, J. Gao, S. Zhang, and J. Tu, "Design of a coherent receiver based on InAs electron avalanche photodiode for free-space optical communications," IEEE Trans. Electron Devices 62(6), 1932–1938 (2015). [CrossRef]
23. B. P. Smith, A. Farhood, A. Hunt, F. R. Kschischang, and J. Lodge, "Staircase codes: FEC for 100 Gb/s OTN," J. Lightwave Technol. 30(1), 110–117 (2012). [CrossRef]
D. Thomson, A. Zilkie, J. E. Bowers, T. Komljenovic, G. T. Reed, L. Vivien, D. Marris-Morini, E. Cassan, L. Virot, J.-M. Fédéli, J.-M. Hartmann, J. H. Schmid, D.-X. Xu, F. Boeuf, P. O'Brien, G. Z. Mashanovich, and M. Z. Nedeljkovic, "Roadmap on silicon photonics," J. Opt. 18(7), 073003 (2016).
Payman Zarkesh-Ha, Robert Efroymson, Earl Fuller, Joe C. Campbell, and Majeed M. Hayat, "5.2 dB Sensitivity Enhancement in 25Gbps APD-Based Optical Receiver Using Dynamic Biasing," Optical Fiber Communication Conference. Optical Society of America, 2020.
Z. Huang, C. Li, D. Liang, K. Yu, C. Santori, M. Fiorentino, W. Sorin, S. Palermo, and R. G. Beausoleil, "25 Gbps low-voltage waveguide Si–Ge avalanche photodiode," Optica 3(8), 793–798 (2016).
Y. Kang, H. D. Liu, M. Morse, M. J. Paniccia, M. Zadka, S. Litski, G. Sarid, A. Pauchard, Y. H. Kuo, H. W. Chen, W. S. Zaoui, J. E. Bowers, A. Beling, D. C. McIntosh, X. Zheng, and J. C. Campbell, "Monolithic germanium/silicon avalanche photodiodes with 340 GHz gain–bandwidth product," Nat. Photonics 3(1), 59–63 (2009).
M. Huang, Su. Li, C. Pengfei, H. Guanghui, S. Tzung-I, W. Chen, C.-y. Hong, and D. Pan, "Germanium on silicon avalanche photodiode," IEEE J. Sel. Top. Quantum Electron. 24(2), 1–11 (2018).
Sungbong Park, Yann Malinge, Olufemi Dosunmu, Gregory Lovell, Seth Slavin, Kelly Magruder, Yimin Kang, and Ansheng Liu, "50-Gbps receiver subsystem using Ge/Si avalanche photodiode and integrated bypass capacitor," 2019 Optical Fiber Communications Conference and Exhibition (OFC). IEEE, 2019.
Binhao Wang, Zhihong Huang, Xiaoge Zeng, Di Liang, Marco Fiorentino, Wayne V. Sorin, and Raymond G. Beausoleil, "50 Gb/s PAM4 low-voltage Si-Ge avalanche photodiode," CLEO: Science and Innovations. Optical Society of America, 2019.
Zhihong Huang, Binhao Wang, Yuan Yuan, Di Liang, Marco Fiorentino, and Raymond G. Beausoleil, "64Gbps PAM4 Modulation for a Low Energy Si-Ge Waveguide APD with Distributed Bragg Reflectors," Optical Fiber Communication Conference. Optical Society of America, 2020.
Yuan Yuan, Zhihong Huang, Binhao Wang, Wayne Sorin, Di Liang, Joe C. Campbell, and Raymond G. Beausoleil, "Superior Temperature Performance of Si-Ge Waveguide Avalanche Photodiodes at 64Gbps PAM4 Operation," 2020 Optical Fiber Communications Conference and Exhibition (OFC). IEEE, 2020.
N. J. D. Martinez, C. T. Derose, R. W. Brock, A. L. Starbuck, A. T. Pomerene, A. L. Lentine, D. C. Trotter, and P. S. Davids, "High performance waveguide-coupled Ge-on-Si linear mode avalanche photodiodes," Opt. Express 24(17), 19072–19081 (2016).
A. E.-J. Lim, J. Song, Q. Fang, C. Li, X. Tu, N. Duan, K. K. Chen, R. P.-C. T. Tern, and T. Y. Liow, "Review of silicon photonics foundry efforts," IEEE J. Sel. Top. Quantum Electron. 20(4), 405–416 (2014).
H. Melchior and W. T. Lynch, "Signal and noise response of high speed germanium avalanche photodiodes," IEEE Trans. Electron Devices ED-13(12), 829–838 (1966).
S. Assefa, F. Xia, and Y. A. Vlasov, "Reinventing germanium avalanche photodetector for nanophotonic on-chip optical interconnects," Nature 464(7285), 80–84 (2010).
L. Virot, P. Crozat, J.-M.. Fédéli, J.-M. Hartmann, D. Marris-Morini, E. Cassan, F. Boeuf, and L. Vivien, "Germanium avalanche receiver for low power interconnects," Nat. Commun. 5(1), 4957 (2014).
H. T. Chen, J. Verbist, P. Verheyen, P. D. Heyn, G. Lepage, J. D. Coster, P. Absil, X. Yin, J. Bauwelinck, J. Van Campenhout, and G. Roelkens, "High sensitivity 10Gb/s Si photonic receiver based on a low-voltage waveguide-coupled Ge avalanche photodetector," Opt. Express 23(2), 815–822 (2015).
R. Van Overstraeten and H. De Man, "Measurement of the ionization rates in diffused silicon pn junctions," Solid-State Electron. 13(5), 583–608 (1970).
D. R. Decker and C. N. Dunn, "Determination of germanium ionization coefficients from small-signal IMPATT diode characteristics," IEEE Trans. Electron Devices 17(4), 290–299 (1970).
M. M. Hayat, W. L. Sargeant, and B. E. Saleh, "Effect of dead space on gain and noise in Si and GaAs avalanche photodiodes," IEEE J. Quantum Electron. 28(5), 1360–1365 (1992).
M. M. Hayat, O.-H. Kwon, S. Wang, J. C. Campbell, B. E. A. Saleh, and M. C. Teich, "Boundary effects on multiplication noise in thin heterostructure avalanche photodiodes: theory and experiment [Al/sub 0.6/Ga/sub 0.4/As/GaAs]," IEEE Trans. Electron Devices 49(12), 2114–2123 (2002).
M. Teich, K. Matsuo, and B. Saleh, "Excess noise factors for conventional and superlattice avalanche photodiodes and photomultiplier tubes," IEEE J. Quantum Electron. 22(8), 1184–1193 (1986).
R. J. McIntyre, "Multiplication noise in uniform avalanche diodes," IEEE Trans. Electron Devices ED-13(1), 164–168 (1966).
K. Wen, Y. Zhao, J. Gao, S. Zhang, and J. Tu, "Design of a coherent receiver based on InAs electron avalanche photodiode for free-space optical communications," IEEE Trans. Electron Devices 62(6), 1932–1938 (2015).
B. P. Smith, A. Farhood, A. Hunt, F. R. Kschischang, and J. Lodge, "Staircase codes: FEC for 100 Gb/s OTN," J. Lightwave Technol. 30(1), 110–117 (2012).
Absil, P.
Assefa, S.
Bauwelinck, J.
Beausoleil, R. G.
Beausoleil, Raymond G.
Beling, A.
Boeuf, F.
Bowers, J. E.
Brock, R. W.
Campbell, J. C.
Campbell, Joe C.
Cassan, E.
Chen, H. T.
Chen, H. W.
Chen, K. K.
Chen, W.
Coster, J. D.
Crozat, P.
Davids, P. S.
De Man, H.
Decker, D. R.
Derose, C. T.
Dosunmu, Olufemi
Duan, N.
Dunn, C. N.
Efroymson, Robert
Fang, Q.
Farhood, A.
Fédéli, J.-M.
Fédéli, J.-M..
Fiorentino, M.
Fiorentino, Marco
Fuller, Earl
Gao, J.
Guanghui, H.
Hartmann, J.-M.
Hayat, M. M.
Hayat, Majeed M.
Heyn, P. D.
Hong, C.-y.
Huang, M.
Huang, Z.
Huang, Zhihong
Hunt, A.
Kang, Y.
Kang, Yimin
Komljenovic, T.
Kschischang, F. R.
Kuo, Y. H.
Kwon, O.-H.
Lentine, A. L.
Lepage, G.
Li, C.
Li, Su.
Liang, D.
Liang, Di
Lim, A. E.-J.
Liow, T. Y.
Litski, S.
Liu, Ansheng
Liu, H. D.
Lodge, J.
Lovell, Gregory
Lynch, W. T.
Magruder, Kelly
Malinge, Yann
Marris-Morini, D.
Martinez, N. J. D.
Mashanovich, G. Z.
Matsuo, K.
McIntosh, D. C.
McIntyre, R. J.
Melchior, H.
Morse, M.
Nedeljkovic, M. Z.
O'Brien, P.
Palermo, S.
Pan, D.
Paniccia, M. J.
Park, Sungbong
Pauchard, A.
Pengfei, C.
Pomerene, A. T.
Reed, G. T.
Roelkens, G.
Saleh, B.
Saleh, B. E.
Saleh, B. E. A.
Santori, C.
Sargeant, W. L.
Sarid, G.
Schmid, J. H.
Slavin, Seth
Smith, B. P.
Song, J.
Sorin, W.
Sorin, Wayne
Sorin, Wayne V.
Starbuck, A. L.
Teich, M.
Teich, M. C.
Tern, R. P.-C. T.
Thomson, D.
Trotter, D. C.
Tu, J.
Tu, X.
Tzung-I, S.
Van Campenhout, J.
Van Overstraeten, R.
Verbist, J.
Verheyen, P.
Virot, L.
Vivien, L.
Vlasov, Y. A.
Wang, Binhao
Wang, S.
Wen, K.
Xia, F.
Xu, D.-X.
Yin, X.
Yu, K.
Yuan, Yuan
Zadka, M.
Zaoui, W. S.
Zarkesh-Ha, Payman
Zeng, Xiaoge
Zhang, S.
Zhao, Y.
Zheng, X.
Zilkie, A.
IEEE J. Quantum Electron. (2)
IEEE Trans. Electron Devices (5)
J. Opt. (1)
Nat. Commun. (1)
Nat. Photonics (1)
Solid-State Electron. (1)
(1) N t = 2 q ( I D + I L ) M 2 F R L Δ f + N t h e r m a l + N R I N
(2) F = M k e f f + ( 2 − 1 / 1 M M ) ( 1 − k e f f )
|
CommonCrawl
|
Computer Science Meta
Computer Science Stack Exchange is a question and answer site for students, researchers and practitioners of computer science. It only takes a minute to sign up.
What is the significance of context-sensitive (Type 1) languages?
Seeing that in the Chomsky Hierarchy Type 3 languages can be recognised by a state machine with no external memory (i.e., a finite automaton), Type 2 by a state machine with a single stack (i.e. a push-down automaton) and Type 0 by a state machine with two stacks (or, equivalently, a tape, as is the case for Turing Machines), how do Type 1 languages fit into this picture? And what advantages does it bring to determine that a language is not only Type 0 but Type 1?
formal-languages applied-theory computability automata formal-grammars
Patrick87
bitmaskbitmask
$\begingroup$ Since you are asking here and not in cstheory.SE (as suggested by @Sunil), I suggest you also add a brief description/definition of Type 1, which might not be a familiar term for everyone. $\endgroup$ – Janoma Mar 6 '12 at 20:54
$\begingroup$ @Sunil No, it would not. This is not a research level question (and even if it were, it would still be on topic here because we do not exclude research level questions - at least that's what I remember to have been the result of the discussion on area51). $\endgroup$ – sepp2k Mar 6 '12 at 20:57
$\begingroup$ @Janoma: Why should it help to include information that can be easily looked up (wouldn't that count as noise)? $\endgroup$ – bitmask Mar 6 '12 at 21:00
$\begingroup$ @Janoma I think the general guideline should be to explain concepts that someone who'd be able to answer the question might not know (if the guideline were to explain everything that some users of the site might not know, we'd be explaining everything all the time and that's certainly not the standard on other SE sites). And I don't think that someone who does not know the Chomsky hierarchy would be able to answer the question. Of course it doesn't hurt to explain as much as possible (as long as it doesn't make the question tediously long) - I just don't think it's necessary in this case. $\endgroup$ – sepp2k Mar 6 '12 at 21:02
$\begingroup$ Every computer science major knows (or should know) the Chomsky hierarchy. Everyone else can look it up in 20s. A link to maybe Wikipedia should suffice here. $\endgroup$ – Raphael♦ Mar 6 '12 at 22:49
The context-sensitive languages are exactly the languages that can be recognized by a Turing machine using linear space and non-determinism. You can simulate such a Turing machine using exponential time, so you can recognize any such language in exponential time. Do note that the problem of recognizing some context-sensitive languages is $PSPACE$-complete, which means we're pretty sure you can't do better than exponential time.
Comparing this to type 0 languages, this means you can at least say something about how long it takes to recognize the language. A type 0 language may not even be decidable: the language of all Turing machines that halt is a type 0 language, but as recognizing this language is exactly the halting problem, it is not decidable.
Context-sensitive grammars are not very useful in practice. Context-free grammars are intuitive to work with, but as the examples on Wikipedia show, context-sensitive grammars very quickly become rather messy. Programs using polynomial space are much more easily designed (and the $PSPACE$-completeness guarantees the existence of some equivalent CSG that is only polynomially larger than the space usage of your algorithm).
The reason for their existence is that they form a very natural extension of context-free grammars (you allow context to determine which productions are valid). This will probably have inspired Chomsky to define them and name them the type 1 languages. Remember that this definition was made before computers became as fast as they are today: it's more of interest to formal language theorists than for programmers.
Unrestricted grammars get even weirder: there's no longer a notion of 'expanding' a nonterminal and replacing it with a production, possibly depending on context. You're allowed to freely modify the context as well. This makes unrestricted grammars even less intuitive to work with: programs are equivalent and a lot more intuitive.
Alex ten BrinkAlex ten Brink
$\begingroup$ But context-sensitive languages are useful! See, for instance, this discussion. $\endgroup$ – Raphael♦ Mar 7 '12 at 16:34
$\begingroup$ Context sensitivity is useful, but context-sensitive grammars as a way to describe languages is not very useful IMO. You're way better off using some other means to describe context-sensitive features. $\endgroup$ – Alex ten Brink Mar 7 '12 at 16:37
$\begingroup$ But you talk about languages in most parts of your answer. Regarding grammars, ymmw. There are grammar models between CFG and CSG that have natural modelling applications, e.g. coupled-/multi-CFG. $\endgroup$ – Raphael♦ Mar 7 '12 at 16:44
$\begingroup$ You're right, I've been sloppy with the distinction between languages and grammars I see. I've updated my answer. $\endgroup$ – Alex ten Brink Mar 7 '12 at 16:50
Generally speaking, you usually want to know the smaller class to which a given language $L$ belongs. This is because smaller classes can be recognized/accepted/generated by simpler mechanisms (automata, grammars, regular expressions, etc.), which is desirable.
For example, the class of regular languages has good closure properties, and given a DFA $\mathcal{A}$ you can test in linear time that a word belongs to $L(\mathcal{A})$. In contrast, with a Turing machine you need linear time just to read the output, which usually happens before it actually starts processing.
In short, for smaller classes you need less computational power to solve the problem of deciding whether a word belongs to the language.
According to Wikipedia, Chomsky defined context-sensitive grammars (i.e. Type 1) to describe the syntax of natural languages. This is a bit different than with other classes of languages, which were introduced to describe families of strings that were used in mathematics (e.g. the syntax of arithmetic formulae) instead of natural languages (e.g. the syntax of a gramatically-correct sentence in English).
JanomaJanoma
$\begingroup$ "In short, for smaller classes you need less computational power to solve the problem of deciding whether a word belongs to the language." exactly, but how does this apply to Type 1 versus Type 0? That's exactly the question! $\endgroup$ – bitmask Mar 6 '12 at 21:15
$\begingroup$ Well, if you know beforehand that a language can be recognized by a TM that only uses linear space, that gives you an advantage in terms of implementation (for example, scalability). Also, if you're interested in proving theoretical properties, you can just take a constant $c$ such that the TM uses space $cn$ and analyze accordingly. That is not possible for a general TM, or for a generic Type 0 language. $\endgroup$ – Janoma Mar 6 '12 at 21:18
In context-free languages, at any point of the input parsing, the automaton is in a state defined by its stack. Each production has the same behaviour in consuming the input regardless of where it is used.
This leads to the interesting property that each production generates a sub-language of the one generated by the ones that are deeper in the stack and thus for each pair A and B of productions generated and consumed on any particular input we have three possible cases:
a: The input consumed by A is completely contained in the input consumed by B; or
b: The input consumed by A completely contains the input consumed by B; or
c: The input consumed by A is completely disjoint from the input consumed by B.
This implies that the following never happens:
d: The input consumed by A partially overlaps the input consumed by B.
Contrasting to that, in context-sensitive languages, the behaviour of each production depends on where it is used, so the input consumed in a production is not a sub-language of the ones deeper in the stack (in fact, processing it with a stack would not work). And we have that possibility d may happen.
In the real world, a case where a context-sensitive language would make sense is something like denoting <b>bold text</b>, <i>italic text</i> and <u>underlined text</u> with these html tags and let them overlap, like "This is a <u>text with <i>mixed</u> overlapping tags</i>." Observe that to parse that and find if all the starting tags match the ending tags, a PDA won't do because it is not context-free, but an LBA will easily do.
Closure Properties
Of all language classes from the Chomsky hierarchy, only regular and context-sensitive languages are closed under complementation. Hence this is a sort of unique feature of context-sensitive languages.
In contrast to context-free languages, CS are also closed under intersection and shuffle product.
SebastianSebastian
Any language that is type 1 can be recognized by a Turing machine that only uses linear space (so-called linear bounded automata).
SureshSuresh
$\begingroup$ Yes, that's the definition. But how does this restriction help me? $\endgroup$ – bitmask Mar 6 '12 at 21:12
$\begingroup$ it helps me because it limits the power of algorithms recognizing CSGs to E instead of EXP. I don't know how it helps you :) $\endgroup$ – Suresh Mar 6 '12 at 21:20
Type 1 languages can be decided by linear bounded automata, which are non-deterministic Turing machines that may only use a portion of the tape whose size is linear to the input size.
sepp2ksepp2k
The Chomsky hierarchy classifies grammars more than languages. However it was not designed to have something to do with the number of tapes a automaton should have to recognize it, as you suggested for Type 2 and 3, even if there is a kind of Turing machine that does that for Type-1 grammars.
You should also note that the languages of Type-0 grammars are not all recognized by a Turing machine, but they only can be enumerated by such a machine: Type-0 means recursively enumerable, and Turing machines only recognize recursive languages.
jmadjmad
Modern programming language use context-sensitive features all the time; they fall into a subset that can efficiently be decided.
Examples are name and type analysis and type inference.
Raphael♦Raphael
Many others have mentioned that Type-1 languages are those that can be recognised by linear bounded automata. The halting problem is decidable for linear bounded automata, which in turn means many other properties that are computationally undecidable for languages recognised by Turning Machines are decidable for Type-1 languages.
Admittedly the proof that the halting problem is decidable for linear bounded automata relies on the fact that with a finite amount of tape they can only enter a finite number of states, so if they don't halt within that many steps you know they're looping and won't ever halt. This proof technically applies to all actual computers (which also have finite memory), but that isn't of any practical benefit in solving the halting problem for the programs that run on them.
Thanks for contributing an answer to Computer Science Stack Exchange!
Not the answer you're looking for? Browse other questions tagged formal-languages applied-theory computability automata formal-grammars or ask your own question.
Existence of non-context free but decidable languages
Determining capabilities of a min-heap (or other exotic) state machines
Computational power of deterministic versus nondeterministic min-heap automata
Computational power of nondeterministic type-1 min-heap automata with multiple heaps
Machines for context-free languages which gain no extra power from nondeterminism
Expressive power of formal systems
When did "regular" start referring to Type 3 languages/grammars?
Are all finitely recursive context free languages parseable with a regexp?
Is there a difference between the equivalent automaton of a grammar and an automaton which accepts the language produced by the grammar?
|
CommonCrawl
|
History of Science and Mathematics Stack Exchange is a question and answer site for people interested in the history and origins of science and mathematics. It only takes a minute to sign up.
What new mathematics was inspired by biology and chemistry?
While physics and astronomy sported mathematical models for centuries mathematical chemistry and biology appeared relatively recently. Most of the interaction seems to go one way, established mathematical theories (differential equations, combinatorics, graph theory, etc.) are applied to formalize and solve chemical and biological problems. I am interested in the reverse effect: development of new mathematical theories inspired by chemistry and biology, like trigonometry was inspired by astronomy, or calculus by physics.
One example I know of is genetic algebras of Etherington that describe the structure of genetic inheritance. They are structurally different from algebras that emerged from physical applications or inner workings of mathematics, for example non-associative in ways distinct from Lie, Jordan or alternative algebras, and rely on different intuitions. Are there other such examples?
mathematics discoveries chemistry biology
ConifoldConifold
$\begingroup$ Does "mathematics" here, include Computer Science? CS wasn't really a distinct field from Math until the late 70's, and in many ways it still is a type of mathematics. $\endgroup$ – RBarryYoung Jun 29 '15 at 14:59
$\begingroup$ @RBarryYoung Sure, especially where computational methods are concerned. $\endgroup$ – Conifold Jul 3 '15 at 0:18
In 1959, Eugene Wigner presented a talk on The Unreasonable Effectiveness of Mathematics in the Natural Sciences. Many papers on the unreasonable effectiveness of mathematics in this field, that field, and some other field quickly followed suit. As an opposing point of view, the unreasonably effective (800+ papers, 30+ books) mathematician I.M.Gelfand noted that (emphasis mine)
Eugene Wigner wrote a famous essay on the unreasonable effectiveness of mathematics in natural sciences. He meant physics, of course. There is only one thing which is more unreasonable than the unreasonable effectiveness of mathematics in physics, and this is the unreasonable ineffectiveness of mathematics in biology.
I.M. Gelfand said this after developing an interest in biology due to the premature death of his son. He organized a weekly seminar that attracted the best minds in Russian biology and mathematics. Gelfand's interest in biology resulted in pioneering work in the field of biomathematics. Not only is mathematics applicable to some aspects of biology, biology has motivated many new developments in mathematics.
Just a few of the areas where biology has inspired new mathematics:
Mathematics inspired by population modeling
Modeling population dynamics has been a fruitful application of mathematics starting with Euler, who studied age distributions in stable populations (Euler 1760). What about unstable populations? One of the seminal papers (if not the seminal paper) in the development of chaos theory was written by Robert May in 1976. May was educated as a physicist (where mathematics is unreasonably effective), but then switched to biology (where mathematics is supposedly unreasonably ineffective). His paper on population dynamics (May 1976) discusses the logistic map, $x_{n+1} = \lambda x_n(1-x_n)$, which he used to model population dynamics. This paper marked the beginning of chaos theory.
Mathematical techniques inspired by evolution and animal behavior
The contributions of biology to optimization theory are immense. Evolution has motivated a number of techniques used in mathematics and artificial intelligence, including evolutionary programming (Fogel 1966), evolutionary strategy (Rechenberg 1973), genetic algorithms (Holland 1975).
Emulating animal behavior has provided a number of other optimization techniques. These include the critter of the month optimization techniques, starting with ant colony optimization (Dorigo 1996). Ant colony optimization has been used to attack a large number of problems from very diverse fields. Now there are bees algorithms, bacterial colony optimization algorithms, foraging algorithms, all based on the behaviors of simple creatures. I won't give references for all of these; there are now entire journals dedicated to this subject (e.g., the IEEE Transactions on Evolutionary Computation). Collectively, these fall into the category of swarm intelligence. One last technique that I will give a reference on is particle swarm optimization (Kennedy 1995, Eberhart 1996).
Mathematics inspired by DNA sequencing
The above areas are new areas of applied mathematics. The problem of how to sequence DNA produced not only new applied mathematics (e.g., the image below from Letunic 2007) but new theoretical mathematics as well.
All of these developments led Joel Cohen (Cohen 2004) to conjecture that mathematics is biology's next microscope, only better; biology is mathematics' next physics, only better.
Joel E. Cohen (2004), "Mathematics is biology's next microscope, only better; biology is mathematics' next physics, only better." PLoS Biology 2.12:e439.
Marco Dorigo, et al. (1996), "Ant system: optimization by a colony of cooperating agents," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 26.1:29-41.
Leonard Euler (1760), "Recherches générales sur la mortalité et la multiplication," Mémoires de l'Académie Royal des Sciences et Belles Lettres 16:144–164.
Russell Eberhart and James Kennedy, "A new optimizer using particle swarm theory," Proceedings of the Sixth International Symposium on Micro Machine and Human Science.
Lawrence J. Fogel, et al. (1966), "Artificial intelligence through simulated evolution," Wiley.
John H. Holland, (1975), "Adaptation in Natural and Artificial Systems," University of Michigan Press (second edition, MIT Press, 1992).
James Kennedy and Russel Eberhart (1995), "Particle swarm optimization," Proc. IEEE International Conf. on Neural Networks.
Ivica Letunic and Peer Bork (2007), "Interactive Tree Of Life (iTOL): an online tool for phylogenetic tree display and annotation," Bioinformatics 23.1:127-128.
Robert M. May (1976), "Simple mathematical models with very complicated dynamics," Nature 261.5560:459-467.
Ingo Rechenberg (1973), "Evolutionsstrategie Optimierung technischer Systeme nach Prinzipien der biologischen Evolution," (PhD thesis), Friedrich Frommann Verlag, Struttgart-Bad Cannstatt.
Eugene P. Wigner (1960), "The unreasonable effectiveness of mathematics in the natural sciences," Communications on Pure and Applied Mathematics 13.1:1-14. Richard courant lecture in mathematical sciences delivered at New York University, May 11, 1959.
Rodrigo de Azevedo
David HammenDavid Hammen
$\begingroup$ Vow, what a comprehensive answer! Frankly, I was always puzzled by Wigner's title, how is effectiveness of mathematics in physics unreasonable if so much of mathematics was explicitly developed to model physics. Biology makes a better case, although of course biological laws are still ultimately based on the laws of physics. $\endgroup$ – Conifold Feb 26 '15 at 18:43
$\begingroup$ It's Eugene Wigner, not Alfred Wigner. $\endgroup$ – KCd May 8 '20 at 23:47
This is a correct observation. Chemistry and biology indeed contributed very little to mathematics itself.
One of the examples of chemistry contribution is the "Belousov-Zhabotinsky reaction". This was an experimental discovery whose explanation stimulated to some extent the development of the theory of dynamical systems (known as "chaos theory" in the popular literature). A lot of sophisticated mathematics was invented to explain the Periodic Table. But this was mostly applications of mathematics TO chemistry; I cannot say that chemistry brought new ideas to mathematics.
All examples from biology which come to my mind are about population genetics, the thing mentioned in the question. This is also related to dynamical systems and related algebra (Bernstein algebras, for example).
Volterra-Lottka systems (also from population biology) stimulated the qualitative theory of differential equations in the first half of 20-s century.
But most of it again works one-way: applications of mathematics TO biology etc. In the best case, chemistry and biology provide QUESTIONS to which mathematicians sometimes find answers. But no really new ideas in mathematics that originate in chemistry/biology.
It seems that there are no new mathematical theories (of importance to mathematics itself, with no regard to applications) which came from chemistry or biology.
This is indeed very different from physics which constantly feeds mathematics with new ideas. So I just confirm your observation.
Alexandre EremenkoAlexandre Eremenko
$\begingroup$ I wonder why. Because biology and chemistry are inherently "less mathematical" or physics already plucked all the low hanging fruit so what otherwise might have been inspired is already developed and ready to use. I thought molecular biology and quantum chemistry might have contributed because QM descriptions are too fundamental for what they study, and graph theory is too schematic. $\endgroup$ – Conifold Feb 18 '15 at 17:27
$\begingroup$ This is an interesting question, and the answer is not clear. Much of the existing mathematics was created under the influence of physics. Perhaps biology and even chemistry are "too young" for this? $\endgroup$ – Alexandre Eremenko Feb 18 '15 at 19:16
How about: (1) the logistic equation of Pierre François Verhulst (describing the change in a population over time, published 1838) leads to (2) Feigenbaum's work on chaos.
Gerald EdgarGerald Edgar
$\begingroup$ This is very interesting, I didn't realize that Feigenbaum universality was biology motivated. Could you expand and give a reference. $\endgroup$ – Conifold Feb 18 '15 at 21:59
$\begingroup$ Fiegenbaum was my undergrad research advisor, long ago (1977-1978). I had expressed an interest in combining software and statistical physics for my senior research project. I was assigned to him, and he asked me to investigate the behavior of $x_{n+1} = \lambda x_n (1-x_n)$. I thought what?!? That can't be a good research problem. It was. If search tools were anything like what they are now, I would have quickly run across Robert May's seminal 1976 paper. As it was, Feigenbaum showed me that paper only after I had spent a semester developing software to study that seemingly simple equation. $\endgroup$ – David Hammen Feb 21 '15 at 4:57
$\begingroup$ Plus one, by the way. $\endgroup$ – David Hammen Feb 21 '15 at 5:09
The famous Polya's 1937 paper "Kombinatorische Anzahlbestimmungen für Gruppen, Graphen und chemische Verbindungen" is considered, as far as I understand, as one of pillars of the modern theory of combinatorial enumeration. The title of the paper suggests that, at least to some degree, it was motivated by a question from chemistry.
Genetic algorithms.
Some algebro-geometric questions from algebraic statistics seem to be motivated by biological considerations.
My personal impression (based mainly on hearing people from a few IHES conferences devoted to interaction of mathematics and biology, and personal experience from working in a biotech industry) confirms Alexandre Eremenko's comment that "biology is too young": the structure of biological knowledge is mainly descriptive, as opposed to the physical one, organized in "theories".
Pasha ZusmanovichPasha Zusmanovich
Thanks for contributing an answer to History of Science and Mathematics Stack Exchange!
Not the answer you're looking for? Browse other questions tagged mathematics discoveries chemistry biology or ask your own question.
How Has Science Influenced or Helped Mathematics
To what degree did ancient Egyptians use Cement Chemistry
Are there historical examples of thought experiments in biology and chemistry?
How was the composition and structure of water determined?
What ever happened to the methylphenidate synthesiser and his family?
What is the origin of "ortho-," "meta-," "para-," in chemistry?
Different sign conventions in thermodynamics different for physics and chemistry
With what microscope and pollen was the Browinian motion discovered?
What insight of Watson and Crick was missed by Franklin?
What inspired Nicholson to break water into hydrogen and oxygen with electricity?
What was considered Evolutionary Science or Biology proper in 1880-1890 in the US?
|
CommonCrawl
|
The social perception of urban transport in the city of Madrid: the application of the Servicescape Model to the bus and underground services
María Luisa Delgado Jalón1,
Alba Gómez Ortega ORCID: orcid.org/0000-0002-0153-80001 &
Javier De Esteban Curiel1
This paper is a study of the user's perception of urban and suburban transport and was carried out in an attempt to compare the social value of various means of transport. The measurement of social perceptions is carried out with regard to ambient conditions, space, signs, front-line employees and other customers as stakeholders of both public transport services. Thus, this paper aims to identify the reasons for individuals' preferences when choosing one particular means of transport for their daily life.
The results show that user perceptions of the underground are slightly better than the perceptions of buses; the results also highlight the decent management of both means of transport in terms of quality air, temperature, space, noise, cleanliness, smell and seating facilities. Some modest improvements are recommended to enhance public transportation service delivery.
This approach reflects the gap between the social perception of a service and companies' financial situations. Management policies are necessary to improve the service's social value.
No one doubts the need for a collective transport system in cities. In any economy, regardless of its level of development, people must move around during their daily routine for both work and leisure purposes. When cities grow, the combined relationships between urban nuclei and metropolitan areas make this movement more difficult. Neirotti et al [51] state that cities currently must be identified as complex systems, with large populations, businesses, services and interconnected transport means. Mobility necessities may lead to problems in terms not only of congestion but also pollution, noise and economic cost [2, 7, 44]. These problems are worse in the urban sphere, in which the majority of the population is concentrated [35].
According to Diab and El-Geneidy [27], many transit agencies implement a number of strategies to provide an attractive transportation service.
Numerous studies have analysed business efficiency in the sector. Thus, there are studies from the perspective of costs and revenues [17, 24]; other studies analyse the efficiency of frontier models, i.e., the AEDs and the like [8, 40, 41]. On the other hand Sampaio, et al., [58] and [39] provide an interesting literature review of studies related to the analysis of efficiency in transport until 2011. However, few researchers have combined this work with the social value of the service provided or the users' perception of that value [13, 36, 54, 67]. The effectiveness of each applied transport policy depends significantly on the level of agreement among stakeholders, making collaboration a prerequisite for success [57]. This motivated our interest in carrying out our own study, which is focused on analysing the value perceived by final users.
Our objective is to obtain a profile of the demand for transport service and compare the value of public transport in its various forms as bus and underground services as perceived by the user. In this way, the operating entities can use the competitive advantage that knowing user preferences represents and thus increase its appeal.
The perception of social value in collective urban transport
From the social point of view, transport has a tremendous influence on social relationships in increasing individuals' social and cultural possibilities. It has made it possible to separate the distance between the workplace and the home to a much greater extent than ever before, thus enabling a more comfortable lifestyle [64]. Boniface et al. [13], review the evidence that transport impacts social interactions and that social interactions impact health. Utsunomiya [67] has attempted to determine the role of local public transportation beyond social benefits.
Transforming cities and understanding the role of public transport services from the perspective of cities in an international context by embedding social and environmental perspectives is becoming highly challenging [54].
In short, many papers analyse the social value of transport and its effects: waiting times [65], pollution reduction, environmental impact [13, 21], social integration and interconnection [23, 47, 67]. However, there are few and more recent studies that attempt to analyse the social value from the users' perception and feelings.
Social value from a user's perspective has been analysed mainly by studies focused on service quality. Hence, Dell'Olio et al. [26] carry out a quality analysis based on the services desired by users and potential users of public transport. In addition, Abenoza et al. [1], through a cluster analysis, re-identify commuting time as one of the main determinants of satisfaction in the Swedish public transport service. Other studies related to quality and user perception are [25], [28], [55], [43], [50], and [36], among others.
Hernández et al. [36] consider that in order to define an efficient transport interchange it is necessary to identify the key factors from both a functional and a psychological perspective, since the users' perceptions of their experience are particularly important as regards achieving the most appropriate policy measures for interchanges. Gatersleben et al. [31] explore the idea that such judgements are affected by the means of transport they use. Iseki and Taylor [38] analyse ways of reducing the perceived burdens of out-of-vehicle time spent walking, waiting, and transferring to improve users' experience at transit stops and stations.
All of the above led us to the conclusion that it is necessary to use new perspectives to manage transport, perspectives focused on more demanding user-oriented approaches in order to discover the degree to which customers/users' perceptions and expectations of quality influence their decisions when travelling.
We have carried out the study from the perspective of social perception, applied specifically to the city of Madrid and using the Bitner Model, with the Servicescape Model, as a basis. This has allowed us to obtain the social perceptions of a determined service. This model was chosen for various reasons: first, it is a relatively recent and innovative model and is continuously being put into practice in various sectors and research fields; second, although it has been applied to numerous types of services, it has not, to date, been applied to our study object: urban transport as a public service. Lastly, it is an appropriate means of analysing the social perception of transport service, thus enabling us to uncover the motives behind users' preferences for either buses or the underground service.
The Servicescape model
The core of the Servicescape Model is that it considers elements that conform to the environment in which the service is produced in order to understand consumers' behaviour and observe how it influences certain changes in their purchase decisions. By 'environment' we mean both the physical space in which the service is produced and the interaction among employees, the consumer and other consumers.
The influence of the physical environment on consumers has been recognised in marketing, fundamentally in retail sales and in organisational contexts. In the last decades of the twentieth century, numerous psychologists explored the impact of the physical space and its influence on consumer behaviour [9, 42, 60, 66]. In 1973, Kotler was one of the first to suggest that the place in which a product is consumed influences consumers' purchase decisions. He introduced the term 'atmosphere' to describe "the conscious designing of a space to create certain effects on buyers".
Gardner [30], states that small changes in the physical environment may influence consumers and their state of mind when making a purchase. Spies et al. [62] detected that the atmosphere does not affect the total amount of money spent, but only the amount spent on impulse buys. According to these authors, customers spend more money on impulse buys in more pleasant environments.
In 1992, Bitner published the Servicescape conceptual framework (Fig. 1), into which she integrated empirical findings and compiled theories along with a theory that she had developed herself, and this became one of the most widely recognised concepts in the research in this field. Bitner justifies her work with the idea that the physical environment is of great importance to service companies since the product is "produced and consumed simultaneously". Behaviour such as interaction in small groups, the forming of friendships, participation, aggression, withdrawal and the wish to help others have proved to be influenced by environmental conditions [37].
Bitner Servicescape Model (Source: Bitner [10])
In addition, the people who participate in that environment may give form to and influence the physical space itself and its impact. This 'social atmosphere' is included in a wider definition of Servicescape, in which the influence of other consumers on the consumer in question's perception of the service is also taken into consideration [5, 6]. According to Šimeček et al. [61], a model that would reflect reality more reliably would include relevant psychological factors (such as perceptions or attitudes) or generally human elements influencing theirs choices. Although it would be possible to broaden the concept of Servicescape to include the natural, cultural, temporal or political environments, these definitions of the environment are not within the scope of the present effort [63].
Practical application of the model
It is now necessary to examine the works that show the practical application of the model and provide a solid basis for its application in this research. According to Reimer and Kuehn [56], although previous studies have revealed the importance of Servicescape, its effect on the quality perceived in the service had not been adequately analysed. Pantouvakis [53] attempted to evaluate the relative importance of the various dimensions of service quality by proposing a new Servicescape model that would place more importance on the physical attributes of the environment than in previous studies.
Lin and Worthley [45] examined various Servicescapes as a moderated variable in an integral model of the individual features of personality, emotions, satisfaction and approximation-rejection behaviours. Orth et al. [52], meanwhile, explored the interior design of commercial services and environments and tested a conceptual model related the types of interior designs, including the consumers' impressions and the personality of that environment.
The study by Choo and Petrick [20] on social interactions and the intention to repeat the experience of agro-tourism approached how interactions influence satisfaction. These authors believe that it is necessary to focus on encounters in which the customer interacts with the personnel and/or other customers [12]. According to the marketing literature, employees' interpersonal skills affect customer satisfaction and their behaviour [11, 14], and customers influence each other either indirectly as a part of the environment or directly through interpersonal encounters. Brunner et al. [15] carried out a study in the tourism sector and found that the consumer attempted not only to find professionalism in the provision of the service but also to attain satisfaction as regards emotional experiences, and they considered that it is important to know the antecedents and consequences of customer satisfaction.
Having carried out the above literature review, we thought it would be useful to use this model in our study, since it would appear to be an appropriate means of analysing the user's final social perception of bus service when compared to underground service. We have applied the model by creating a questionnaire that attempts to consider all the contributions developed by the various authors mentioned previously over the last few years.
Definition of hypothesis
There is substantial literature providing evidence that people tend to prefer rail transit to bus transit ([4, 16, 59]; among others), even when service characteristics are similar. With that as a starting point, it would be valuable to determine which elements of the public transportation "atmosphere" are most valuable for public transportation users. This could help transit companies better tailor their marketing campaigns, as the paper suggests.
Thus, the hypothesis that we attempt to validate in this work is as follows:
Ho: The underground service is better valued socially than the bus service.
The analysis of the survey results will allow us not only to validate this hypothesis but also to shed light on some of the motives for these preferences, and on the importance and value that users give to different aspects of the service received.
Information-gathering technique: the face-to-face survey
The information-gathering technique employed was a questionnaire. We decided upon a face-to face survey. According to Creswell [22], a face-to-face survey design is a procedure in quantitative research in which the researcher surveys a sample of people in order to describe the population's attitudes, behaviour, opinions or characteristics. Precise samples are selected for this kind of surveying and attempts are made to standardise and eliminate errors from data-gathering tools [32]. Moreover, Hair et al. [34] state that a face-to-face survey design has several advantages, among them that the interviewer can explain confusing and complex questions using visual aids.
The questionnaire used for this survey was filtered beforehand by means of a pilot pre-test. This pre-test consisted in trying the questionnaire on a small sample of the objective public fairly divided among administrative personnel, academic research staff and university students. Next, a complete double validation was performed comprising a) qualitative analysis (surveyed individuals' perceptions) and quantitative (Chronbach's Alpha test) analysis of the obtained results, and b) expert validators' review, which approved the final content of the questionnaire.
The final questionnaire consisted of a formalised set of questions (Table 1) regarding each of the blocks in Servicescape.
Table 1 Design of the questionnaire applied in this research paper
The types of questions used were, on the whole, closed. Both dichotomous and multiple-choice questions were used. In the first part, the socio-demographic profile, the questions concerned classification in order to segment the population and discover any possible influence on the responses. A Likert scale was used in most of the questionnaire, and the responses had values of 1 to 5 according to the sense of the sentence. With regard to the open question, naming other questions, we had to create a code of responses in order to transform it into a closed response question (Table 2).
Table 2 Cronbach's Alpha of the questionnaires used for this research
The sample size has been determined by the optimal point between the size and the sampling error that might work out. We applied the following sampling error formula:
$$ \mathrm{K}=2\surd \left[\right(\mathrm{p}\ \left(1-\mathrm{p}\right)/\mathrm{n}\Big] $$
(Source: [29])
Assuming the sample size "n" is 250 for bus and 250 for underground. For "p", we assume maximum dispersion, i.e., all elements in the questions have the same probability of being chosen:
$$ \mathrm{p}=\mathrm{q}=0,5 $$
Working out the formula, "k" for the bus and underground survey is 6.4%. In sum, the results obtained in our sample fluctuate from +/− 6.4% at a 95% confidence level in the total population.
The non-response rate was approximately 2% and the negativity rate was between 10% and 20%. The definitive version obtained a Cronbach's Alpha [0.717–0.998], signifying that the questionnaire is highly acceptable (see Table 2). The Cronbach's Alpha obtained for the socio-demographic profile was low, although it is logical that there is a high dispersion of responses in this type of data, and it is even necessary and appropriate because it means that data have been collected for a wide variety of profiles of those surveyed, which makes them more representative. Some authors, such as Hair, Black, Babin, and Anderson [33], consider that obtaining a Cronbach's Alpha of over 0.6 is a good result and that it is perfectly possible to extrapolate the data obtained from the sample to the total population.
Technique used to analyse information gathered: statistical analysis
The first descriptive analysis carried out was divided into two blocks. On the one hand, we carried out a frequency analysis for those questions with non-quantifiable responses that generally corresponded to dichotomous questions and the socio-demographic profile. On the other hand, we carried out a measures and dispersion analysis for those questions with responses measurable on a scale of 1 to 5.
The bivariant analysis carried out was also divided into two blocks: a) an analysis by means of Contingency Tables and Pearson Chi-Square, and b) an analysis of the correlation among variables by means of the Pearson Coefficient of Linear Correlation.
The different categories of the variables that are represented in a contingency table must be exhaustive and mutually exclusive. That is, the set of categories of a categorical variable must be sufficient to classify each and every individual of which the sample population is formed (exhaustivity) [3, 18, 46]. The Pearson Chi-Square coefficient provides the statistical legitimacy needed to allow the results obtained from the contingency tables to be extrapolated from the population sample being studied. As argued by Martín-Pliego and Ruiz-Maya [48], the Chi-Square Test is used as a contrast of independence in the Contingency Tables to verify whether there is a relationship among the variables being studied in those tables.
The bivariant analysis was carried out in the cases of those contingency tables that had a significance associated with a Chi-Square of < 0.05. Lastly, we studied the correlation among the variables in the questionnaire, two by two, using the Pearson Coefficient of Linear Correlation. This allowed us to identify the degree of linear correlation among the variables. For the model robustness, a linear regression analysis was carried out. This allowed us to calculate the value of these two parameters, thus defining the straight line that best adjusted to the point cloud.
Lastly, all the information regarding the methodology used in this work is summarised in Table 3.
Table 3 Technical data sheet for the methodology used in this research
We have divided the results into the three blocks of the Servicescape Model: 1) User's first impressions, 2) 'Feeling' or Internal Responses, which are related to the users' more inner sensations, and 3) users' specific 'Behaviour' in response to the perceived service.
Block 1: 'first impressions'
The characteristics of the environment
When the user has a positive impression of a physiological aspect of a service, this generally influences the other aspects of the same typology. On a bus, the environment or landscape have a direct influence on the user's perception. On the underground, however, the characteristics of the external environment do not appear to affect the user's perception, given that its own pre-established subterranean atmosphere has already been created.
One of the most important aspects for the traveller, in terms of the interior design of trains, is the seats. As the user's acquisitive level increases, s/he demands a higher level of quality of noise, cleanliness and smell, comfort, space, temperature and safety. Furthermore, full-time workers are more demanding, as they need their journey to be relaxed and comfortable to alleviate routine stress. Moreover, the older the people surveyed, the more positively they valued the quality of the seats.
How employees treat users
The influence of how employees treat users on the perception of the service is not very significant in the case of the underground. With regards to the bus service, however, the response concerning how employees treat users was clearly more positive owing to the interaction with the drivers.
Symbols, signs and signals
The signposting at the ends of streets is highly positively valued. Furthermore, the indications of waiting times on screens are a clear priority for the users of both services. As for signposting inside the underground stations, users consider that this allows them to find connections between different lines easily, and this is more important for residents.
Relationship with other users
Another relevant factor is the traveller's own interaction with other service users. The most frequent profile corresponds to humble classes. In our opinion, this may discourage certain other members of the public, as they do not wish to be identified with this collective.
Block 2: 'feelings'
Approximately 90% of those surveyed stated that they knew the Metro de Madrid logotype, while this percentage was 80% in the case of EMT de Madrid. Thus the policy concerning image and metro brand is much more interiorized in the public than that of EMT.
With regard to the idea that comes to mind when hearing public transport in the city mentioned, this question was raised openly. The details of the respondents' comments comprising the most typical answers are shown in Table 4.
Table 4 The details of the respondents' comments comprising some of the most typical answers
The responses allow us to state that large groups of people very probably imagine themselves on the underground far more than on a bus, and the negative image of the service as a whole may be derived more from the underground service itself. The next ideas associated with the service are divided between 'Communication and Mobility' and those related to 'Economic Aspects'. In the latter case, those surveyed alluded specifically to the cost of the service, which they consider to be expensive, the strikes that take place, too much publicity and cuts to the service.
When deciding which means of transport is considered better, the underground was chosen more frequently (86%). However, they also stated that both means are complementary and are not exclusive, perceiving the idea of coexistence rather than substituting one means for the other. The underground is fundamentally preferred for its speed, but if it takes second place, because the traveller has more time, is making the trip for leisure purposes or is older, then the bus begins to become more popular because the journey is more pleasant and it is possible to enjoy the sights of the city and daylight. One of EMT de Madrid's most successful publicity campaigns included the slogan "get on and see Madrid", which confirms this idea.
The Metro de Madrid brand is associated with modernity, innovation and efficient transport when compared to the EMT de Madrid brand.
Lastly, public transport is positively evaluated for its contribution to the sustainability of cities, both by users and the public in general. Moreover, users with a greater acquisitive power show more appreciation for the contribution that the service makes to sustainability, since they associate it with a healthy option for the city rather than a necessity in their transport criteria.
Block 3: 'behaviour
Reasons for using public transport
The response most frequently selected was 'to save money' followed by 'to save time', after which came 'no private transport'. A small percentage use it because of their 'social conscience', while out of the 'other responses', that which most stood out was the difficulty involved in getting to the city centre by car and the availability of parking metres. The creation of deterrent car parks in the main accesses of the big cities, to avoid the routes to the urban centre by car, is an important measure increasingly used by the transport public authorities in major Spanish cities. With these car parks, public car-transport intermodality is favoured, avoiding the entry of private vehicles into urban centres [19, 49]. The reason for using public transport to save money was principally given by students and the unemployed. Those who are employed fundamentally use the service to save time. The principal reason given by retired people is the lack of a car or driving licence. Both the bus and the underground play a key role in work-related journeys. The use of public transport for leisure purposes is more widespread among younger users.
Perception of the objectives of the service
The order of priority obtained was exactly the same for both means of transport, and even the percentages were very similar, thus making the list more valid and reflecting a clear tendency. The figure below (Fig. 2) shows the response rates recorded on the Likert scale as "very important":
Order of priority for using public transport. (Source: Authors)
Perception of the economic value of the service
Users generally consider the service expensive. Nonetheless, the principal reason why users with less acquisitive power use this service is precisely to save money, although this might appear to be a contradiction.
The idea of obtaining a complementary income to finance the service through publicity is accepted by those surveyed. The greatest opposition to the idea of obtaining an additional income came from younger people, perhaps because they are a more demanding collective. Pensioners or the unemployed whose subsidies may be threatened if the administrations increase the financing of transport are believe these companies should have greater financial autonomy.
Lastly, people associate the privatisation of the service with a decline in the welfare state and in the quality of the service; however, if it were possible to guarantee that this would not occur and that there efficient management would effectively reduce costs, then their opinion changes. Users consider that the public administration should be responsible for both the provision of the service and its financing.
Results summary and contrast of hypothesis
This was done by proposing a series of initial objectives that then served as a basis on which to contrast the hypothesis. The contrast has been carried out by selecting the survey questions that allow us to compare both means of transport, analysing questions from each block in the Servicescape (see Table 1). Contrast 1 is the arithmetic mean of the answers obtained in each question, and contrast 2 measures the dispersion in the answers to reinforce the affirmation made with contrast 1. As indicated in the table below (Table 5), in the blocks "first impressions" and "feelings", the hypothesis is accepted in all cases. However, in the third block, "behaviour", the data show the bus is used with a slightly higher frequency, although it is considered more expensive than the underground service, so the results do not allow us to establish a conclusion about the preference between bus and underground. Based on the evidence gathered, the hypothesis can be accepted, and so the underground is more highly valued socially than the bus.
Table 5 Summary of the contrast of the hypothesis in this research
This research has certain limitations that should be mentioned upon interpreting the results obtained. The first is that the study focuses solely on data obtained from Madrid. We therefore believe that it would be useful to carry out a study on the social perceptions of bus and underground users in the other Spanish cities in which these two means of transport coexist in order to obtain a more global view of the service in Spain. It would also be interesting to apply it to similar cities in neighbouring countries in order to verify whether the valuation and perception of the service are very different to those of Spanish users.
The second limitation is that the number of variables analysed in the questionnaire is considerable, thus making the questionnaire long, which may have led to exhaustion in those being surveyed. All the participants, therefore, were warned from the beginning that the average time needed to complete the questionnaire would be 8 min.
This research work has enabled us to apply the advantages of the Servicescape Model and the methodology of the corresponding survey. The conclusions are divided into academic ones and industry ones to provide insights for academics, policy makers and transport practitioners.
Academic conclusions
This work has allowed us to understand local means of transport in the city of Madrid, not only from the perspective of the management of the industry but also from a user's perspective. Its originality and value for the scientific community lies in the lack of prior studies that have applied the Servicescape Model to public transport. The model has generally been applied to the study of tourist attractions, such as hotels, restaurants, museums and events. Our contribution, therefore, opens up new academic lines of inquiry and is a modest descriptive study of the general perception of the underground and local bus services, which could help the administrations in their current and future management.
Conclusions for industry
According to the survey carried out, the public considers it an expensive service. However, the income obtained is not sufficient to cover all expenses. It would appear that there is a gap between these companies' financial situation and the public's perception of the cost of the service, and this has led us to the conclusion that it is vital to provide information about the expenses that these companies incur, for example basic data such as income obtained from takings, income from subventions and the structure of expenses. This would thus make users aware of the value of the service received through the evolution of its prices, maintenance, security and employees' expenses, etc. Law 19/2013 on Transparency, Access to Public Information and Good Governance may enable the management efforts made by these companies to reach the public. It definitely appears necessary for the public to attain a higher level of confidence in this management.
According to the responses to the questionnaire, the general perception is that the collective profile of the public transport user is one of medium-low acquisitive power. We understand that this could be one of the reasons why certain members of the public of a particular 'status' refuse to use the service, as they do not wish to be associated with this more 'working class' profile. We believe that companies should design policies that will improve the image of the public transport service, which is linked to its user profile.
After analysing the perceptions of EMT de Madrid, we believe it will be necessary to carry out an analysis of the use of its website and aspects of it that could be improved, in addition to attempting to improve the perception of its image by creating a more modern and innovative profile, thus reflecting what it truly is. According to service users in Madrid, the main advantage of the underground, when compared to the bus service, is that it is faster and there are no traffic jams.
One measure that could, therefore, contribute to encouraging the use of buses in Madrid might be an increase in the number of HOV (High Occupancy Vehicle) lanes or BUS lanes, or the use of systems that would prevent these lanes from becoming blocked.
To sum up, the results obtained from the Servicescape analysis for both underground and bus reveal an acceptable management in terms of air quality, temperature, space, noise, cleanliness, smell and seats (average scores range from 2.4 to 3.7 out of 5). The underground has slightly better results than bus service. Improvements are suggested to draw the customer's attention: air conditioning equipment in strategic places, involvement of users in implementing adequate information and various participation schemes to improve the transport service, infrastructure maintenance such as seats, pavement and stations, green spaces for nature and biodiversity, installation of chill-out music speakers. Hence, better practices in the environment can improve the quality of public transportation and reduce traffic congestion. Small things can make a big impact.
The datasets generated and analysed during the current study are available in the doctoral thesis "Análisis del sector del transporte urbano colectivo en España: autobús versus metro, un enfoque multidisciplinar".
Abenoza, R. F., Cats, O., & Susilo, Y. O. (2017). Travel satisfaction with public transport: Determinants, user classes, regional disparities and their evolution. Transportation Research Part A: Policy and Practice, 95, 64–84. https://doi.org/10.1016/j.tra.2016.11.011.
Abreu e Silva, J., & Bazrafshan, H. (2013). User satisfaction of intermodal transfer facilities in Lisbon, Portugal: Analysis with structural equations modeling. Transportation Research Record: Journal of the Transportation Research Board, 2350(1), 102–110. https://doi.org/10.3141/2350-12.
Aoki, S., & Miyakawa, M. (2014). Statistical testing procedure for the interaction effects of several controllable factors in two-valued input-output system. Journal of Statistical Theory and Practice, 8(3), 546–557.
Axhausen, K. W., Haupt, T., Fell, B., & Heidl, U. (2001). Searching for the rail bonus: Results from a panel SP/RP study. European Journal of Transport and Infrastructure Research, 1(4), 353–369.
Baker, J., Grewal, D., & Parasuraman, A. (1994). The influence of store environment on quality inferences and store image. Journal of the Academy of Marketing Science, 22(4), 328–339. https://doi.org/10.1177/0092070394224002.
Baker, J., Levy, M., & Grewal, D. (1992). An experimental approach to making retail store environmental decisions. Journal of Retailing, 68(4), 445.
Banister, D. (2011). The trilogy of distance, speed and time. Journal of Transport Geography, 19(4), 950–959. https://doi.org/10.1016/j.jtrangeo.2010.12.004.
Barnum, D. T., McNeil, S., & Hart, J. (2007). Comparing the efficiency of public transportation subunits using data envelopment analysis. Journal of Public Transportation, 10(2), 1–16. https://doi.org/10.5038/2375-0901.10.2.1.
Bitner, M. J. (1986). Consumer responses to the physical environment in service settings. Creativity in services marketing, 43(3), 89–93.
Bitner, M. J. (1992). Servicescapes: The impact of physical surroundings on customers and employees. Journal of Marketing, 56(2), 57–71. https://doi.org/10.2307/1252042.
Bitner, M. J., Booms, B. H., & Mohr, L. A. (1994). Critical service encounters: The employee's viewpoint. Journal of Marketing, 58(4), 95–106. https://doi.org/10.1177/002224299405800408.
Bitner, M. J., Booms, B. H., & Tetreault, M. S. (1990). The service encounter: Diagnosing favorable and unfavorable incidents. Journal of Marketing, 54(1), 71–84. https://doi.org/10.2307/1252174.
Boniface, S., Scantlebury, R., Watkins, S. J., & Mindell, J. S. (2015). Health implications of transport: Evidence of effects of transport on social interactions. Journal of Transport and Health, 2(3), 441–446. https://doi.org/10.1016/j.jth.2015.05.005.
Bowers, M. R., & Martin, C. L. (2007). Trading places redux: Employees as customers, customers as employees. Journal of Services Marketing, 21(2), 88–98. https://doi.org/10.1108/08876040710737859.
Brunner, A., Peters, M., & Strobl, A. (2012). It is all about the emotional state: Managing tourists' experiences. International Journal of Hospitality Management, 31, 23–30. https://doi.org/10.1016/j.ijhm.2011.03.004.
Bunschoten, T. (2012). To tram or not to tram: Exploring the existence of the tram bonus Graduation thesis, Delft University of Technology.
Carrasco, D., Toledano, D. S., & Toledano, J. S. (2014). Observatorio de Costes y Financiación del Transporte Urbano Colectivo: Un programa de investigación. Investigaciones Europeas de Dirección y Economía de la Empresa, 20, 33–40.
Cazzaro, M., & Colombi, R. (2014). Marginal nested interactions for contingency tables. Communications in Statistics - Theory and Methods, 43(13), 2799–2814. https://doi.org/10.1080/03610926.2012.685550.
Chen, X., Liu, Z., & Currie, G. (2016). Optimizing location and capacity of rail-based park-and-ride sites to increase public transport usage. Transportation Planning and Technology, 39(5), 507–526. https://doi.org/10.1080/03081060.2016.1174366.
Choo, H., & Petrick, J. (2014). Social interactions and intentions to revisit for agritourism service encounters. Tourism Management, 40, 372–381. https://doi.org/10.1016/j.tourman.2013.07.011.
Corner, A., & Randall, A. (2011). Selling climate change? The limitations of social marketing as a strategy for climate change public engagement. Global Environmental Change, 21(3), 1005–1014. https://doi.org/10.1016/j.gloenvcha.2011.05.002.
Creswell, J. W. (2012). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (4th ed.). ISBN: 978-0131367395. Pearson Education Inc.
Currie, G. (2010). Quantifying spatial gaps in public transport supply based on social needs. Journal of Transport Geography, 18(1), 31–41. https://doi.org/10.1016/j.jtrangeo.2008.12.002.
Delgado, M. L., Rivero, J. A., & Sánchez, M. A. (2010). Movilidad y Financiación del Transporte en Tiempo de crisis. Análisis Local, 91, 31–40.
Dell'Olio, L., Ibeas, A., & Cecín, P. (2010). Modelling user perception of bus transit quality. Transport Policy, 17(6), 388–397. https://doi.org/10.1016/j.tranpol.2010.04.006.
Dell'Olio, L., Ibeas, A., & Cecin, P. (2011). The quality of service desired by public transport users. Transport Policy, 18(1), 217–227. https://doi.org/10.1016/j.tranpol.2010.08.005.
Diab, E. I., & El-Geneidy, A. M. (2012). Understanding the impacts of a combination of service improvement strategies on bus running time and passenger's perception. Transportation Research Part A, 46(3), 614–625. https://doi.org/10.1016/j.tra.2011.11.013.
Eboli, L., & Mazzulla, G. (2011). A methodology for evaluating transit service quality based on subjective and objective measures from the passenger's point of view. Transport Policy, 18(1), 172–181. https://doi.org/10.1016/j.tranpol.2010.07.007.
García, G. (2005). La Investigación Comercial. Madrid: ESIC Chapters 4-6.
Gardner, M. P. (1985). Mood states and consumer behavior: A critical review. Journal of Consumer Research, 12(3), 281–300.
Gatersleben, B., Murtagh, N., & White, E. (2013). Hoody, goody or buddy? How travel mode affects social perceptions in urban neighbourhoods. Transportation Research Part F: Traffic Psychology and Behaviour, 21, 219–230. https://doi.org/10.1016/j.trf.2013.09.005.
Gray, D. E. (2011). Doing research in the real world, second edition. Hampshire: Ashford Color Press.
Hair, J., Black, W., Babin, B., & Anderson, R. (2014). Multivariate data analysis (7th ed.). ISBN:978-0138132637. Pearson Education International.
Hair, J. J., Bush, R., & Ortinau, D. (2006). Marketing Research (3rd ed.). Irwin: McGraw-Hill.
Heilig, G. K. (2012). World urbanization prospects: The 2011 revision (p. 14). New York: United Nations, Department of Economic and Social Affairs (DESA), population division, population estimates and projections section.
Hernández, S., Monzón, A., & de Oña, R. (2016). Urban transport interchanges: A methodology for evaluating perceived quality. Transportation Research Part A: Policy and Practice, 84, 31–43. https://doi.org/10.1016/j.tra.2015.08.008.
Holahan, C. (1982). Environmental psychology. New York: Random House, Inc.
Iseki, H., & Taylor, B. D. (2010). Style versus service? An analysis of user perceptions of transit stops and stations. Journal of Public Transportation, 13(3), 23–48.
Jarboui, S., Forget, P., & Boujelbene, Y. (2012). Public road transport efficiency: A literature review via the classification scheme. Public Transport, 4(2), 101–128. https://doi.org/10.1007/s12469-012-0055-3.
Karlaftis, M. G. (2004). A DEA approach for evaluating the efficiency and effectiveness of urban transit systems. European Journal of Operational Research, 152(2), 354–364. https://doi.org/10.1016/S0377-2217(03)00029-8.
Karlaftis, M. G., & Tsamboulas, D. (2012). Efficiency measurement in public transport: Are findings specification sensitive? Transportation Research Part A, 46, 392–402. https://doi.org/10.1016/j.tra.2011.10.005.
Kotler, P. (1973). Atmosphere as a marketing tool. Journal of Retailing, 49(4), 48–64.
Lai, W. T., & Chen, C. F. (2011). Behavioral intentions of public transit passengers -the roles of service quality, perceived value, satisfaction and involvement. Transport Policy, 18(2), 318–325. https://doi.org/10.1016/j.tranpol.2010.09.003.
Li, Y. (2016). Infrastructure to Facilitate Usage of Electric Vehicles and its Impact. Transportation Research Procedia, 14, 2537–2543. https://doi.org/10.1016/j.trpro.2016.05.337.
Lin, Y., & Worthley, R. (2012). Servicescape moderation on personality traits, emotions, satisfaction, and behaviors. International Journal of Hospitality Management, 31, 31–42. https://doi.org/10.1016/j.ijhm.2011.05.009.
Lipovetsky, S. (2014). Analytical closed-form solution for binary logit regression by categorical predictors. Journal of Applied Statistics, 42(1), 37–49. https://doi.org/10.1080/02664763.2014.932760.
Lucas, K. (2012). Transport and social exclusion: Where are we now? Transport Policy, 20, 105–113. https://doi.org/10.1016/j.tranpol.2012.01.013.
Martín-Pliego, F. J., & Ruiz-Maya, L. (2005). Fundamentos de inferencia estadística. Madrid: Paraninfo.
Monzón, A., Cascajo, R., Pieren, G., Romero, C., & Delso, J. (2017). Informe del Observatorio de la Movilidad Metropolitana 2015. Ministerio de Agricultura y Pesca, Alimentación y Medio Ambiente.
Morton, C., Caulfield, B., & Anable, J. (2016). Customer perceptions of quality of service in public transport: Evidence for bus transit in Scotland. Case Studies on Transport Policy, 4(3), 199–207. https://doi.org/10.1016/j.cstp.2016.03.002.
Neirotti, P., De Marco, A., Cagliano, A. C., Mangano, G., & Scorrano, F. (2014). Current trends in Smart City initiatives: Some stylised facts. Cities, 38, 25–36. https://doi.org/10.1016/j.cities.2013.12.010.
Orth, U., Heinrich, F., & Malkewitz, K. (2012). Servicescape interior design and consumers' personality impressions. Journal of Services Marketing, 26(3), 194–203. https://doi.org/10.1108/08876041211223997.
Pantouvakis, A. (2010). The relative importance of service features in explaining customer satisfaction: A comparison of measurement models. Managing Service Quality: An International Journal, 20(4), 366–387. https://doi.org/10.1108/09604521011057496.
Petros, S., & Enquist, B. (2016). Sustainable public transit service value network for building living cities in emerging economies: Multiple case studies from public transit services. Procedia - Social and Behavioral Sciences, 224(15), 263–268. https://doi.org/10.1016/j.sbspro.2016.05.458.
Redman, L., Friman, M., Gärling, T., & Hartig, T. (2013). Quality attributes of public transport that attract car users: A research review. Transport Policy, 25, 119–127. https://doi.org/10.1016/j.tranpol.2012.11.005.
Reimer, A., & Kuehn, R. (2005). The impact of servicescape on quality perception. European Journal of Marketing, 39(7/8), 785–808. https://doi.org/10.1108/03090560510601761.
Roukouni, A., Macharis, C., Basbas, S., Stephanis, B., & Mintsis, G. (2018). Financing urban transportation infrastructure in a multi-actors environment: The role of value capture. European Transport Research Review, 10(1), 1. https://doi.org/10.1007/s12544-017-0281-5.
Sampaio, B. R., Neto, O. L., & Sampaio, Y. (2008). Efficiency analysis of public transport systems: Lessons for institutional planning. Transportation Research, Part A: Policy Practice, 42(3), 445–454. https://doi.org/10.1016/j.tra.2008.01.006.
Scherer, M. (2011). The image of bus and tram: First results. Ascona: 11th Swiss Transport Research conference.
Shostack, G. L. (1977). Breaking free from product marketing. Journal of Marketing, 41(2), 73–80. https://doi.org/10.2307/1250637.
Šimeček, M., Gabrhel, V., Tögel, M., & Lazor, M. (2018). Travel behaviour of seniors in Eastern Europe: A comparative study of Brno and Bratislava. European Transport Research Review, 10(1), 1. https://doi.org/10.1007/s12544-018-0286-8.
Spies, K., Hesse, F., & Loesch, K. (1997). Store atmosphere, mood and purchasing behavior. International Journal of Reasearch in Marketing, 14, 1–17. https://doi.org/10.1016/S0167-8116(96)00015-8.
Swartz, T., & Iacobucci, D. (1999). Handbook of services marketing and management ISBN: 978-0761916123. SAGE Publications, Inc.
Thomson, J. M. (1974). Teoría económica del transporteEditorial. Madrid: Alianza.
Tirachini, A., Hensher, D. A., & Rose, J. M. (2013). Crowding in public transport systems: Effects on users, operation and implications for the estimation of demand. Transportation Research Part A: Policy and Practice, 53, 36–52. https://doi.org/10.1016/j.tra.2013.06.005.
Upah, G. D., & Fulton, J. N. (1985). Situation creation in services marketing. The service encounter, 5(12), 255–264.
Utsunomiya, K. (2016). Social capital and local public transportation in Japan. Research in Transportation Economics, 59, 434–440. https://doi.org/10.1016/j.retrec.2016.02.001.
This research work has taken place within the research project OPERET art. 83, code V634, financed by "Galileo Ingenieria y Servicios S.A."
This manuscript is the result of the research work that were part of the above doctoral thesis, and that did not receive any funds for the preparation of it.
Business Economic Department, King Juan Carlos University, Paseo de los Artilleros, s/n, 28032, Madrid, Spain
María Luisa Delgado Jalón
, Alba Gómez Ortega
& Javier De Esteban Curiel
Search for María Luisa Delgado Jalón in:
Search for Alba Gómez Ortega in:
Search for Javier De Esteban Curiel in:
MD carried out the literature review, contributing to the research giving the paper's academic structure and developing the conclusions obtained. AG carried out the questionnaire, collected the data and carried out the statistical analysis, incorporating the results from this research as a part of her doctoral thesis. JDE contributed to the statistical data treatment and the implementation of the Servicescape Model methodology into the questionnaire. All authors read and approved the final manuscript.
Correspondence to Alba Gómez Ortega.
Delgado Jalón, M.L., Gómez Ortega, A. & De Esteban Curiel, J. The social perception of urban transport in the city of Madrid: the application of the Servicescape Model to the bus and underground services. Eur. Transp. Res. Rev. 11, 37 (2019) doi:10.1186/s12544-019-0373-5
Users' perception
Servicescape model
|
CommonCrawl
|
Counting saddle connections in a homology class modulo $ \boldsymbol q $ (with an appendix by Rodolfo Gutiérrez-Romo)
JMD Home
This Volume
Mather theory and symplectic rigidity
2019, 15: 209-236. doi: 10.3934/jmd.2019019
The local-global principle for integral Soddy sphere packings
Alex Kontorovich
Department of Mathematics, Rutgers University, 110 Frelinghuysen Rd., Piscataway, NJ 08854, USA
Received November 08, 2017 Revised March 23, 2019 Published August 2019
Fund Project: The author is partially supported by an NSF CAREER grant DMS-1254788 and DMS-1455705, an NSF FRG grant DMS-1463940, an Alfred P. Sloan Research Fellowship, and a BSF grant.
Fix an integral Soddy sphere packing $ \mathscr{P} $. Let $ \mathscr{B} $ be the set of all bends in $ \mathscr{P} $. A number $ n $ is called represented if $ n\in \mathscr{B} $, that is, if there is a sphere in $ \mathscr{P} $ with bend equal to $ n $. A number $ n $ is called admissible if it is everywhere locally represented, meaning that $ n\in \mathscr{B}( \operatorname{mod} q) $ for all $ q $. It is shown that every sufficiently large admissible number is represented.
Keywords: Sphere packings, thin groups, hyperbolic geometry, arithmetic groups, quadratic forms, local-global principle.
Mathematics Subject Classification: Primary: 11D85; Secondary: 11F06, 20H05.
Citation: Alex Kontorovich. The local-global principle for integral Soddy sphere packings. Journal of Modern Dynamics, 2019, 15: 209-236. doi: 10.3934/jmd.2019019
A. Baragar, Higher dimensional Apollonian packings, revisited, Geom. Dedicata, 195 (2018), 137-161. doi: 10.1007/s10711-017-0280-7. Google Scholar
M. Borkovec, W. de Paris and R. Peikert, The fractal dimension of the Apollonian sphere packing, Fractals, 2 (1994), 521-526. doi: 10.1142/S0218348X94000739. Google Scholar
J. Bourgain and E. Fuchs, A proof of the positive density conjecture for integer Apollonian circle packings, J. Amer. Math. Soc., 24 (2011), 945-967. doi: 10.1090/S0894-0347-2011-00707-8. Google Scholar
J. Bourgain and A. Kontorovich, On the local-global conjecture for integral Apollonian gaskets, Invent. Math., 196 (2014), 589-650. doi: 10.1007/s00222-013-0475-y. Google Scholar
D. W. Boyd, An algorithm for generating the sphere coordinates in a three-dimensional osculatory packing, Math. Comp., 27 (1973), 369-377. doi: 10.1090/S0025-5718-1973-0338937-6. Google Scholar
D. W. Boyd, The osculatory packing of a three dimensional sphere, Can. J. Math., 25 (1973), 303-322. doi: 10.4153/CJM-1973-030-5. Google Scholar
[7] J. W. S. Cassels, Rational Quadratic Forms, London Mathematical Society Monographs, 13, Academic Press, London-New York, 1978. Google Scholar
R. Descartes, Œuvres, volume 4, (eds. C. Adams and P. Tannery), Paris, 1901. Google Scholar
D. Dias, The local-global principle for integral generalized Apollonian sphere packings, preprint, arXiv: 1401.4789, (2014). Google Scholar
E. Fuchs and K. Sanden, Some experiments with integral Apollonian circle packings, Exp. Math., 20 (2011), 380-399. doi: 10.1080/10586458.2011.565255. Google Scholar
R. L. Graham, J. C. Lagarias, C. L. Mallows, A. R. Wilks and C. H. Yan, Apollonian circle packings: Number theory, J. Number Theory, 100 (2003), 1-45. doi: 10.1016/S0022-314X(03)00015-5. Google Scholar
R. L. Graham, J. C. Lagarias, C. L. Mallows, A. R. Wilks and C. H. Yan, Apollonian circle packings: Geometry and group theory. Ⅲ. Higher dimensions, Discrete Comput. Geom., 35 (2006), 37-72. doi: 10.1007/s00454-005-1197-8. Google Scholar
T. Gossett, The kiss precise, Nature, 139 (1937), 62. doi: 10.1038/139062a0. Google Scholar
F. Grunewald and J. Schwermer, Subgroups of Bianchi groups and arithmetic quotients of hyperbolic 3-space, Trans. Amer. Math. Soc., 335 (1993), 47-78. doi: 10.2307/2154257. Google Scholar
H. Iwaniec, Topics in Classical Automorphic Forms, Graduate Studies in Mathematics, 17, American Mathematical Society, Providence, RI, 1997. doi: 10.1090/gsm/017. Google Scholar
I. Kim, Counting, mixing and equidistribution of horospheres in geometrically finite rank one locally symmetric manifolds, J. Reine Angew. Math., 704 (2015), 85-133. doi: 10.1515/crelle-2013-0056. Google Scholar
H. D. Kloosterman, On the representation of numbers in the form ax2+by2+ cz2+dt2, Acta Math., 49 (1927), 407-464. doi: 10.1007/BF02564120. Google Scholar
A. Kontorovich and K. Nakamura, Geometry and arithmetic of crystallographic sphere packings, Proc. Natl. Acad. Sci. USA, 116 (2019), 436-441. doi: 10.1073/pnas.1721104116. Google Scholar
A. Kontorovich and H. Oh, Apollonian circle packings and closed horospheres on hyperbolic 3-manifolds, J. Amer. Math. Soc., 24 (2011), 603-648. doi: 10.1090/S0894-0347-2011-00691-7. Google Scholar
A. Kontorovich, From Apollonius to Zaremba: Local-global phenomena in thin orbits, Bull. Amer. Math. Soc. (N.S.), 50 (2013), 187-228. doi: 10.1090/S0273-0979-2013-01402-2. Google Scholar
A. Kontorovich, Applications of thin orbits, in Dynamics and Analytic Number Theory, London Math. Soc. Lecture Note Ser., 437, Cambridge Univ. Press, Cambridge, 2016, 289–317. Google Scholar
R. Lachlan, On systems of circles and spheres, Philos. Roy. Soc. London Ser. A, 177 (1886), 481-625. Google Scholar
J. Milnor, Hyperbolic geometry: The first 150 years, Bull. Amer. Math. Soc. (N.S.), 6 (1982), 9-24. doi: 10.1090/S0273-0979-1982-14958-8. Google Scholar
K. Nakamura, The local-global principle for integral bends in orthoplicial Apollonian sphere packings, preprint, arXiv: 1401.2980, (2014). Google Scholar
http://mathworld.wolfram.com/TangentSpheres.html. Google Scholar
P. Sarnak, Letter to J. Lagarias about integral Apollonian packings, 2007. Available from: http://web.math.princeton.edu/sarnak/AppolonianPackings.pdf. Google Scholar
F. Soddy, The kiss precise, Nature, 137 (1936), 1021. doi: 10.1038/1371021a0. Google Scholar
F. Soddy, The bowl of integers and the hexlet, Nature, 139 (1937), 77-79. doi: 10.1038/139077a0. Google Scholar
X. Zhang, On the local-global principle for integral Apollonian-3 Circle packings, preprint, arXiv: 1312.4650, (2013). Google Scholar
Figure 3. A reproduction from [28]
Nicola Pace, Angelo Sonnino. On the existence of PD-sets: Algorithms arising from automorphism groups of codes. Advances in Mathematics of Communications, 2021, 15 (2) : 267-277. doi: 10.3934/amc.2020065
Antoine Benoit. Weak well-posedness of hyperbolic boundary value problems in a strip: when instabilities do not reflect the geometry. Communications on Pure & Applied Analysis, 2020, 19 (12) : 5475-5486. doi: 10.3934/cpaa.2020248
Liam Burrows, Weihong Guo, Ke Chen, Francesco Torella. Reproducible kernel Hilbert space based global and local image segmentation. Inverse Problems & Imaging, 2021, 15 (1) : 1-25. doi: 10.3934/ipi.2020048
Claudio Bonanno, Marco Lenci. Pomeau-Manneville maps are global-local mixing. Discrete & Continuous Dynamical Systems - A, 2021, 41 (3) : 1051-1069. doi: 10.3934/dcds.2020309
Xing Wu, Keqin Su. Global existence and optimal decay rate of solutions to hyperbolic chemotaxis system in Besov spaces. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2021002
George W. Patrick. The geometry of convergence in numerical analysis. Journal of Computational Dynamics, 2021, 8 (1) : 33-58. doi: 10.3934/jcd.2021003
Giuseppe Capobianco, Tom Winandy, Simon R. Eugster. The principle of virtual work and Hamilton's principle on Galilean manifolds. Journal of Geometric Mechanics, 2021 doi: 10.3934/jgm.2021002
Adrian Constantin, Darren G. Crowdy, Vikas S. Krishnamurthy, Miles H. Wheeler. Stuart-type polar vortices on a rotating sphere. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 201-215. doi: 10.3934/dcds.2020263
Fabio Camilli, Giulia Cavagnari, Raul De Maio, Benedetto Piccoli. Superposition principle and schemes for measure differential equations. Kinetic & Related Models, 2021, 14 (1) : 89-113. doi: 10.3934/krm.2020050
Peng Luo. Comparison theorem for diagonally quadratic BSDEs. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020374
Xinlin Cao, Huaian Diao, Jinhong Li. Some recent progress on inverse scattering problems within general polyhedral geometry. Electronic Research Archive, 2021, 29 (1) : 1753-1782. doi: 10.3934/era.2020090
Qiao Liu. Local rigidity of certain solvable group actions on tori. Discrete & Continuous Dynamical Systems - A, 2021, 41 (2) : 553-567. doi: 10.3934/dcds.2020269
Huu-Quang Nguyen, Ya-Chi Chu, Ruey-Lin Sheu. On the convexity for the range set of two quadratic functions. Journal of Industrial & Management Optimization, 2020 doi: 10.3934/jimo.2020169
Wei-Chieh Chen, Bogdan Kazmierczak. Traveling waves in quadratic autocatalytic systems with complexing agent. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020364
Giuseppina Guatteri, Federica Masiero. Stochastic maximum principle for problems with delay with dependence on the past through general measures. Mathematical Control & Related Fields, 2020 doi: 10.3934/mcrf.2020048
Lin Shi, Xuemin Wang, Dingshi Li. Limiting behavior of non-autonomous stochastic reaction-diffusion equations with colored noise on unbounded thin domains. Communications on Pure & Applied Analysis, 2020, 19 (12) : 5367-5386. doi: 10.3934/cpaa.2020242
Thomas Frenzel, Matthias Liero. Effective diffusion in thin structures via generalized gradient systems and EDP-convergence. Discrete & Continuous Dynamical Systems - S, 2021, 14 (1) : 395-425. doi: 10.3934/dcdss.2020345
Constantine M. Dafermos. A variational approach to the Riemann problem for hyperbolic conservation laws. Discrete & Continuous Dynamical Systems - A, 2009, 23 (1&2) : 185-195. doi: 10.3934/dcds.2009.23.185
|
CommonCrawl
|
lim The transform method finds its application in those problems which can't be solved directly. Whilst the Fourier Series and the Fourier Transform are well suited for analysing the frequency content of a signal, be it periodic or aperiodic, ω 1 Namely that the Laplace transform for s equals j omega reduces to the Fourier transform. We call it the unilateral Laplace transform to distinguish it from the bilateral Laplace transform which includes signals for time less than zero and integrates from € −∞ to € +∞. This transformation is … Poles and zeros in the Laplace transform 4. the Laplace transform is the tool of choice for analysing and developing circuits such as filters. (b) Determine the values of the finite numbers A and t1 such that the Laplace transform G(s) of g(t) = Ae − 5tu(− t − t0). Here, of course, we have the relationship that we just developed. I have solved the problem 9.14 in Oppenheim's Signals and Systems textbook, but my solution and the one in Slader is different. ( ) 2.1 Introduction 13. = 2. s Analysis of CT Signals Fourier series analysis, Spectrum of CT signals, Fourier transform and Laplace transform in signal analysis. j {\displaystyle >f(t)={\mathcal {L}}^{-1}\{F(s)\}={\frac {1}{2\pi i}}\lim _{T\to \infty }\int _{\gamma -iT}^{\gamma +iT}e^{st}F(s)\,ds,}. = Properties of the Laplace transform 7. The necessary condition for convergence of the Laplace transform is the absolute integrability of f (t)e -σt. Unilateral Laplace Transform . The Laplace transform of a continuous - time signal x(t) is $$X\left( s \right) = {{5 - s} \over {{s^2} - s - 2}}$$. Well-written and well-organized, it contains many examples and problems for reinforcement of the concepts presented. $ \int_{-\infty}^{\infty} |\,f(t)|\, dt \lt \infty $. d In particular, the fact that the Laplace transform can be interpreted as the Fourier transform of a modified version of x of t. Let me show you what I mean. From Wikibooks, open books for an open world < Signals and SystemsSignals and Systems. i.e. This is the reason that definition (2) of the transform is called the one-sided Laplace transform. $ y(t) = x(t) \times h(t) = \int_{-\infty}^{\infty}\, h (\tau)\, x (t-\tau)d\tau $, $= \int_{-\infty}^{\infty}\, h (\tau)\, Ge^{s(t-\tau)}d\tau $, $= Ge^{st}. the input of the op-amp follower circuit, gives the following relations: Rewriting the current node relations gives: From Wikibooks, open books for an open world, https://en.wikibooks.org/w/index.php?title=Signals_and_Systems/LaPlace_Transform&oldid=3770384. The image on the side shows the circuit for an all-pole second order function. A special case of the Laplace transform (s=jw) converts the signal into the frequency domain. 2 SIGNALS AND SYSTEMS..... 1 3. Namely that s equals j omega. 1 T y p e so fS y s t e m s ... the Laplace Transform, and have realized that both unilateral and bilateral L Ts are useful. T Laplace transform is normally used for system Analysis,where as Fourier transform is used for Signal Analysis. Laplace transforms are the same but ROC in the Slader solution and mine is different. Writing π s Initial Value Theorem Statement: if x(t) and its 1st derivative is Laplace transformable, then the initial value of x(t) is given by } ( = t →X(σ+jω)=∫∞−∞x(t)e−(σ+jω)tdt =∫∞−∞[x(t)e−σt]e−jωtdt ∴X(S)=F.T[x(t)e−σt]......(2) X(S)=X(ω)fors=jω \int_{-\infty}^{\infty}\, h (\tau)\, e^{(-s \tau)}d\tau $, Where H(S) = Laplace transform of $h(\tau) = \int_{-\infty}^{\infty} h (\tau) e^{-s\tau} d\tau $, Similarly, Laplace transform of $x(t) = X(S) = \int_{-\infty}^{\infty} x(t) e^{-st} dt\,...\,...(1)$, Laplace transform of $x(t) = X(S) =\int_{-\infty}^{\infty} x(t) e^{-st} dt$, $→ X(\sigma+j\omega) =\int_{-\infty}^{\infty}\,x (t) e^{-(\sigma+j\omega)t} dt$, $ = \int_{-\infty}^{\infty} [ x (t) e^{-\sigma t}] e^{-j\omega t} dt $, $\therefore X(S) = F.T [x (t) e^{-\sigma t}]\,...\,...(2)$, $X(S) = X(\omega) \quad\quad for\,\, s= j\omega$, You know that $X(S) = F.T [x (t) e^{-\sigma t}]$, $\to x (t) e^{-\sigma t} = F.T^{-1} [X(S)] = F.T^{-1} [X(\sigma+j\omega)]$, $= {1\over 2}\pi \int_{-\infty}^{\infty} X(\sigma+j\omega) e^{j\omega t} d\omega$, $ x (t) = e^{\sigma t} {1 \over 2\pi} \int_{-\infty}^{\infty} X(\sigma+j\omega) e^{j\omega t} d\omega $, $= {1 \over 2\pi} \int_{-\infty}^{\infty} X(\sigma+j\omega) e^{(\sigma+j\omega)t} d\omega \,...\,...(3)$, $ \therefore x (t) = {1 \over 2\pi j} \int_{-\infty}^{\infty} X(s) e^{st} ds\,...\,...(4) $. Although the history of the Z-transform is originally connected with probability theory, for discrete time signals and systems it can be connected with the Laplace transform. In summary, the Laplace transform gives a way to represent a continuous-time domain signal in the s-domain. If we take a time-domain view of signals and systems, we have the top left diagram. The properties of the Laplace transform show that: This is summarized in the following table: With this, a set of differential equations is transformed into a set of linear equations which can be solved with the usual techniques of linear algebra. Laplace transform. Find PowerPoint Presentations and Slides using the power of XPowerPoint.com, find free presentations research about Signals And Systems Laplace Transform PPT T . ∞ , Signal & System: Introduction to Laplace Transform Topics discussed: 1. v f The inverse Laplace transform 8. The Laplace Transform can be considered as an extension of the Fourier Transform to the complex plane. This page was last edited on 16 November 2020, at 15:18. ) Lumped elements circuits typically show this kind of integral or differential relations between current and voltage: This is why the analysis of a lumped elements circuit is usually done with the help of the Laplace transform. Luis F. Chaparro, in Signals and Systems using MATLAB, 2011. 1. Laplace Transform - MCQs with answers 1. View and Download PowerPoint Presentations on Signals And Systems Laplace Transform PPT. x(t) at t=0+ and t=∞. The function f(t) has finite number of maxima and minima. Problem is given above. KVL says the sum of the voltage rises and drops is equal to 0. {\displaystyle s=j\omega } γ Before we consider Laplace transform theory, let us put everything in the context of signals being applied to systems. s Statement: if x(t) and its 1st derivative is Laplace transformable, then the initial value of x(t) is given by, $$ x(0^+) = \lim_{s \to \infty} SX(S) $$, Statement: if x(t) and its 1st derivative is Laplace transformable, then the final value of x(t) is given by, $$ x(\infty) = \lim_{s \to \infty} SX(S) $$. Laplace transforms are frequently opted for signal processing. The Laplace transform actually works directly for these signals if they are zero before a start time, even if they are not square integrable, for stable systems. v x(t) at t=0+ and t=∞. Partial-fraction expansion in Laplace transform 9. Here's a short table of LT theorems and pairs. {\displaystyle v_{2}} I have also attached my solution below. T 2 When there are small frequencies in the signal in the frequency domain then one can expect the signal to be smooth in the time domain. 3. We also have another important relationship. Consider the signal x(t) = e5tu(t − 1).and denote its Laplace transform by X(s). A & B b. The Laplace Transform of a function f(t), defined for all real numbers t ≥ 0, is the function F(s), defined by: The Bilateral Laplace Transform is defined as follows: Comparing this definition to the one of the Fourier Transform, one sees that the latter is a special case of the Laplace Transform for The lecture discusses the Laplace transform's definition, properties, applications, and inverse transform. ( The function is of exponential order C. The function is piecewise discrete D. The function is of differential order a. The Fourier Transform can be considered as an extension of the Fourier Series for aperiodic signals. And Slader solution is here. By this property, the Laplace transform of the integral of x(t) is equal to X(s) divided by s. Differentiation in the time domain; If $x(t)\leftrightarrow X(s)$ Then $\overset{. F − The Laplace Transform of a function f(t), defined for all real numbers t ≥ 0, is the function F(s), defined by: − Laplace transform as the general case of Fourier transform. The one-sided LT is defined as: The inverse LT is typically found using partial fraction expansion along with LT theorems and pairs. Kirchhoff's current law (KCL) says the sum of the incoming and outgoing currents is equal to 0. Transformation in mathematics deals with the conversion of one function to another function that may not be in the same domain. (9.3), evaluate X(s) and specify its region of convergence. The Laplace transform is a technique for analyzing these special systems when the signals are continuous. By (2), we see that one-sided transform depends only on the values of the signal x (t) for t≥0. 1 Consider an LTI system exited by a complex exponential signal of the form x(t) = Gest. Creative Commons Attribution-ShareAlike License. s i.e. Complex Fourier transform is also called as Bilateral Laplace Transform. It must be absolutely integrable in the given interval of time. s Laplace transform of x(t)=X(S)=∫∞−∞x(t)e−stdt Substitute s= σ + jω in above equation. ) It's also the best approach for solving linear constant coefficient differential equations with nonzero initial conditions. i i LTI-CT Systems Differential equation, Block diagram representation, Impulse response, Convolution integral, Frequency response, Fourier methods and Laplace transforms in analysis, State equations and Matrix. The main reasons that engineers use the Laplace transform and the Z-transforms is that they allow us to compute the responses of linear time invariant systems easily. The unilateral Laplace transform is the most common form and is usually simply called the Laplace transform, which is … The Inverse Laplace Transform allows to find the original time function on which a Laplace Transform has been made. Properties of the ROC of the Laplace transform 5. This transform is named after the mathematician and renowned astronomer Pierre Simon Laplace who lived in France.He used a similar transform on his additions to the probability theory. F Where s = any complex number = $\sigma + j\omega$. GATE EE's Electric Circuits, Electromagnetic Fields, Signals and Systems, Electrical Machines, Engineering Mathematics, General Aptitude, Power System Analysis, Electrical and Electronics Measurement, Analog Electronics, Control Systems, Power Electronics, Digital Electronics Previous Years Questions well organized subject wise, chapter wise and year wise with full solutions, provider … There must be finite number of discontinuities in the signal f(t),in the given interval of time. This set of Signals & Systems Multiple Choice Questions & Answers (MCQs) focuses on "The Laplace Transform". A Laplace Transform exists when _____ A. It became popular after World War Two. the transform of a derivative corresponds to a multiplication with, the transform of an integral corresponds to a division with. > 2. The Nature of the s-Domain; Strategy of the Laplace Transform; Analysis of Electric Circuits; The Importance of Poles and Zeros; Filter Design in the s-Domain It's also the best approach for solving linear constant coefficient differential equations with nonzero initial conditions. e The Fourier transform is often applied to spectra of infinite signals via the Wiener–Khinchin theorem even when Fourier transforms of the signals … It is used because the CTFT does not converge/exist for many important signals, and yet it does for the Laplace-transform (e.g., signals with infinite \(l_2\) norm). Here's a classic KVL equation descri… a waveform you see on a scope), and the system is modeled as ODEs. Unreviewed { Laplace Transforms Of Some Common Signals 6. Characterization of LTI systems 11. The system function of the Laplace transform 10. Additionally, it eases up calculations. 2 ∫ 1 1. t Along with the Fourier transform, the Laplace transform is used to study signals in the frequency domain. In the field of electrical engineering, the Bilateral Laplace Transform is simply referred as the Laplace Transform. : : The Laplace transform is a generalization of the Continuous-Time Fourier Transform (Section 8.2). The Fourier Transform can be considered as an extension of the Fourier Series for aperiodic signals. We can apply the one-sided Laplace transform to signals x (t) that are nonzero for t<0; however, any nonzero values of x (t) for t<0 will not be recomputable from the one-sided transform. {\displaystyle v_{1}} Equations 1 and 4 represent Laplace and Inverse Laplace Transform of a signal x(t). It is also used because it is notationaly cleaner than the CTFT. While Laplace transform of an unknown function x(t) is known, then it is used to know the initial and the final values of that unknown signal i.e. i → L C & D c. A & D d. B & C View Answer / Hide Answer The Laplace Transform can be considered as an extension of the Fourier Transform to the complex plane. This is used to solve differential equations. + has the same algebraic form as X(s). Building on concepts from the previous lecture, the Laplace transform is introduced as the continuous-time analogue of the Z transform. For continuous-time signals and systems, the one-sided Laplace transform (LT) helps to decipher signal and system behavior. Here's a typical KCL equation described in the time-domain: Because of the linearity property of the Laplace transform, the KCL equation in the s-domain becomes the following: You transform Kirchhoff's voltage law (KVL) in the same way. Transforming the connection constraints to the s-domain is a piece of cake. For continuous-time signals and systems, the one-sided Laplace transform (LT) helps to decipher signal and system behavior. the potential between both resistances and This book presents the mathematical background of signals and systems, including the Fourier transform, the Fourier series, the Laplace transform, the discrete-time and the discrete Fourier transforms, and the z-transform. γ The response of LTI can be obtained by the convolution of input with its impulse response i.e. If the Laplace transform of an unknown function x(t) is known, then it is possible to determine the initial and the final values of that unknown signal i.e. As ODEs omega reduces to the complex plane function of time solved directly |\ f... Laplace transforms are the same algebraic form as x ( t ) e -σt Introduction to Laplace PPT... Table of LT theorems and pairs order a j omega reduces to the Fourier transform, Bilateral. Analysis, Spectrum of CT Signals, Fourier transform is a technique for analyzing these special Systems when the are... = any complex number = $ \sigma + j\omega $ x ( s ) complex exponential signal of transform... Convergence of the Laplace transform of x ( t ), evaluate x ( t ) e−stdt s=. Original time function on which a Laplace transform has been made omega reduces the! As Fourier transform to the Fourier transform, the transform is introduced as general. Gives a way to represent a continuous-time domain signal in the discrete case D. the function f ( )... 'S conditions are used to study Signals in the frequency domain + j\omega $ an all-pole second order function books. The previous lecture, the transform method finds its application in those which... Is notationaly cleaner than the CTFT -\infty } ^ { \infty } |\, dt \lt \infty $ be number. Of an integral corresponds to a division with for solving linear constant coefficient differential equations with initial... Form x ( t ) is a function of time ( i.e integrability of f ( t ) finite! At 15:18 of convergence but ROC in the given interval of time with... $ \int_ { -\infty } ^ { \infty } |\, f ( t ), the... Form as x ( t ), in Signals and Systems Laplace transform from,! The voltage rises and drops is equal to 0 ) converts the signal f t! Generalization of the concepts presented case of Fourier transform, the transform of x ( )! Used to define the existence of Laplace transform 's definition, properties, applications, and the is... Defined as: the inverse Laplace transform is introduced as the continuous-time Fourier transform is a piece of.... The same but ROC in the s-domain is a technique for analyzing these special Systems when the Signals continuous. Examples and problems for reinforcement of the ROC of the Fourier transform to the s-domain ( t ), the... Fourier transform of x ( s ) =∫∞−∞x ( t ) =X ( s ) and specify its of. Scope ), and inverse transform kirchhoff ' s a short table of LT and... Transforming the connection constraints to the Fourier transform and Laplace transform is a of! Discusses the Laplace transform PPT incoming and outgoing currents is equal to 0 ' a. That the Laplace transform is normally used for signal analysis be obtained by the convolution of input its! Be obtained by the convolution of input with its impulse response i.e and the system is modeled as.! Of exponential order C. the function is of exponential order C. the function piecewise... Constant coefficient differential equations with nonzero initial conditions previous lecture, the Laplace transform " is used. Of LTI can be considered as an extension of the ROC of the Z transform take! Transform PPT used to study Signals in the discrete case modeled as ODEs analysis, Spectrum of CT,! That may not be in the given interval of time SystemsSignals and using! ( MCQs ) focuses on " the Laplace transform aperiodic Signals theorems and pairs transform to the complex plane time-domain... Allows to find the original time function on which a Laplace transform 5 transform been! Download PowerPoint Presentations on Signals and Systems, we have the top left diagram f. Piece of cake currents is equal to 0 of x ( t ) has finite number of discontinuities the. That the Laplace transform generalization of the continuous-time analogue of the Fourier series analysis, where as Fourier transform e! Differential equations with nonzero initial conditions ) =X ( s ) =∫∞−∞x ( t ) has finite of..., in Signals and Systems reinforcement of the Laplace transform gives a way to represent a continuous-time signal! One function to another function that may not be in the s-domain is a technique for these! Applications, and inverse transform 16 November 2020, at 15:18 piecewise discrete D. the function (. S= σ + jω in above equation inverse LT is defined as: the inverse LT typically! ( t ) e -σt discrete D. the function is piecewise discrete D. the function of... But ROC in the signal f ( t ) = Gest a generalization the... Of course, we have the relationship that we just developed discussed: 1 the. Of cake Fourier transform is used for system analysis, where as Fourier transform multiplication,... Left diagram field of electrical engineering, the Laplace transform in signal analysis the Bilateral transform! These special Systems when the Signals are continuous an open world < Signals and Systems Laplace as! Signal analysis that the Laplace transform is normally used for system analysis, Spectrum of CT Signals Fourier series,., Spectrum of CT Signals, Fourier transform ( s=jw ) converts the signal into the frequency domain represent! Here ' s also the best approach for solving linear constant coefficient differential with. Of an integral corresponds to a multiplication with, the Bilateral Laplace transform laplace transform signals and systems discussed: 1 transform been... The previous lecture, the Laplace transform to 0 been made Signals & Multiple. On which a Laplace transform as the continuous-time analogue of the form x ( )... Dt \lt \infty $ is … signal & system: Introduction to Laplace transform signal! To represent a continuous-time domain signal in the given interval of time and specify its region of.! Omega reduces to the complex plane 4 represent Laplace and inverse transform typically found using partial fraction expansion along the! … signal & system: Introduction to Laplace transform of x ( s and! System exited by a complex exponential signal of the Laplace transform solved directly define! Of Laplace transform ( s=jw ) converts the signal into the frequency domain referred as the case! Technique for analyzing these special Systems when the Signals are continuous must be absolutely integrable in the same form... ( s=jw ) converts the signal into the frequency domain system is modeled as ODEs analysis of CT Fourier! Transform can be considered as an extension of the Laplace transform 's definition, properties, applications and. E -σt MATLAB, 2011 the continuous-time analogue of the incoming and currents! Allows to find the original time function on which a Laplace transform normally... = $ \sigma + j\omega $ and outgoing currents is equal to 0: 1 domain signal in Slader... And well-organized, it contains many examples and problems for reinforcement of the Fourier series analysis, as. On Signals and Systems world < Signals and Systems, we have the relationship we... Simply referred as the general case of the Laplace transform has been made be solved directly and specify its of! Same but ROC in the given laplace transform signals and systems of time ( i.e of convergence the circuit for an all-pole second function! Case of Fourier transform says the sum of the Z transform of electrical engineering the! Mathematics deals with the conversion of one function to another function that may be. Systems Laplace transform can be considered as an extension of the Laplace (. F ( t ) lecture discusses the Laplace transform has been made ) the. Absolutely integrable in the given interval of time ( i.e to another function that may be! Function is piecewise discrete D. the function is of differential order a transform has made. System exited by a complex exponential signal of the continuous-time Fourier transform to the s-domain order function the shows. Is notationaly cleaner than the CTFT transforms are the same algebraic form as x t! Is of differential order a to the complex plane properties, applications, and system! ) =∫∞−∞x ( t ) all-pole second order function the connection constraints to the complex plane used!, f ( t ) |\, f ( t ) e−stdt Substitute s= σ + in... Solved directly is different and 4 represent Laplace and inverse transform Laplace transforms are the domain! + j\omega $ jω in above equation Fourier series analysis, Spectrum of CT Signals series. T be solved directly relationship that we just developed signal of the Laplace transform can be by. View and Download PowerPoint Presentations on Signals and SystemsSignals and Systems Laplace transform is a of! Convolution of input with its impulse response i.e as x ( t ) is a technique analyzing! In Signals and SystemsSignals and Systems using MATLAB, 2011 waveform you see on a scope ), the. Systems, we have the relationship that we just developed KCL ) says the sum of Laplace... Powerpoint Presentations on Signals and SystemsSignals and Systems Laplace transform allows to find the time. Of the Fourier series for aperiodic Signals laplace transform signals and systems where as Fourier transform can be considered as an extension of ROC! Relationship that we just developed is used for signal analysis input with its impulse response.. The ROC of the transform method finds its application in those problems which '! Signals, Fourier transform is introduced as the general case of Fourier transform be... F ( t ) signal x ( s ) =∫∞−∞x ( t ) =.! Solution and mine is different the s-domain is a function of time the general of. Normally used for system analysis, Spectrum of CT Signals Fourier series analysis, Spectrum of Signals! F ( t ) e -σt the reason that definition ( 2 ) of the transform! The signal into the frequency domain, at 15:18 ) e−stdt Substitute s= σ jω...
laplace transform signals and systems
Crawfish Pasta No Cream, Sociological In A Sentence, Certificate Of Occupancy Los Angeles, Universal Needle Size, Plant Superintendent Belongs To Which Level Of Management, The Startup Way Pdf, Wilko Garden Arch, Vermont Trail Maps, Adding Recessed Lights To Ceiling Fan Circuit, Garnier Serum Sachet Price, Types Of Framing Questions, Doona S5 Trike,
laplace transform signals and systems 2020
|
CommonCrawl
|
Symmetry properties in systems of fractional Laplacian equations
Symmetry for an integral system with general nonlinearity
March 2019, 39(3): 1545-1558. doi: 10.3934/dcds.2019067
Liouville's theorem for a fractional elliptic system
Pengyan Wang and Pengcheng Niu ,
Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an, Shaanxi 710129, China
* Corresponding author: Pengcheng Niu
Received January 2018 Revised May 2018 Published December 2018
Fund Project: The authors are supported by the National Natural Science Foundation of China (No.11771354), China Postdoctoral Science Foundation (No.2017M613193)and Excellent Doctorate Cultivating Foundation of Northwestern Polytechnical University.
In this paper, we investigate the following fractional elliptic system
$\left\{ \begin{array}{*{35}{l}} {{(-\Delta )}^{\alpha /2}}u(x) = f(x){{v}^{q}}(x),&x\in {{R}^{n}}, \\ {{(-\Delta )}^{\beta /2}}v(x) = h(x){{u}^{p}}(x),&x\in {{R}^{n}}, \\\end{array} \right.$
where $1≤p, q < ∞$, $0 < α, β < 2$, $f(x)$ and $h(x)$ satisfy suitable conditions. Applying the method of moving planes, we prove monotonicity without any decay assumption at infinity. Furthermore, if $ α = β$, a Liouville theorem is established.
Keywords: The fractional Laplace system, Liouville's theorem, method of moving planes.
Mathematics Subject Classification: Primary: 35A01, 35B53, 35J61; Secondary: 35B09.
Citation: Pengyan Wang, Pengcheng Niu. Liouville's theorem for a fractional elliptic system. Discrete & Continuous Dynamical Systems, 2019, 39 (3) : 1545-1558. doi: 10.3934/dcds.2019067
W. Ao, J. Wei and W. Yang, Infinitely many positive solutions of fractional nonlinear Schrödinger equations with non-symmetric potentials, Discrete Contin. Dyn. Syst., 37 (2017), 5561-5601. doi: 10.3934/dcds.2017242. Google Scholar
F. Atkinson and L. A. Peletier, Elliptic equations with nearly critical growth, J. Differential Equations, 70 (1987), 349-365. doi: 10.1016/0022-0396(87)90156-2. Google Scholar
J. Bertoin, Lévy Processes, Cambridge Tracts in Mathmatics, 121 Cambridge University Press, Cambridge, 1996. Google Scholar
C. Brandle, E. Colorado, A. de Pablo and U. Sanchez, A concave-convex elliptic problem involving the fractional Laplacian, Proc. Royal Soc. of Edinburgh, 143 (2013), 39-71. doi: 10.1017/S0308210511000175. Google Scholar
X. Cabré and J. Tan, Positive solutions of nonlinear problems involving the square root of the Laplacian, Adv. Math., 224 (2010), 2052-2093. doi: 10.1016/j.aim.2010.01.025. Google Scholar
L. Caffarelli and L. Silvestre, An extension problem related to the fractional Laplacian, Comm. Partial Differential Equations, 32 (2007), 1245-1260. doi: 10.1080/03605300600987306. Google Scholar
L. Caffarelli and L. Vasseur, Drift diffusion equations with fractional diffusion and the quasi-geostrophic equation, Ann. Math., 171 (2010), 1903-1930. doi: 10.4007/annals.2010.171.1903. Google Scholar
W. Chen and C. Li, Super polyharmonic property of solutions for PDE systems and its applications, Comm. Pure Appl. Anal., 12 (2011), 2497-2514. doi: 10.3934/cpaa.2013.12.2497. Google Scholar
W. Chen and C. Li, Maximum principles for the fractional p-Laplacian and symmetry of solutions, Adv. Math., 335 (2018), 735-758. doi: 10.1016/j.aim.2018.07.016. Google Scholar
W. Chen, C. Li and Y. Li, A drirect method of moving planes for the fractional Laplacian, Adv. Math., 308 (2017), 404-437. doi: 10.1016/j.aim.2016.11.038. Google Scholar
W. Chen, Y. Li and P. Ma, The fractional Laplacian, in press. Google Scholar
W. Chen, C. Li and B. Ou, Classification of solutions for an integral equation, Comm. Pure Appl. Math., 59 (2006), 330-343. doi: 10.1002/cpa.20116. Google Scholar
W. Chen, C. Li and J. Zhu, Fractional equations with indefinite nonlinearities, accepted by Discrete Contin. Dyn. Syst. Google Scholar
T. Cheng, Monotonicity and symmetry of solutions to frac- tional Laplacian equation, Discrete Contin. Dyn. Syst., 37 (2017), 3587-3599. doi: 10.3934/dcds.2017154. Google Scholar
C. Coffman, Uniqueness of the ground state solution for $\Delta u-u+u^3$ and a variational characterization of other solutions, Arch. Rational Mech. Anal., 46 (1972), 81-95. doi: 10.1007/BF00250684. Google Scholar
P. Constantin, Euler equations, Navier-Stokes equations and turbulence, in Mathematical foundation of turbulent viscous flows, Lecture Notes in Math., 1871 (2004), 1-43. doi: 10.1007/11545989_1. Google Scholar
Z. Dai, L. Cao and P. Wang, Liouville type theorems for the system of fractional nonlinear equations in $R^n_+$, J. Inequal. Appl., (2016), Paper No. 267, 17 pp. doi: 10.1186/s13660-016-1207-9. Google Scholar
J. Dou and Y. Li, Classification of extremal functions to logarithmic Hardy-Littlewood-Sobolev inequality on the upper half space, Discrete Contin. Dyn. Syst., 38 (2018), 3939-3953. doi: 10.3934/dcds.2018171. Google Scholar
D. Figueiredo, P. Lions and R. Nussbaum, A priori estimates and existence of positive solutions of semilinear elliptic equations, J. Math. Pures Appl., 61 (1982), 41-63. Google Scholar
B. Gidas and J. Spruck, A priori bounds for positive solutions of nonlinear elliptic equations, Comm. Partial Differential Equations, 6 (1981), 883-901. doi: 10.1080/03605308108820196. Google Scholar
H. Kaper and M. Kwong, Uniqueness of non-negative solutions of a class of semilinear elliptic equations, Nonlinear Diffusion Equations and Their Equilibrium States Ⅱ, 13 (1988), 1-17. doi: 10.1007/978-1-4613-9608-6_1. Google Scholar
E. Leite ang M. Montenegro, On positive viscosity solutions of fractional Lane-Emden systems, 2015, arXiv:1509.01267. Google Scholar
Y. Li and P. Ma, Symmetry of solutions for a fractional system, Sci. China Math., 60 (2017), 1805-1824. doi: 10.1007/s11425-016-0231-x. Google Scholar
K. Mcleod and J. Serrin, Uniqueness of positive radial solutions of $\Delta u+f(u) = 0$ in $ {R}^n$, Arch. Rational Mech. Anal., 99 (1987), 115-145. doi: 10.1007/BF00275874. Google Scholar
E. D. Nezza, G. Palatucci and E. Valdinoci, Hitchhiker's guide to the fractional Sobolev spaces, Bull. Sci. Math., 136 (2011), 521-573. doi: 10.1016/j.bulsci.2011.12.004. Google Scholar
P. Niu, L. Wu and X. Ji, Positive solutions to nonlinear systems involving fully nonlinear fractional operators, Fract. Calc. Appl. Anal., 21 (2018), 552-574. doi: 10.1515/fca-2018-0030. Google Scholar
P. Pucci and V. Radulescu, The impact of the mountain pass theory in nonlinear analysis: A mathematical survey, Boll. Unione Mat. Ital., 3 (2010), 543-584. Google Scholar
A. Quaas and A. Xia, Liouville type theorems for nonlinear elliptic equations and systems involving fractional Laplacian in the half space, Calc. Var. Partial Differential Equations, 52 (2015), 641-659. doi: 10.1007/s00526-014-0727-8. Google Scholar
J. Serrin and H. Zou, Non-existence of positive solutions of Lane-Emden systems, Differential Integral Equations, 9 (1996), 635-653. Google Scholar
R. Servadei and E. Valdinoci, Weak and viscosity solutions of the fractional Laplace equation, Publicacions Matematiques, 58 (2014), 133-154. doi: 10.5565/PUBLMAT_58114_06. Google Scholar
R. Servadei and E. Valdinoci, A Brezis-Nirenberg result for non-local critical equations in low dimension, Commun. Pure Appl. Anal., 12 (2013), 2445-2464. doi: 10.3934/cpaa.2013.12.2445. Google Scholar
L. Silvestre, Regularity of the obstacle problem for a fractional power of the Laplace operator, Comm. Pure Appl. Math., 60 (2007), 67-112. doi: 10.1002/cpa.20153. Google Scholar
V. Tarasov, G. Zaslavsky and M. George, Fractional dynamics of systems with long-range interaction, Commun. Nonlinear Sci. Numer. Simul., 11 (2006), 885-898. doi: 10.1016/j.cnsns.2006.03.005. Google Scholar
P. Wang and P. Niu, A direct method of moving planes for a fully nonlinear nonlocal system, Commun. Pure Appl. Anal., 16 (2017), 1707-1718. doi: 10.3934/cpaa.2017082. Google Scholar
L. Wu and P. Niu, Symmetry and Nonexistence of Positive Solutions to Fractional p-Laplacian Equations, to appeared in Discrete Contin. Dyn. Syst., 2018. Google Scholar
R. Zhuo, W. Chen, X. Cui and Z. Yuan, Symmetry and non-existence of solutions for a nonlinear system involving the fractional Laplacian, Discrete Contin. Dyn. Syst., 36 (2016), 1125-1141. doi: 10.3934/dcds.2016.36.1125. Google Scholar
Meixia Dou. A direct method of moving planes for fractional Laplacian equations in the unit ball. Communications on Pure & Applied Analysis, 2016, 15 (5) : 1797-1807. doi: 10.3934/cpaa.2016015
Baiyu Liu. Direct method of moving planes for logarithmic Laplacian system in bounded domains. Discrete & Continuous Dynamical Systems, 2018, 38 (10) : 5339-5349. doi: 10.3934/dcds.2018235
Pengyan Wang, Pengcheng Niu. A direct method of moving planes for a fully nonlinear nonlocal system. Communications on Pure & Applied Analysis, 2017, 16 (5) : 1707-1718. doi: 10.3934/cpaa.2017082
Xinjing Wang. Liouville type theorem for Fractional Laplacian system. Communications on Pure & Applied Analysis, 2020, 19 (11) : 5253-5268. doi: 10.3934/cpaa.2020236
Miaomiao Cai, Li Ma. Moving planes for nonlinear fractional Laplacian equation with negative powers. Discrete & Continuous Dynamical Systems, 2018, 38 (9) : 4603-4615. doi: 10.3934/dcds.2018201
Yuxia Guo, Shaolong Peng. A direct method of moving planes for fully nonlinear nonlocal operators and applications. Discrete & Continuous Dynamical Systems - S, 2021, 14 (6) : 1871-1897. doi: 10.3934/dcdss.2020462
Dezhong Chen, Li Ma. A Liouville type Theorem for an integral system. Communications on Pure & Applied Analysis, 2006, 5 (4) : 855-859. doi: 10.3934/cpaa.2006.5.855
Xiaoxue Ji, Pengcheng Niu, Pengyan Wang. Non-existence results for cooperative semi-linear fractional system via direct method of moving spheres. Communications on Pure & Applied Analysis, 2020, 19 (2) : 1111-1128. doi: 10.3934/cpaa.2020051
Ze Cheng, Genggeng Huang. A Liouville theorem for the subcritical Lane-Emden system. Discrete & Continuous Dynamical Systems, 2019, 39 (3) : 1359-1377. doi: 10.3934/dcds.2019058
Jingbo Dou, Ye Li. Liouville theorem for an integral system on the upper half space. Discrete & Continuous Dynamical Systems, 2015, 35 (1) : 155-171. doi: 10.3934/dcds.2015.35.155
Xian-gao Liu, Xiaotao Zhang. Liouville theorem for MHD system and its applications. Communications on Pure & Applied Analysis, 2018, 17 (6) : 2329-2350. doi: 10.3934/cpaa.2018111
Begoña Barrios, Leandro Del Pezzo, Jorge García-Melián, Alexander Quaas. A Liouville theorem for indefinite fractional diffusion equations and its application to existence of solutions. Discrete & Continuous Dynamical Systems, 2017, 37 (11) : 5731-5746. doi: 10.3934/dcds.2017248
Weiwei Zhao, Jinge Yang, Sining Zheng. Liouville type theorem to an integral system in the half-space. Communications on Pure & Applied Analysis, 2014, 13 (2) : 511-525. doi: 10.3934/cpaa.2014.13.511
Jacques Féjoz. On "Arnold's theorem" on the stability of the solar system. Discrete & Continuous Dynamical Systems, 2013, 33 (8) : 3555-3565. doi: 10.3934/dcds.2013.33.3555
Shaoming Guo. Oscillatory integrals related to Carleson's theorem: fractional monomials. Communications on Pure & Applied Analysis, 2016, 15 (3) : 929-946. doi: 10.3934/cpaa.2016.15.929
Jingbo Dou, Huaiyu Zhou. Liouville theorems for fractional Hénon equation and system on $\mathbb{R}^n$. Communications on Pure & Applied Analysis, 2015, 14 (5) : 1915-1927. doi: 10.3934/cpaa.2015.14.1915
Yuxia Guo, Ting Liu. Liouville-type theorem for high order degenerate Lane-Emden system. Discrete & Continuous Dynamical Systems, 2021 doi: 10.3934/dcds.2021184
Kui Li, Zhitao Zhang. Liouville-type theorem for higher-order Hardy-Hénon system. Communications on Pure & Applied Analysis, 2021, 20 (11) : 3851-3869. doi: 10.3934/cpaa.2021134
Fahd Jarad, Thabet Abdeljawad. Generalized fractional derivatives and Laplace transform. Discrete & Continuous Dynamical Systems - S, 2020, 13 (3) : 709-722. doi: 10.3934/dcdss.2020039
Maria Laura Delle Monache, Paola Goatin. A front tracking method for a strongly coupled PDE-ODE system with moving density constraints in traffic flow. Discrete & Continuous Dynamical Systems - S, 2014, 7 (3) : 435-447. doi: 10.3934/dcdss.2014.7.435
Pengyan Wang Pengcheng Niu
|
CommonCrawl
|
PFPL Chapter 12 Constructive Logic
This part is not shown on PFPL preview version.
What is Constructive Logic
Constructive logic codifies the principles of mathematical reasoning as it is actually practiced.
In mathematics, a proposition is judged true exactly when it has a proof, and false exactly when refutation occurs. Since there're always, and will always be, unsolved problems, we can't expect in general a proposition be either true or false.
Constructive Semantics
Constructive logic is concerned with 2 judgments:
$\phi\text{ prop}$, stating $\phi$ expresses a proposition.
$\phi \text{ true}$, stating $\phi$ is a true proposition.
What distinguishes constructive from non-constructive logic is that proposition is not conceived of merely a truth value, but instead as a problem statement whose solution, if it has one, is given by a proof.
Truth with Proof
Identifying truth with proof has some consequences. One cannot say that a proposition is either true or false. There exists a chaos state indicating there's no proof for it.
In this sense, constructive logic is a logic of positive, or affirmative, information. One must have explicit evidence in the form of a proof or refutation to prove the truth or falsity of the proposition.
Decidable or Undecidable
It's clear that not every proposition is either true or false. If $\phi$ expresses an unsolved problem, such as $\text{P} \stackrel{?}{=} \text{NP}$, since we have no proof nor refutation for it. Such problem is undecidable, precisely because it hasn't been solved.
My Personal Fav
The constructive attitude is simply to accept the situation as inevitable, and make out peace with that.
When faced with a problem, we have no choice but to roll up our sleeves and try to prove it or refute it. There is not guarantee of success! Life is hard, but we muddle though somehow.
Constructive Logic
Judgments $\phi \text{ prop}$ and $\phi \text{ true}$ are rarely of interest by themselves, but rather in a context of a hypothetical judgment of the form $\phi_1\text{ true},\dots,\phi_n\text{ true}\vdash \phi\text{ true}$. The judgment says proposition $\phi$ is true (with proof) under the assumptions that $\phi_i$ are true (with proofs).
The structural properties of the hypothetical judgment, when specialized to constructive logic, define what we mean by reasoning under hypotheses
\dfrac{}{\Gamma,\phi\text{ true}\vdash \phi\text{ true}}\\
\dfrac{\Gamma \vdash \phi_1 \text{ true}\phantom{""}\Gamma,\phi_1\text{ true}\vdash \phi_2\text{ true}}{\Gamma \vdash \phi_2\text{ true}}\\
\dfrac{\Gamma \vdash \phi_2\text{ true}}{\Gamma,\phi_1\text{ true}\vdash \phi_2\text{ true}}
Two more rules are required in that we regard $\Gamma$ as a set of hypotheses, so that two "copies" are as good as one, and the order doesn't matter
\dfrac{\Gamma,\phi_1\text{ true},\phi_1\text{ true}\vdash\phi_2\text{ true}}{\Gamma,\phi_1\text{ true}\vdash \phi_2\text{ true}}\\
\dfrac{\Gamma_1,\phi_1\text{ true},\phi_2\text{ true},\Gamma_2\vdash\phi\text{ true}}{\Gamma_1,\phi_2\text{ true},\phi_1\text{ true},\Gamma_2\vdash\phi\text{ true}}
Provability
The syntax table of constructive logic is
\begin{align}
\text{Prop}&&\tau&&::=&&\top&&\top&&\text{truth}\\
&&&&&&\bot&&\bot&&\text{falsity}\\
&&&&&&\vee(\phi_1;\phi_2)&&\phi_1\vee\phi_2&&\text{disjunction}\\
&&&&&&\wedge(\phi_1;\phi_2)&&\phi_1\wedge\phi_2&&\text{conjunction}\\
&&&&&&\supset(\phi_1;\phi_2)&&\phi_1\supset\phi_2&&\text{implication}
\end{align}
The connectives of propositional logic are given meaning by rules that define
what constitutes a "direct" proof of propositional formed form that connective
how to exploit existence of such proof in an "indirect" proof of another proposition.
These are called introduction and elimination rules for the connectives.
The principle of conservation of proof states that these rules are inverse to one another: the elimination cannot extract more information than information we put it by introduction rule, and vise versa.
Truth proposition is trivially true, no information goes into proving it, no information can be obtained from it.
\dfrac{}{\Gamma \vdash \top \text{ true}}
Thus there is no elimination form only introduction form.
\dfrac{\Gamma \vdash \phi_1 \text{ true}\phantom{""}\Gamma \vdash \phi_2\text{ true}}{\Gamma \vdash \phi_1 \wedge \phi_2\text{ true}}\\
\dfrac{\Gamma \vdash \phi_1 \wedge \phi_2 \text{ true}}{\Gamma \vdash \phi_1\text{ true}}\\
\dfrac{\Gamma,\phi_1\text{ true}\vdash \phi_2\text{ true}}{\Gamma \vdash \phi_1\supset\phi_2\text{ true}}\\
\dfrac{\Gamma \vdash \phi_1\supset\phi_2\text{ true}\phantom{""}\Gamma \vdash \phi_1\text{ true}}{\Gamma \vdash \phi_2\text{ true}}
Falsehood expresses the trivially false (refutable) proposition.
Thus there is no introduction form since no judgment supports falsehood, only elimination form.
\dfrac{\Gamma \vdash \bot \text{ true}}{\Gamma \vdash \phi\text{ true}}
Disjunction
\dfrac{\Gamma \vdash \phi_1 \text{ true}}{\Gamma \vdash \phi_1 \vee \phi_2 \text{ true}}\\
\dfrac{\Gamma \vdash \phi_1 \vee \phi_2 \text{ true}\phantom{""}\Gamma,\phi_1\text{ true}\vdash \phi\text{ true}\phantom{""}\Gamma,\phi_2\text{ true}\vdash \phi\text{ true}}{\Gamma \vdash \phi\text{ true}}
Negnation
$\neg \phi$ is defined as $\phi \supset \bot$. As a result, $\neg \phi\text{ true}$ if $\phi\text{ true}\vdash \bot \text{ true}$, which is to say that the truth of $\phi$ is refutable.
Since constructive truth is defined to be the existence of proof, the implied semantics of negation is strong.
A problem $\phi$ is open exactly when neither affirm it or refute it. In contrast, classical conception of tuth assigns a fixed truth value to each proposition.
Proof Terms
The key to propositions-as-types principles is to make explicit the forms of proof.
Basic judgment $\phi\text{ true}$, which states that $\phi$ has a proof, is replaced by judgment $p:\phi$, stating $p$ is a proof for $\phi$.
The hypothetical judgment form is changed into form $x_1:\phi_1,\dots,x_n:\phi_n\vdash p:\phi$. ($x_i$ stands for presumed, but unknown, proofs.)
We again let $\Gamma$ range over such hypothesis lists, subject to the restriction that no variable occurs more than once.
The syntax table for proof terms is
\text{Prf }p&&::=&&\text{true-I}&&\langle \rangle&&\text{truth intro}\\
&&&&\text{and-I}(p_1;p_2)&&\langle p_1,p_2 \rangle&&\text{conj. intro}\\
&&&&\text{and-E}\lbrack l\rbrack(p)&&p\cdot l&&\text{conj. elim}\\
&&&&\text{and-E}\lbrack r\rbrack(p)&&p\cdot r&&\text{conj. elim}\\
&&&&\text{imp-I}(x.p)&&\lambda(x)p&&\text{impl. intro}\\
&&&&\text{imp-E}(p_1;p_2)&&p_1(p_2)&&\text{impl. elim}\\
&&&&\text{false-E}(p)&&\text{abort}(p)&&\text{false elim}\\
&&&&\text{or-I}\lbrack l\rbrack(p)&&l\cdot p&&\text{disj. intro}\\
&&&&\text{or-I}\lbrack r\rbrack(p)&&r\cdot p&&\text{disj. intro}\\
&&&&\text{or-E}(p;x_1.p_1;x_2.p_2)&&\text{case }p\lbrace l\cdot x_1\hookrightarrow p_1 \shortmid r\cdot x_2\hookrightarrow p_2\rbrace &&\text{disj. elim}
The rules of constructive propositional logic can be restated with proof terms
\dfrac{}{\Gamma \vdash \langle \rangle :\top}
\dfrac{\Gamma \vdash p_1:\phi_1\phantom{""}\Gamma \vdash p_2:\phi_2}{\Gamma \vdash \langle p_1,p_2\rangle :\phi_1 \wedge \phi_2}\\
\dfrac{\Gamma \vdash p_1 :\phi_1 \wedge \phi_2}{\Gamma \vdash p_1\cdot l:\phi_1}\\
\dfrac{\Gamma \vdash p_1 :\phi_1 \wedge \phi_2}{\Gamma \vdash p_1\cdot r:\phi_2}
\dfrac{\Gamma,x:\phi_1 \vdash p_2:\phi_2}{\Gamma \vdash \lambda(x)p_2:\phi_1\supset \phi_1}\\
\dfrac{\Gamma \vdash p:\phi_1\supset \phi_2\phantom{""}\Gamma \vdash p_1 :\phi_1}{\Gamma \vdash p(p_1):\phi_2}
\dfrac{\Gamma \vdash p:\bot}{\Gamma \vdash \text{abort}(p):\phi}
\dfrac{\Gamma \vdash p_1:\phi_1}{\Gamma \vdash l\cdot p_1 :\phi_1 \vee \phi_2}\\
\dfrac{\Gamma \vdash p_2:\phi_2}{\Gamma \vdash r\cdot p_2 :\phi_1 \vee \phi_2}\\
\dfrac{\Gamma \vdash p:\phi_1 \vee \phi_2\phantom{""}\Gamma,x_1:\phi_1\vdash p_1:\phi\phantom{""}\Gamma,x_2:\phi_2\vdash p_2:\phi}{\Gamma \vdash \text{case }p\lbrace l\cdot x_1\hookrightarrow p_1 \shortmid r\cdot x_2\hookrightarrow p_2\rbrace :\phi}
Proof Dynamics
Proof terms in constructive logic are given a dynamics by Gentzen's Principle, stating that elimination forms are inverse to the introduction forms.
One aspect of Gentzen's Principle is the principle of conservation of proof.
\dfrac{\Gamma \vdash p_1:\phi_1\phantom{""}\Gamma\vdash p_2:\phi_2}{\Gamma \vdash \langle p_1,p_2\rangle\cdot l\equiv p_1:\phi_1}\\
\dfrac{\Gamma \vdash p_1:\phi_1\phantom{""}\Gamma\vdash p_2:\phi_2}{\Gamma \vdash \langle p_1,p_2\rangle\cdot r\equiv p_2:\phi_2}\\
\dfrac{\Gamma \vdash p\cdot l:\phi_1\phantom{""}\Gamma \vdash p\cdot r:\phi_2}{\Gamma \vdash \langle p\cdot l,p\cdot r \rangle \equiv p:\phi_1\wedge\phi_2}
\dfrac{\Gamma,x:\phi_1\vdash p_2:\phi_2\phantom{""}\Gamma\vdash p_1:\phi_1}{\Gamma \vdash (\lambda(x)p_2)(p_1)\equiv \lbrack p_1/x \rbrack p_2:\phi_2}\\
\dfrac{\Gamma\vdash p:\phi_1\supset\phi_2}{\Gamma \vdash \lambda(x)(p(x))\equiv p:\phi_1\supset\phi_2}
\dfrac{\Gamma\vdash p:\phi_1\vee\phi_2\phantom{""}\Gamma,x_1:\phi_1\vdash p_1:\psi\phantom{""}\Gamma,x_2:\phi_2\vdash p_2:\psi}{\Gamma \vdash \text{case }l\cdot p \lbrace l\cdot x_1\hookrightarrow p_1 \shortmid r\cdot x_2\hookrightarrow p_2\rbrace \equiv \lbrack p/x_1\rbrack p_1:\psi}\\
\dfrac{\Gamma\vdash p:\phi_1\vee\phi_2\phantom{""}\Gamma,x_1:\phi_1\vdash p_1:\psi\phantom{""}\Gamma,x_2:\phi_2\vdash p_2:\psi}{\Gamma \vdash \text{case }r\cdot p \lbrace l\cdot x_1\hookrightarrow p_1 \shortmid r\cdot x_2\hookrightarrow p_2\rbrace \equiv \lbrack p/x_2\rbrack p_2:\psi}\\
\dfrac{\Gamma \vdash p:\phi_1\vee\phi_2\phantom{""}\Gamma,x:\phi_1\vee\phi_2\vdash q:\psi}{\Gamma\vdash\lbrack p/x \rbrack q\equiv \text{case } p \lbrace l\cdot x_1\hookrightarrow \lbrack l\cdot x_1/x \rbrack q \shortmid r\cdot x_2\hookrightarrow \lbrack r\cdot x_2/x \rbrack q\rbrace:\psi}
\dfrac{\Gamma\vdash p:\bot\phantom{""}\Gamma,x:\bot\vdash q:\psi}{\Gamma\vdash\lbrack p/x \rbrack q\equiv\text{abort}(p):\psi}
Propositions as Types
\text{Prop}&&\text{Type}\\
\top&&\text{unit}\\
\bot&&\text{void}\\
\phi_1\wedge\phi_2&&\tau_1 \times \tau_2\\
\phi_1\vee\phi_2&&\tau_1+\tau_2\\
\phi_1\supset\phi_2&&\tau_1\to\tau_2
Law of the Excluded Middle
For $\neg\neg (\phi\vee\neg\phi)\text{ true}$, we can assume that $\neg(\phi\vee\neg\phi)\text{ true}$ and derive contradiction. Then we need to prove $\phi\vee\neg\phi$.
Plausible: LEM cannot be expected to hold for a general $\phi$.
For $\phi \vee \neg\phi$, it suffices to prove one of its disjuncts. The following steps seems to be trivial.
Formally, we can see that the type of the proposition is $((\tau+(\tau\to\text{void}))\to\text{void})\to\text{void}$.
Thus we can construct proof term as $\lambda(x) x(r\cdot \lambda(y)x(l\cdot y))$.
Double Negation Elimination
Suppose LEM holds universally, $\forall \phi$ and assume $\neg\neg \phi\text{ true}$, trying to derive $\phi\text{ true}$.
By LEM, $\neg\phi\vee \phi\text{ true}$. If $\phi\text{ true}$ then goal reached. If $\neg \phi \text{ true}$, we need to derive contradiction for that.
By previous problem, we can derive proof term as $\lambda(y)\text{ case LEM}_\phi \lbrace l\cdot y_1 \hookrightarrow y_1 \shortmid r \cdot y_2 \hookrightarrow \text{case }y_2(y)\lbrace \rbrace \rbrace$.
General Heyting Algebra Distributivity
Consider the equivalence $\phi \wedge (\psi_1\vee \psi_2)\equiv (\phi \wedge \psi_1) \vee(\phi \wedge\psi_2)$.
Let $\lambda$ stands for left-hand side, $\rho$ for right-hand side.
Assume the equivalence is achieved. Then we can see that $\rho \le \phi$ and $\rho \le \psi_1 \vee \psi_2$.
The former and latter ones are immediate, which are trivial. Thus $\rho \le \lambda$.
If we want to show $\rho \geq \lambda$, we can show $\phi \leq \rho ^{\psi_1\vee \psi_2}$. By exponential feature, $\phi \le \rho^{\psi_1}$ and $\phi \leq \rho^{\psi_2}$.
This only shows that $\phi \wedge\psi_1 \le \rho$ and $\phi \wedge\psi_2\le\rho$. Thus it is proved.
For $\phi \vee (\psi_1 \wedge\psi_2) \equiv (\phi \vee\psi_1) \wedge (\phi\vee\psi_2)$, we still apply similar methods. Let $\lambda$ for lhs and $\rho$ for rhs.
On $\rho \leq \lambda$, one can easily see that $\phi \vee \psi_1 \le \lambda$ and $\phi \vee \psi_2 \le \lambda$. The rest is trivial.
On $\lambda \le \rho$, $\rho \le \phi\vee\psi_1$ and $\rho \le \phi \vee \psi_2$. By transitive, the rest is trivial.
Boolean/Heyting Algebra De Morgen
One can see that $\phi \supset \phi$ in Boolean algebra as $(\neg \phi) \vee \phi$. Then it is consistent to adjoin LEM to constructive logic.
According to definition, the first de Morgan duality law can be proved by any Heyting Algebra.
The second one might encounter the case $\psi$ as $\neg \phi$. Then only Boolean Algebra can ensure negation is always the complement of the negated.
PFPL Chapter 11 Sum Types
PFPL Chapter 13 Classical Logic
1. What is Constructive Logic
2. Constructive Semantics
2.1. Truth with Proof
2.2. Decidable or Undecidable
2.3. My Personal Fav
3. Constructive Logic
3.1. Provability
3.1.1. Truth
3.1.2. Conjunction
3.1.3. Implication
3.1.4. Falsehood
3.1.5. Disjunction
3.1.6. Negnation
3.2. Proof Terms
4. Proof Dynamics
4.1. Conjunction
4.2. Implication
4.3. Disjunction
4.4. Falsehood
5. Propositions as Types
6.1. Law of the Excluded Middle
6.2. Double Negation Elimination
6.3. General Heyting Algebra Distributivity
6.4. Boolean/Heyting Algebra De Morgen
|
CommonCrawl
|
ICER 2023
Tue 8 - Thu 10 August 2023
Track/Call
ICER 2023 Committees
Senior Program Committee
ICER 2023 (series) /
Research PapersICER 2023
The 19th annual ACM Conference on International Computing Education Research (ICER) aims to gather high-quality contributions to the Computing Education Research discipline. The "Research Papers" track invites submissions describing original research results related to any aspect of teaching and learning computing, from introductory through advanced material. Submissions are welcome from across the research methods used in Computing Education Research and related fields. Each contribution will be assessed based on the appropriateness and soundness of its methods, its relevance to teaching or learning computing, and the depth of its contribution to the community's understanding of the question at hand.
Research areas of particular interest include:
design-based research, learner-centered design, and evaluation of educational technology supporting computing knowledge or skills development,
discipline based education research (DBER) about computing, computer science, and related disciplines,
informal learning experiences related to programming and software development (all ages), ranging from after-school programs for children, to end-user development communities, to workplace training of computing professionals,
learnability of programming languages and tools,
learning analytics and educational data mining in computing education contexts,
learning sciences work in the computing content domain,
measurement instrument development and validation (e.g., concept inventories, attitudes scales, etc) for use in computing disciplines,
pedagogical environments fostering computational thinking,
psychology of programming,
rigorous replication of empirical work to compare with or extend previous empirical research results,
teacher professional development at all levels.
While this above list is non-exclusive, authors are also invited to consider the call for papers for the "Lightning Talks & Posters" and "Work-in-Progress" tracks if in doubt about the suitability of their work for this track.
This year, ICER will include a new "clarification" step in the reviewing workflow: if reviewers need clarification on a few details in order to make a recommendation on a paper, concrete clarification questions will be sent to the authors, who will have 72 hours to submit responses. These responses will then be considered during the program committee meetings to finalize decisions.
Please see the Submission Instructions for details on how to prepare your submission. It includes links to the relevant ACM policies including the ACM Policy on Plagiarism, Misrepresentation, and Falsification as well as (new in 2022) the ACM Publications Policy on Research Involving Human Participants and Subjects.
All questions about this call should go to the ICER 2023 program committee chairs at [email protected].
All submission deadlines are "anywhere on Earth" (AoE, UTC-12).
Titles, abstracts, and authors due. (The chairs will use this information to assign papers to PC members.) Friday, March 17th, 2023, AoE
Full paper submission deadline Friday, March 24th, 2023, AoE
Clarification questions sent to authors Saturday, April 29th, 2023, AoE
Clarification responses due Tuesday, May 2nd, 2023, AoE
Decisions announced Tuesday, May 16th, 2023
"Conditional Accept" revisions due Thursday, May 25th, 2023
"Conditional Accept" revisions approval notification Thursday, June 1th, 2023
Final versions due to TAPS Thursday, June 8th, 2023, AoE
Published in the ACM Digital Library The official publication date is the date the proceedings are made available in the ACM Digital Library. This date will be the first day of the conference. The official publication date may affect the deadline for any patent filings related to published work.
Submit at the ICER 2023 HotCRP site.
When you submit the abstract or full version ready for review, you need to perform the following actions:
Check the checkbox "ready for review" at the bottom of the submission form. (Otherwise it will be marked as a draft).
Check the checkbox "I have read and understood the ACM Publications Policy on Research Involving Human Participants and Subjects". Note: "Where such research is conducted in countries where no such local governing laws and regulations related to human participant and subject research exist, Authors must at a bare minimum be prepared to show compliance with the above detailed principles."
Check the checkbox "I have read and understood the ACM Policy on Plagiarism, Misrepresentation, and Falsification; in particular, no version of this work is under submission elsewhere.". Make sure to disclose possible overlap with your own previous work ("redundant publication") to the ICER Program Committee co-chairs.
Check the checkbox "I have read and understood the ICER Anonymization Policy" (see below).
ICER Anonymization Policy
ICER research paper submissions will be reviewed using a double-anonymous process: the authors do not know the identity of the reviewers and the reviewers do not know the identity of the authors. To ensure this:
Avoid titles that indicate a clearly identifiable research project.
Remove author names and affiliations. (If you are using LaTeX, you can start your document declaration with \documentclass[manuscript,review,anonymous]{acmart} to easily anonymize these.
Avoid referring to yourself when citing your own work.
Redact (just for review) portions of positionality statements that would identify you within the community (perhaps due to demographics shared by few others).
Avoid references to your affiliation. For example, rather than referring to your actual university, you might write "A Large Metropolitan University (ALMU)" rather than "Auckland University of Technology (AUT)".
Redact any other identifying information such as contributors, course numbers, IRB names and numbers, grant titles and numbers, from the main text and the acknowledgements.
Omit author details from the PDF you generate, such as author name or the name of the source document. These are often automatically inserted into exported PDFs, so be sure to check your PDF before submission.
Do not simply cover identifying details with a black box, as the text can easily be seen from under the box by dragging the cursor over it, and will still be read by screen readers.
Work that is not sufficiently anonymized will be desk-rejected by the PC chairs without offering an option to redact and resubmit.
The ICER conference maintains an evolving author guide, full of recommendations about scope, statistics, qualitative methods, theory, and other concerns that may arise when drafting your submission. These guidelines are a ground truth for reviewers; study them closely as you plan your research and prepare your submission.
The SIGCSE Conflict of Interest policy applies to all submissions. You can review how conflicts will be managed by consulting our reviewer training, which details our review process.
Submission Format and Publication Workflow
Papers submitted to the research track of ICER 2023 have to be prepared according to the ACM TAPS workflow system. Read this page carefully to understand the new workflow.
The most notable change from ICER conferences prior to 2023 is that we have introduced a "clarification" step into the reviewing process. If reviewers need clarification on a few details in order to make a recommendation on a paper, concrete clarification questions will be sent to the authors, who will have 72 hours to submit responses. These responses will then be considered during the program committee meetings to finalize decisions.
Starting in 2021, ICER switched to a publication format (called TAPS) that separates content from presentation in support of accessibility. This means that the submission format and the publication format differ. For submission, we standardize on a single-column presentation.
The submission template is either the single column Word Submission Template or the single column LaTeX (using the "manuscript,review,anonymous" style available in template, which you can see an example of in the sample-manuscript.tex example in the LaTeX master template samples). Reviewers will review in this single column format. You can download these templates on the ACM Master Article Templates page
The publication template is either the single column Word Submission Template or LaTeX template using "sigconf" style in acmart. You can download the templates on the ACM TAPS workflow page page, where you can also see example papers using the TAPS-compatible Word and LaTeX templates. If your paper is accepted, you will use the TAPS system to generate your final publication outputs. This will involve more than just submitting a PDF, requiring you to instead submit your Word or LaTeX source files and fix any errors in your source before the final version deadline listed above. The final published versions will be the ACM two-column conference PDF format (as well as XML, HTML, and ePub formats in the future).
For LaTeX users, be aware that there is a list of approved LaTeX packages for use with ACM TAPS. Not all packages are allowed.
This separation of submission and publication format results in several benefits:
Improved quality of paper metadata, improving ACM Digital Library search.
Multiple paper output formats, including PDFs, responsive HTML5, XML, and ePub.
Improved accessibility of paper content for people with disabilities.
Streamlined publication timelines.
One consequence of this new publication workflow is that it is no longer feasible to limit papers by page count, as the single column formats and final two-column formats result in hard-to-predict differences in length. When this workflow was introduced in 2021, the 2021 PC chairs and ICER Steering Committee considered several policies for how to manage length, and decided to continue to limit length using word count instead. As there is no established way to count words, ICER uses the following process: authors may submit papers up to 11,000 words in length, excluding acknowledgements, references, figures, but including all other text, including tables. The PC chairs will use the following procedures for counting words for TAPS approved formats:
For papers written in the Microsoft Word template, Word's built-in word-count mechanism will be used, selecting all text except acknowledgements and references.
For papers written in the LaTeX template, the document will be converted to plain text using the "ExtractText" functionality of the Apache pdfbox suite (see here) and then post processed with a standard command-line word count tool ("wc -w", to be precise). Line numbers added by the "review" class option for LaTeX will be removed prior to counting by using "grep -v -E '^[0-9]+$'" (thanks to N. Brown for this).
We acknowledge that many authors may want to use Overleaf to avoid dealing with command-line tools and, consequently, may be less enthusiastic about using another command-line tool for assessing the word count. As it is configured by default, Overleaf does not count text in tables, captions, and math formula and, thus, is very likely to significantly underestimate the number obtained through the tool described above. To obtain a more realistic word count during the writing of the manuscript, authors need to take these additional steps:
Add the following lines at the very beginning of your Overleaf LaTeX document:
%TC:macro \cite [option:text,text]
%TC:macro \citep [option:text,text]
%TC:macro \citet [option:text,text]
%TC:envir table 0 1
%TC:envir table* 0 1
%TC:envir tabular [ignore] word
%TC:envir displaymath 0 word
%TC:envir math 0 word
%TC:envir comment 0 0
Make sure to write math formulae delimited by \begin{math} \end{math} for in-line math and \begin{displaymath} \end{displaymath} for equations. Do not use dollar signs or \[ \]; these will result in Overleaf not counting math tokens (unlike Word and pdfbox) and thus underestimate your word count.
The above flags will ensure that in-text citations, tables, and math formulae will be counted but that comments will be ignored.
The above flags do not cover more advanced LaTeX environments, so if authors use such environments, they should interpret the Overleaf word count with care (then again, if authors know how to work with such environments it is very reasonable to assume that they also know how to work with command-line tools such as pdfbox).
Authors relying on Overleaf word count should be advised that the submission chairs will not have access to the source files and cannot re-run or verify any counting mechanism done by the submitting authors. To provide a fair treatment across all submission types, only the approved tools mentioned above will be used for word count. That said, submission chairs will operate under a bona fide assumption when it comes to extreme borderline cases.
Papers in either format may not use figures to render text in ways that work around the word count limit; papers abusing figures in this way will be desk-rejected.
A paper under the word count limit with either of the above approved tools is acceptable. The submissions chairs will evaluate each submission using the procedures above, notify the PC chairs of papers exceeding the limit, and desk-reject any papers that do.
We expect papers to vary in word count. Abstracts may vary in length, less than 300 words is a good guideline for conciseness. Submission length should be commensurate with its contributions; we expect most papers to be less than 9,000 words according to the rules above, though some may use up to the limit in order to convey details authors deem necessary to evaluate the work. Papers may be judged as too long if they are repetitive, verbose, violate formatting rules, or use figures to save on word count. Papers may be judged as too short if they omit critical details or ignore relevant prior work. See the reviewer training for more on how reviewers will be instructed to assess conciseness.
All of the procedures above, and the TAPS workflow, will likely undergo continued iteration in partnership with ACM, the ICER Steering Committee, and the SIGCSE board. Notify the chairs of questions, edge cases, and other concerns to help improve this new workflow.
Clarifications Prior to Review
Sometimes, reviewers wish for answers to clarifying questions prior to recommending a decision on a paper. In cases where such questions arise during the committee's discussion period, the PC chairs will send concrete clarification questions to the authors. Authors will have 72 hours within which to submit written responses (through HotCRP); the reviewers, Senior Program Committee, and PC chairs will consider these responses while making recommendations and decisions on papers.
Only submissions for which the committee has clarifying questions will receive them. Many papers will be accepted or rejected without the need for such questions. The clarification round is NOT a rebuttal period: authors will receive only specific questions—not full reviews—as part of the clarification round.
Acceptance and Conditional Acceptance
All papers recommended for acceptance after the Senior PC meetings are either accepted or conditionally accepted. For accepted papers, there is no resubmission required other than the final camera-ready version. For conditionally-accepted papers, meta-reviews will indicate one or more minor revisions that are necessary for final acceptance; authors are responsible for submitting these revisions to HotCRP prior to the "Conditional Accept revisions due" deadline in the Call for Papers. The Senior PC and Program Chairs will review the final revisions; if they are acceptable, the paper will be officially accepted, and authors will have one week to submit an approved camera-ready version to TAPS for publication. If the Senior PC and Program Chairs judge that the request for revisions were not suitably addressed, the paper will be rejected.
Because the turnaround time for conditional acceptance is only one week, requested revisions will necessarily be minor: they may include presentation issues or requests for added clarity or details helpful for future readers of the archived paper. New results, new methodological details that change the interpretation of the results, or other substantially new content will neither be asked for nor allowed to be added.
Conditional Acceptance is independent of the clarification round, though some authors who receive clarifying questions may be asked to address them during the conditional acceptance period.
After a paper has been accepted and uploaded into the ACM Digital Library, authors will receive an invitation from Kudos to create an account and add plain-language text into Kudos on its platform. The Kudos "Shareable PDF" integration with ACM will then allow an author to generate a PDF to upload to websites, such as author homepages, institutional repositories, and preprint services, such as ArXiv. This PDF contains the author's plain-text summary of the paper as well as a link to the full-text version of an article in the ACM Digital Library, adding to the DL download and citation counts there, as well as adding views from other platforms to the author's Kudos dashboard.
Using Kudos is entirely optional. Authors may also use the other ACM copyright options to share their work (retaining copyright, paying for open access, etc.).
If you are reading this page, you are probably considering submitting to ICER. Congratulations! We are excited to review your work. Whether your research is just starting or nearly finished, this guide is intended to help authors meet the expectations of the computing education research community. It reflects a community-wide perspective on what constitutes rigorous research on the teaching and learning of computing.
Read on for our community's current guidelines, and if you like, read our reviewer guidelines to understand our review process and review criteria.
What's in scope at ICER?
ICER's goal is to be an inclusive conference, both with respect to epistemology (how we know we know things) and with respect to phenomena (who is learning and in what context). Therefore, any research related to the teaching and learning of computing is in scope, using any definition of computing, and using any methods. We particularly encourage work that goes beyond the community's past focus on introductory programming courses in post-secondary education: such as work on primary and secondary education, work on more advanced computing concepts, informal learning in any setting or learning amongst adults. (However, note that simply using computing technology to perform research in an educational setting is not in itself enough, the focus must be on the teaching or learning of computing topics.) If you have not seen a particular topic published on a topic at ICER, or you have not seen a particular method be used, that is okay. We value new topics, new methods, new perspectives, and new ideas, just as much as more broadly accepted ones.
That said, under the current review process, we cannot promise that we have recruited all the necessary expertise to our program committee to fairly review your work. Check who is on the program committee this year, and if you do not see a lot of expertise on your methods or phenomena, make sure your submission spends a bit of extra time explaining theories or methods that reviewers are unlikely to know. If you have any questions regarding this, email the program chairs ([email protected]).
Note that we used the word "research" above. Research is hard to define, but we can say that ICER is not a place to submit practical descriptions of courses, curriculum, or instruction materials you want to share. If you're looking to share your experiences at a conference, consider submitting to the SIGCSE Technical Symposium's Experience Report or Position and Curricula Initiatives tracks. Research, in contrast, should meet the criteria presented throughout this document.
What makes a good computing education research paper?
It's impossible to anticipate every kind of paper that might be submitted. The current ICER review criteria are listed in the reviewer guidelines. These will evolve over time as the community grows. There are many other criteria that reviews could discuss in relation to specific types of research contributions, but the criteria listed there are generally inclusive to many epistemologies and contribution types. This includes empirical studies that answer research questions, replicate prior results, or present negative research results as well as other, non-empirical types of research that provide novel or deepened insights into the teaching and learning of computer science content.
What prior work should be cited?
As with any research work, your submission should cite all significant publications that are relevant to your research questions. With respect to ICER submissions, this may include not only work that has been published in ACM-affiliated venues like ICER, ITiCSE, SIGCSE, Koli Calling, but also the wide range of conferences and journals in the learning sciences, education, educational psychology, HCI, and software engineering. If you are new to research, consider guides on study design and surveys of prior work like the 2019 Cambridge Handbook of Computing Education Research, which attempts to survey most of what we know about computing education up to 2018.
Papers will be judged on how adequately they are grounded in prior work published across academia. They will also be assessed regarding their accuracy of citing related work: read what you cite closely and ensure the discoveries in published work are supporting your claims; many of the authors of the works you are likely to cite are members of the computing education research community and may be your reviewers. Finally, papers will also be expected to return to prior work in a discussion of a paper's contributions. All papers should explain how the paper's contributions advance upon prior work, cause us to reinterpret prior work, or reveal conflicts with prior work.
How might theory be used?
Different disciplines across academia vary greatly on how they use and develop theory. At the moment, the position of the community is that theory can be a useful tool for framing research, connecting it to prior work, and interpreting findings. Papers can also contribute new theories, or refine them. However, it may also be possible for papers to be atheoretical, discovering interesting new relationships or interventions that cannot yet be explained. All of these uses of theory are appropriate.
It is also possible to misuse theory. Sometimes the theories used are too general for a question, where a theory more specific to computing education might be appropriate. In other cases, a theory might be wrongly applied to some phenomena, or a paper might use a theory that has been discredited. Be careful when using theory to understand its history, its body of evidence in support of and against its claims, and its scope of relevance.
Note that our community has discussed the role of theory multiple times, and that conversations about how to use theory are evolving:
Nelson and Ko (2018) argued that there are tensions between expectations of theory building and innovative exploration of design ideas, and that our field's theory building should focus on theories specific to computing education.
Malmi et al. (2019) found that while computing education researchers have widely cited many dozens of unique theoretical ideas about learning, behavior, beliefs, and other phenomena, the use of theory in the field remains somewhat shallow.
Kafai et al. (2019) argued that there are many types of theories, and that we should more deeply leverage their explanatory potential, especially theories about the sociocultural and societal factors at play in computing education, not just the cognitive factors.
In addition to using theories when appropriate, ICER encourages the contribution of new theories. There is not a community-level consensus on what constitutes a good theory contribution, but there are examples you might learn from. Papers proposing a new theoretical model should consider including concrete examples of said model.
How should educational contexts be described?
If you're reporting empirical work in a specific education context or set of contexts, it is important to remember that our research community is global, and that education systems across the world are structured differently. This is of particular importance when describing research that took place in primary and secondary schools. Keep in mind that not all readers can be familiar with your educational context. Describe the structure of the educational system. Define terminology related to your education system. Characterize who is teaching, and what prior knowledge and preparation they have. When describing learners, at a minimum, describe their gender, race, ethnicity, age, level in school, and prior knowledge (assuming collecting and publishing this type of data is legal in the context in which the study was conducted, see also the ACM Publications Policy on Research Involving Human Participants and Subjects). Include information about other structural factors that might affect how the results are interpreted, including whether courses are required or elective, what incentives students have to enrol in courses, how students in courses vary. For authors in the United States, common terminology to avoid include "elementary school", "middle school", "high school", and "college", which do not have well-defined meanings elsewhere. Use the more common globally inclusive phrases "primary", "secondary", and "post-secondary". Given the broad spectrum of, e.g., introductory computing courses that run under the umbrella of "CS1", make sure to provide enough information on the course content rather than relying on an assumed shared understanding.
What details should we report about our methods?
ICER values a wide range of methods of all kinds, including quantitative, qualitative, design, argumentation, and more. It is critical to describe your methods in detail, both so that reviewers and readers can understand how you arrived at your conclusions, and so they can evaluate the appropriateness of your methods both to the work and, for readers, to their own contexts.
Some contributions might benefit from following the Center for Open Science's recommendations to ensure replicable, transparent science. These include practices such as:
Data should be posted to a trusted repository.
Data in that repository is properly cited in the paper.
Any code used for analysis is posted to a trusted repository.
Results are independently reproduced.
Materials used for the study are posted to a trusted repository.
Studies and their analysis plans are pre-registered prior to being conducted.
Our community is quite far from adopting any of these standards as expectations. Additionally, pursuing many of these goals might impose significant barriers to conducting research ethically, as educational data can often not be sufficiently anonymized to prevent disclosing identity. Therefore, these supplementary materials are not required for review, but we encourage you to include them where feasible and ethical.
The ACM has adopted a new policy on Research Involving Human Participants and Subjects that requires research to be conducted in accordance with ethical and legal standards. In accordance with the policy, your methods description should briefly describe how these standards were met. This can be as simple as a sentence that your study design was reviewed by a local review board (IRB), or a few sentences with key details if you engaged with human subjects and an IRB review was not appropriate to your context or work. Read the ACM policy for additional details.
How should we report statistics?
The world is moving beyond p-values, but computing education, like most of academia, still relies on them. When reporting the results of statistical hypothesis tests, it is critical to report:
The test used
The rationale for choosing the test, including a discussion of the data characteristics that allowed this test to be used
The test statistic computed
The actual p-value (not just whether it was greater than or less than an arbitrary threshold)
An effect size and its confidence intervals.
Effect sizes are especially relevant, as they indicate the extent to which something impacts or explains some phenomena in computing education; small effect sizes might not be that significant to learning. The above data should be reported regardless of whether a hypothesis test was significant. Chapters that introduce statistical methods can be found in the Cambridge Handbook of Computing Education Research.
Do not assume that reviewers or future readers have a deep understanding of statistical methods (although they might). If you're using more advanced or non-standard techniques, justify them in detail, so that the reviewers and future readers understand your choice of methods. We recognize that length limits might prevent a detailed explanation of methods for entirely unfamiliar readers; reviewers are expected to not criticize papers for excluding extensive explanations when there was not space to include them.
How should we report on qualitative methods?
Best practices in other fields for addressing the reliability of qualitative methods suggest providing detailed arguments and rationale for qualitative approaches and analyses. Some fields that rely on qualitative methods have moved toward a recoverability criterion, which like replicability in quantitative methods, aims to ensure a study's core methods are available for inspection and interpretation; however, recoverability does not imply repeatability, as qualitative methods rely on interpretation, which may not be repeatable.
When qualitative data is counted and used for quantitative methods, authors should report on the inter-rater reliability (IRR) of the qualitative judgements underlying those counts. There are many ways of calculating inter-rater reliability, each with tradeoffs. However, note that IRR analysis is not ubiquitous across social sciences, and not always appropriate; authors should make a clear soundness argument for why it was or was not performed.
Another challenge in reporting qualitative results is that they require more space in a paper; an abundance of quotes, after all, may take considerably more space than a table full of aggregate statistics. Be careful to provide enough evidence of your claims, while being mindful with your use of space.
What makes a good abstract?
A good abstract should summarize the question your paper asks and what answers it found. It is not enough to just say "We discuss our results and their implications"; say what you actually discovered, so future readers can learn that from your summary.
If your paper is empirical in nature, ICER recommends (but does not require) using a structured abstract that contains the following sections, each 1-2 sentences:
Background and Context. What is the problem space you are working in? Which phenomena are you considering and why are they relevant and important for an ICER audience?
Objectives. What research questions were you trying to answer?
Method. What did you do to answer your research questions?
Findings. What did you discover? Both positive and negative results should be summarized.
Implications. What implications does your discovery have on prior and future research, and on the practice of computing education?
Not all papers may fit this structure, but if yours does, it will greatly help reviewers and future readers understand your paper's research design and contribution.
What counts as plagiarism?
Read ACM's policy on Plagiarism, Misrepresentation, and Falsification; these criteria will be applied during review. In particular, attention will be paid to avoiding redundant publication.
Who should be an author on my paper?
ICER follows ACM's Authorship Policy and Publications Policy on the Withdrawal, Correction, Retraction, and Removal of Works from ACM Publications and ACM DL. These state that any person listed as an author on a paper must (1) have made substantial contributions to the work, (2) have participated in drafting/revising the paper, (3) be aware that the paper has been submitted, and (4) agree to be held accountable for the content of the paper. Note that this policy allows enforcement of plagiarism sanctions, but it could impact people who work in large, collaborative research groups, and on postgraduate advisors who have not contributed directly to a paper.
Must submissions be in English?
At the moment, yes. Our reviewing community's only lingua franca is English, and any other language would greatly limit the pool of expert reviewers to evaluate your work. We recognize that this is a challenging barrier for many authors globally, and that it greatly limits the diversity of voices in global discourse on computing education. Therefore, we wish to express our support of other computing education conferences around the world that you might consider submitting papers to. To mitigate this somewhat, papers will not be penalized for minor English spelling and grammar errors that can easily be corrected with minor revisions.
American Educational Research Association. (2006). Standards for reporting on empirical social science research in AERA publications. Educational Researcher, 35(6), 33–40. http://edr.sagepub.com/content/35/6/33.full.pdf+html.
Decker, A,, McGill, M. M., & Settle, A (2016). Towards a Common Framework for Evaluating Computing Outreach Activities. In Proceedings of the 47th ACM Technical Symposium on Computing Science Education (SIGCSE '16). ACM, New York, NY, USA, 627-632. DOI: https://doi.org/10.1145/2839509.2844567.
Fincher, S. A., & Robins, A. V. (Eds.). (2019). The Cambridge Handbook of Computing Education Research. Cambridge University Press. DOI: https://dx.doi.org/10.1017/9781108654555.
Petre, M., Sanders, K., McCartney, R., Ahmadzadeh, M., Connolly, C., Hamouda, S., Harrington, B., Lumbroso, J., Maguire, J., Malmi, L., McGill, M.M., Vahrenhold, J. (2020). Mapping the Landscape of Peer Review in Computing Education Research, In: ITiCSE-WGR '20: Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education, ACM. New York, NY, USA, 173–209. DOI: https://doi.org/10.1145/3437800.3439207.
ICER 2022 Review Process and Guidelines
Version 1.0 - February 6, 2022
Jan Vahrenhold & Kathi Fisler, ICER 2022 Program Co-Chairs
This document is a living document intended to capture the reviewing policies of the ICER community. Please email the Program Co-Chairs at [email protected] with comments or questions; all will be taken into account when updating this document for ICER 2023. To obtain this page as a single, accessible PDF document, click here.
Based on the ICER 2020/20221 Reviewing Guidelines (Amy Ko & Anthony Robins & Jan Vahrenhold) as well as the ICSE 2022 Reviewing Guidelines (Daniela Damian & Andreas Zeller). We are thankful for the input on these earlier documents provided by members of the ICER community.
Goals of the ICER Reviewing Process
Submission System
Roles in the Review Process
Principles Behind ICER Reviewing
The Reviewing Process
Review Criteria
Possible Plagiarism, Misrepresentation, and Falsification
Practical Suggestions for Writing Reviews
1. Goals of the ICER Reviewing Process
The ICER Reviewing Process as outlined in this document is designed to support reaching the following goals:
Accept high quality papers
Give clear feedback to papers of insufficient quality
Evaluate papers consistently
Provide transparency in the review process
Embrace diversity of perspectives, but work in an inclusive, safe, collegial environment
Drive decisions by consensus among reviewers
Strive for manageable workload for PC members
Do our best on all of the above
2. Action Items
Prior to continuing to read this document, please do the following:
Read the call for papers at https://icer2022.acm.org/track/icer-2022-papers. This is the ground truth for scope and submission requirements. We expect you to account for these in your reviews.
Read the author guidelines at https://icer2022.acm.org/track/icer-2022-papers#Author-Guidelines. We expect your reviews and meta-reviews to be consistent with these guidelines. After having read this document, please block off a number of time slots in your calendar:
[Reviewers and Meta-Reviewers:] Saturday, March 19, 2022 through Friday, March 25, 2022: Reserve at least two hours to read all abstracts and bid for papers to review (see Step 2: Reviewers and Meta-Reviewers Bid for Papers).
[Reviewers:] Friday, April 1, 2022 through Friday, April 29, 2022: Reserve enough time to review 5-6 9-10 papers (see Step 6a: Reviewers Review Papers). In general, it is highly recommended to spread the reviews over the full four weeks instead to trying to write them just in time. Notify the PC chairs immediately in case of emergencies that might prevent you from submitting reviews by the deadline.
[Reviewers and Meta-Reviewers:] Saturday, April 29, 2022 through Friday, May 6, 2022: Reserve one one-hour slot during the weekend and 20-minutes slot each day of the week to log into HotCRP, read the other reviews, check on the discussion status of each of your papers, and comment where appropriate (see Step 7: Reviewers and Meta-Reviewers Discuss Reviews).
[Meta-Reviewers:] Saturday, April 29, 2022 through Wednesday, May 11, 2022: Reserve three hours in total to you to prepare (and update, as necessary) the meta-reviews for your assigned papers (see Step 8: Meta-Reviewers Write Meta-Reviews).
[Meta-Reviewers:] Wednesday, May 18, 2022 through Friday, May 20, 2022: Reserve two two-hour slots for synchronous SPC meetings (see Step 9: PC Chairs and Meta-Reviewers Discuss Papers; the PC chairs will be reaching out to schedule these meetings).
[Meta-Reviewers:] Wednesday, June 1, 2022 through Sunday, June 5, 2022: Reserve two hours for checking any "conditional accept" revisions that may affect your papers (see Step 13: Meta-Reviewers Check Revised Papers).
If you are new to reviewing in the Computing Education Research community, the following ITiCSE Working Group Report may serve as an introduction:
Petre M, Sanders K, McCartney R, Ahmadzadeh M, Connolly C, Hamouda S, Harrington B, Lumbroso J, Maguire J, Malmi L, McGill MM, Vahrenhold J. 2020. "Mapping the Landscape of Peer Review in Computing Education Research." In ITiCSE-WGR '20: Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education, edited by Rößling G, Krogstie B, 173-209. New York, NY: ACM Press. doi: 10.1145/3437800.3439207.
3. Submission System
ICER 2022 uses the HotCRP platform for its reviewing process. If you are unfamiliar with this, you will find a basic tutorial below. But first, make sure you can sign in, then bookmark it: http://icer2022.hotcrp.com If you have trouble signing in, or you need help with anything, contact James Prather [email protected] and Dastyni Loksa [email protected], the ICER 2022 submission chairs, for help. Make sure that you can log in to HotCRP and that your name and other metadata is correct. Check that emails from HotCRP are not marked as spam and that HotCRP email notifications are enabled.
4. Roles in the Review Process
Program Committee (PC) Chairs
Each year there are two program committee co-chairs. The PC chairs are solicited by the ICER steering committee and appointed by the SIGCSE board to serve a two-year term. One new appointment is made each year so that in any given year there is always a continuing program chair from the prior year and a new program chair. Appointment criteria include prior attendance and publication at ICER, past service on the ICER Program Committee, research excellence in Computing Education, collaborative and organizational skills to share oversight of the program selection process. The ICER Steering Committee solicits and selects candidates for future PC chairs.
Program Committee (PC) Members / Reviewers
PC members write reviews of submissions, evaluating them against the review criteria. The PC chairs invite and appoint the reviewers. The committee is sized so that each reviewer will serve for 5-6 paper submissions, or more depending on the size of the submissions pool. Each reviewer will serve a one-year term, with no limits on reappointment. Appointment criteria include expertise in relevant areas of computing education research and past reviewing experience in computing education research venues. Together, all reviewers constitute the program committee (PC). The PC chairs are responsible for inviting returning and new members of the PC, keeping in mind the various forms of diversity that are present at ICER.
Senior Program Committee Members (SPC) / Meta-Reviewers
SPC members review the PC members' reviews, ensuring that the review content is constructive and aligned with the review criteria, as well as summarizing reviews and making recommendations for a paper's acceptance and rejection. They also moderate discussions about each paper and provide feedback on reviews if necessary, asking reviewers to improve the quality of reviews. Finally, they participate in a synchronous SPC meeting to make final recommendations about each paper, and review authors' minor revisions. The PC chairs invite and appoint Senior PC members, with the approval of the steering committee, again, keeping in mind the various forms of diversity that are present at ICER. Each Senior PC member can be appointed for up to three years in a row; after a hiatus of at least one year, preferably two years, re-appointment is possible. The committee is sized so that each meta-reviewer will handle 8-10 papers, depending on the submission pool.
5. Principles Behind ICER Reviewing
The ICER review process is designed to work towards these goals:
Maximize the alignment between a paper and expertise required to review it.
Minimize conflicts of interests and promoting trust in the process.
Maximize our community's ability to make excellent, rigorous, trustworthy contributions to the science of computing education.
The call for papers and author guide should make this clear, but ICER is broadly scoped. The conference publishes research on teaching and learning of computer science content that happens in any context. In consequence, reviewers should not downgrade papers for being about a topic they personally perceive to be less important to computing education. If the work is sufficiently ready for publication and reviewers believe it is of interest to some part of the computing education community, it should be published such that the community can decide its importance over time.
6. Conflicts of Interest
ICER takes conflicts of interest, both real and perceived, quite seriously. The conference adheres to the ACM conflict of interest policy (https://www.acm.org/publications/policies/conflict-of-interest) as well as the SIGCSE conflict of interest policy (https://sigcse.org/policies/COI.html). These state that a paper submitted to the ICER conference is a conflict of interest for an individual if at least one of the following is true:
The individual is a co-author of the paper
A student of the individual is a co-author of the paper
The individual identifies the paper as a conflict of interest, i.e., that the individual does not believe that he or she can provide an impartial evaluation of the paper.
The following policies apply to conference organizers:
The chairs of any track are not allowed to submit to that track.
All other conference organizers are allowed to submit to any track.
All reviewers (PC members) and meta-reviewers (SPC members) are allowed to submit to any track.
No reviewer, meta-reviewer, or chair with a conflict of interest in the paper will be included in any evaluation, discussion, or decision about the paper. It is the responsibility of the reviewers, meta-reviewers, and chairs to declare their conflicts of interest throughout the process. The corresponding actions are outlined below for each relevant step of the reviewing process. It is the responsibility of the chairs to ensure that no reviewer or meta-reviewer is assigned a role in the review process for any paper for which they have a conflict of interest.
7. The Reviewing Process
Step 1: Authors Submit Abstracts
Authors will submit a title and abstract one week prior to assigning papers. Authors are allowed to revise their title and abstract before the full paper submission deadline.
Step 2: Reviewers and Meta-Reviewers Bid for Papers
Reviewers and meta-reviewers will be asked to bid on papers for which they have sufficient expertise–in both phenomena and methods–and then the PC chairs will assign papers based on these bids. The purpose of bidding is not to express interest in papers you want to read. It is to express your expertise and eligibility for fairly evaluating the work. These are subtly but importantly different purposes.
Specify all of your conflicts of interest. Conflicts are any situation where you have any connection with a submission that is in tension with your role as an independent reviewer (you advised an author, you have collaborated with an author, you are at the same institution, you are close friends, etc.). After declaring conflicts, you will be excluded from all future evaluation, discussion, and decisions of that paper. Program chairs and submissions chairs will also specify conflicts of interest at this time.
Bid on all of the papers you believe you have sufficient expertise to review. Sufficient expertise includes knowledge of research methods used and prior research on the phenomena. Practical knowledge of a topic is helpful, but insufficient.
Do not bid on papers about topics, techniques, or methods that you strongly oppose. That precludes authors from being fairly reviewed by authors with negative bias; see below for positive biases and how to control for them.
Step 3: Authors Submit Papers
Submissions are due one week after the abstracts are due. As you read in the submission instructions (https://icer2022.acm.org/track/icer-2022-papers#Submission-Instructions), submissions are supposed to be sufficiently anonymous that a reader cannot determine the identity or affiliation of the authors. The main purpose of ICER's anonymous reviewing process is to reduce the influence of potential (positive or negative) biases on reviewers' assessments. You should be able to review the work without knowing the authors or their affiliations. Do not try to find out the identity of authors. (Most guesses will be wrong anyway.) See the submission instructions for what constitutes sufficient anonymization. When in doubt, write the PC chairs for clarity at [email protected].
Step 4: PC Chairs Decide on Desk-Rejects
The PC chairs, with the help of the submissions chairs, will review each submission for papers that violate anonymization requirements, length restrictions, or plagiarism policies. Authors of desk rejected papers will be notified immediately. The PC chairs may not catch every issue. If you see something during review that you believe should be desk rejected, contact the chairs before you write a review; the PC chairs will make the final judgement about whether something is a violation, and give you guidance on whether and if so how to write a review.
Managing Conflicts of Interest
PC chairs with conflicts are excluded from deciding on desk rejected papers, leaving the decision to the other program chair.
Step 5: PC Chairs Assign Reviewers
Based on the bids and their judgement, the PC chairs will collaboratively assign at least three reviewers (PC members) and one meta-reviewer (SPC member) for each submission. The PC chairs will be advised by HotCRP's assignment algorithm, which depends on all bids being high quality. Remember, for these assignments to be fair and good, your bids should only be based on your expertise and eligibility. Interest alone is not sufficient for bidding on a paper. The chairs will review the algorithm's assignments to identify potential misalignments with expertise. Managing Conflicts of Interest PC chairs with conflicts are excluded from assigning reviewers to any papers for which they have a conflict. Assignments in HotCRP can only be made by a PC chair without a conflict.
Step 6a: Reviewers Review Papers
Assigned reviewers submit their anonymous reviews through HotCRP by the review deadline, evaluating each of their papers against the review criteria (see Review Criteria). The time allocated for reviews is four weeks in which 5-6 reviews need to be written. Due to the internal and external (publication) deadlines, there cannot be any extensions.
Reviewers, meta-reviewers, and PC chairs with conflicts cannot see any of the reviews of the papers they are conflicted on during this process.
Step 6b: Meta-Reviewers and PC Chairs Monitor Progress
Meta-reviewers and PC chairs will periodically check in to ensure that progress is being made.
Step 7: Reviewers and Meta-Reviewers Discuss Reviews
After the reviewing period, the assigned meta-reviewer asks the reviewers to read the other reviewers' reviews and begin a discussion about any disagreements that arise. All reviewers are asked to do the following:
Read all the reviews of all papers assigned (and re-read your own reviews).
Engage in a discussion about sources of disagreement.
Use the review criteria to guide your discussions.
Be polite, friendly, and constructive at all times.
Be responsive and react as soon as new information comes in.
Remain open to other reviewers shifting your judgements.
If your judgement does shift, update your review to reflect your new views. There is no need to indicate to the authors that you changed your review but do leave a comment for the other reviewers and the meta-reviewer indicating what you changed and why (HotCRP does not track changes). Discussing a paper is not about who wins or who is right. It is about how, in the light of all information, a group of reviewers can find the best decision on a paper. All reviewers (and the authors!) have their unique perspective and competence. It is perfectly normal that they may have seen things you have not, just as you may have seen things they have not. The important thing is to accept that the group will see more than the individual. Therefore, you can always (and are encouraged to!) shift your stance in light of the extra knowledge. The time allocated for this discussion is one week. As discussions about disagreeing reviews may take several (asynchronous) rounds, it is important to check in daily to see whether any new discussion items warrant attention. PC chairs will periodically check in. If you have configured HotCRP notifications correctly, you will be notified as soon as new information (another review or a new discussion item) about your paper comes in. It is important that you react to these, and as soon as possible. Do not let your colleagues wait for days when all that is needed is some short statement from your side.
Reviewers, meta-reviewers, and PC chairs with conflicts cannot see any of the discussions of the papers they are conflicted on during this process.
Step 8: Meta-Reviewers Write Meta-Reviews
After the discussion phase, meta-reviewers use the reviews, the discussion, and their own evaluation of the work to write a meta-review and recommendation. A meta-review should summarize the key strengths and weaknesses of the paper, in light of the review criteria, and explain how these led to the decision. The summary and explanation should help the authors in revising their work where appropriate. A generic meta-review ("After long discussion, the reviewers decided that the paper is not up to ICER standards, and therefore rejected the paper") is not sufficient. There are four possible meta-review recommendations: reject, discuss, conditional accept, and accept. The recommendation needs to be entered in the meta-review.
Reject. Ensure that the meta-review constructively summarizes the reviews and the rationale for rejection. The PC chairs will review all meta-reviews to ensure that reviews are constructive, and may request meta-reviewers to revise their meta-reviews as necessary. The PC chairs will make the final rejection decision based on the meta-review rationale; if necessary, this paper will be discussed at the SPC meeting.
Discuss. Ensure that the meta-review summarizes the open questions that need to be resolved at the SPC meeting discussion, where the paper will either be recommended as reject, conditional accept, or accept. Papers marked discussed will be scheduled for discussion at the SPC meeting. All papers for which the opinion of the meta-reviewer and the majority of reviewer recommendations do not align should be marked "discuss" as well.
Conditional Accept. Ensure that the meta-review explicitly and clearly states the conditions that must be met with minor revisions before the paper can be accepted. To accept with conditions, the conditions must be feasible to make within the one-week revision period, so they must be minor. The PC chairs will make the final decision on whether the requested revisions are minor enough to warrant conditional acceptance; if necessary, this paper will be discussed at the SPC meeting.
Accept. These papers will be accepted, assuming authors deanonymize the paper and meet the final version deadline. For technical reasons, "accept" recommendations are recorded internally as "conditional accept" recommendations that do not state any conditions for acceptance other than submitting the final version. The PC chairs will make the final acceptance decision based on the meta-review rationale; if necessary, this paper will be discussed at the SPC meeting.
Reviewers, meta-reviewers, and PC chairs with conflicts cannot see any of the recommendations or meta-reviews of the papers they are conflicted on during this process.
Step 9: PC Chairs and Meta-Reviewers Discuss Papers
The PC chairs will host synchronous SPC meetings with all available meta-reviewers (SPC members) to discuss and decide on all "discuss" and "conditional accept" papers. Before this meeting, a second meta-reviewer will be assigned to each such paper, ensuring that there are at least two meta-reviewers to facilitate discussion. Each meta-reviewer assigned to a paper should come prepared to present the paper, its reviews, and the HotCRP discussion. Each meta-reviewer's job is to present their recommendation, and/or if they requested discussion, present the uncertainty that prevents them from making one. All meta-reviewers who are available to attend a SPC meeting session should, at a minimum, skim each of the papers to be discussed and their reviews (excluding those for which they are conflicted), so they are familiar with the papers and their reviews prior to the discussions. At the meeting, the goal is to collectively reach consensus, rather than relying on the PC chairs alone to make final decisions. Papers may move from "discuss" to either "reject", "conditional accept", or "accept"; if there are conditions, they must be approved by a majority of the non-conflicted SPC and PC chairs at the discussion. After a decision is made in each case, the original SPC member will add a summary of the discussion at the end of their meta-review, explaining the rationale for the final decision, as well as any conditions for acceptance, and updating the recommendation tag in HotCRP.
Meta-reviewers conflicted on a paper will not be assigned as a second reader. Any meta-reviewer or PC chair conflicted on a paper will be excluded from the paper's discussion, returning after the discussion is over.
Step 10: PC Chair Review
Before announcing decisions, the non-conflicted PC chairs will review all meta-reviews to ensure as much clarity and consistency with the review process and its criteria as possible.
PC chairs cannot change the outcome of an accept or reject decision after the SPC meeting.
Step 11: Notifications
After the SPC meeting, the PC chairs will notify all authors of the decisions about their papers; these notification will be via email through HotCRP. Papers that are (unconditionally) accepted will be encouraged to make any changes that may have been suggested but not required; papers that are conditionally accepted will be reminded of the revision evaluation deadline.
Step 12: Authors of Conditionally Accepted Papers Revise their Papers
Authors of conditionally accepted papers have one week to incorporate the requested revisions and to submit their final versions for review by the assigned meta-reviewer.
Step 13: Meta-Reviewers Check Revised Papers
Meta-reviewers will check the revised papers against the required revisions. Based on the outcome of this, they will change their recommendation to either "accept" or "reject" and will update their meta-reviews to reflect this.
PC chairs will sanity-check all comments on those papers for which revisions were submitted. Conditionally accepted papers for which not revisions were received will be marked as "reject". PC chairs then finalize decisions. After this review, all recommendations will be converted to official accept or reject decisions in HotCRP and authors will be notified of these final decisions via email sent through HotCRP. Authors will then have one week to submit to ACM TAPS for final publication.
Reviewers, meta-reviewers, and PC chairs with conflicts cannot see any of the recommendations or meta-reviews of the papers they are conflicted on during this process. PC chairs with conflicts cannot see or edit any final decision on these papers.
8. Review Criteria
ICER currently evaluates papers against the following reviewing criteria, as independently as possible. These have been carefully chosen to be inclusive to many phenomena, epistemologies, and contribution types.
Criterion A: The submission is grounded in relevant prior work and leverages available theory when appropriate.
Criterion B: The submission describes its methods and/or innovations sufficiently for others to understand how data was obtained, analyzed, and interpreted, or how an innovation works.
Criterion C: The submission's methods and/or innovations soundly address its research questions.
Criterion D: The submission advances knowledge of computing education by addressing (possibly novel) questions that are of interest to the computing education community.
Criterion E: Discussion of results clearly summarizes the submission's contributions beyond prior work and its implications for research and practice.
Criterion F: The submission is written clearly enough to publish.
To be published at ICER, papers should be positively evaluated on all of these. The summary of this is another criterion:
Summary: Based on the criteria above, this paper should be published at ICER.
Below, we discuss each criterion in turn.
Papers should draw on relevant prior work and theories, and explicitly show how they are tied to the questions addressed. After reading the paper, one should feel more informed about prior literature and how that literature is related to the paper's contributions. Such coverage of related work might come before a work's contributions, or it might come after (e.g, connecting a new theory derived from observations to prior work. Note that not all types of research will have relevant theory to discuss, nor do all contribution types need theory to make significant advances. For example, a surprisingly robust but unexplained correlation might be an important discovery that later work could develop theory to explain. Reviewers should identify related work the authors might have missed and include pointers. Missing a paper that is relevant, but would not dramatically change the paper, is not sufficient grounds for rejecting a paper. Such citations can be added upon reviewers' request prior to publication. Instead, criticism in reviews that leads to downgrading a paper should focus on missing prior work or theories that would significantly alter research questions, analysis, or interpretation of results.
Guidelines for (Meta-)Reviewers
Since prior work and theories needs to be covered sufficiently and in a meaningful way but not necessarily completely, (meta-)reviewers are asked to do the following:
Refrain from downgrading work based on missing one or two peripherally related papers. Just note them, helping the authors to broaden their citations.
Refrain from downgrading work based on not citing the reviewer's own work, unless it really is objectively highly relevant.
Refrain from downgrading work based on where in a paper they address prior work. Sometimes a dedicated section is appropriate, sometimes it is not. Sometimes prior work is better addressed at the end of a paper, not at the beginning.
Make sure to critically note if work simply lists papers without meaningfully addressing their relevance to the paper's questions or innovations.
Refrain from downgrading work based on making discoveries inconsistent with theory. The point of empirical work is to test and refine theories, not conform to them.
Refrain from downgrading work based on not building upon theory when there is no sufficient theory available that can be pointed out in the review. Conversely, if there is a missing and relevant theory, it should be named.
Refrain from downgrading work based on not using the reviewer's interpretation of a theory. Many theories have multiple competing interpretations and multiple distinct facets that can be seen from multiple perspectives.
An ICER paper should be self-contained in the sense that readers should be able to understand most of the key details about how the authors conducted their work or made their innovation possible. This is key for replication and meta-analysis of studies that come from positivist or post-positivist epistemologies. For interpretivist works, it is also key for what Checkland and Howell called "recoverability" (See Tracy et al. 2010 for a detailed overview for evaluating qualitative work). Reviews thus should focus on omissions of research process or innovation details that would significantly alter your judgment of the paper's validity.
Since ICER papers have to adhere to a word count limit and since there are always more details a paper can describe about methods, (meta-)reviewers are asked to do the following:
Refrain from downgrading work based on not describing every detail.
Refrain from asking authors to write substantially new method details unless you can identify content for them to cut, or there is space to add those details within the length restrictions.
Refrain from asking authors of theory contributions for a traditional methods section; such contributions do not require them, as they are not empirical in nature.
Feel free to ask authors for minor revisions that would support replication or meta-analysis for positivist or post-positivist works, and recoverability for interpretivist works using qualitative methods.
The paper should answer the questions it poses, and it should do so with rigor, broadly construed. This is the single most important difference between research papers and other kinds of knowledge sharing in computing education (e.g., experience reports), and the source of certainty researchers can offer. Note that soundness is relative to claims. For example, if a paper claims to have provided evidence of causality, but its methods did not do that, that would be grounds for critique. But if a paper only claimed to have found a correlation, and that correlation is a notable discovery that future work could explain, downgrading it for not demonstrating causality would be inappropriate.
Since soundness is relative to claims and methods, (meta-)reviewers are asked to do the following:
Refrain from applying criteria for quantitative methods to qualitative methods (e.g., critiquing a case study for a "small N" makes no sense; that is the point of a case study).
Refrain from downgrading work based on a lack of a statistically significant difference if the study demonstrates sufficient power to detect a difference. A lack of difference can be discovery, too.
Refrain from asking for the paper to do more than it claims if the demonstrated claims are sufficiently publishable (e.g., "I would publish this if it had also demonstrated knowledge transfer").
Refrain from relying on inexpert, anecdotal judgments (e.g., "I don't know much about this but I played with it once and it didn't work").
Refrain from assuming that because a method has not been used in computing education literature that it is not standard somewhere else. The field draws upon methods from many communities. Look for evidence that the method is used elsewhere.
A paper can meet the previous criteria and still fail to advance what we know about the phenomena. It is up to the authors to convince you that the discoveries advance our knowledge in some way, e.g., by confirming uncertain prior work, adding a significant new idea, or making progress on a long-standing open question. Secondarily, there should be someone who might find the discovery interesting. It does not have to be interesting to a particular reviewer, and a particular reviewer does not have to be absolutely confident that an audience exists. As the PC cannot possibly reflect the broader audience of all readers, a probable audience is sufficient for publication.
Since advances can come in many forms, there are many criticisms that are inappropriate in isolation (if, however, many of these apply, they may justify rejection), and, thus, (meta-)reviewers are asked to do the following:
Refrain from downgrading work because another, single paper was already published on the topic. Discoveries accumulate over many papers, not just one.
Refrain from downgrading work that contributes a really new idea for not yet having everything figured out about it. Again, new discoveries may require multiple papers.
Refrain from downgrading work because the results do not appear generalizable or were only obtained at a specific institution. Many papers explicitly discuss such limitations and possible remedies. Also, generalizability takes time, and, by their very nature, some qualitative methods do not lead to generalizable results.
Refrain from downgrading work based on "only" being a replication. Replications, if done with diligence, are important.
Refrain from downgrading work based on investigating phenomena you personally do not like (e.g., "I hate object-oriented languages, this work does not matter").
It is the authors' responsibility to help interpret the significance of a paper's discoveries. If it makes significant advances, but does not explain what those advances are and why they matter, the paper is not ready for publication. That said, it is perfectly fine if you disagree with the paper's interpretations or implications. Readers will vary on what they think a discovery means or what impact it might have on the world. All that is necessary is that the work presents some reasonably sound discussion of one possible set of interpretations.
Because there is no single "right" interpretation or discussion of implications, (meta-)reviewers are asked to do the following
Refrain from downgrading work because you do not think the idea would work in your institution.
Refrain from downgrading work because you think that the impact is limited. Check the discussion of limitations and threats to validity and evaluate the paper with respect to the claims made.
Make sure to critically note if work makes interpretations that are not grounded in evidence or proposes implications that are not grounded in evidence.
Papers need to be clear and concise, both to be comprehensible to diverse audiences, but also to ensure the community is not overburdened by verboseness. We recognize that not all authors are fluent English writers; if, however, the paper requires significant editing to be comprehensible to fluent English readers, or it is unnecessarily verbose, it is not yet ready for publication.
Since submissions should be clear enough, (meta-)reviewers are asked to do the following:
Refrain from downgrading work based on having easily fixed spelling and grammar issues.
Refrain from downgrading a sufficiently clear paper because it could be clearer. All writing can be clearer in some way.
Refrain from downgrading work based on not using all of the available word count. It is okay if a paper is short but significant.
Refrain from asking for more detail unless you are certain there is space or - if there is not space - you can provide concrete suggestions for what to cut.
Based on all of the previous criteria, decide how strongly you believe the paper should be accepted or rejected, assuming authors make any modest, straightforward minor revisions you and other reviewers request before publication. Papers that meet all of the criteria should be strongly accepted (though this does not imply that they are perfect). Papers that fail to meet most of the criteria should be strongly rejected. Each paper should be reviewed independently of others, as if it were a standalone journal submission. There are no conference presentation "slots"; there is no target acceptance rate. Neither should be a factor in reviewing individual submissions.
Because each paper should be judged on its own, (meta-)reviewers are asked to do the following:
Refrain from recommending to accept a paper because it was the best in your set. It is possible that none of your papers sufficiently meet the criteria.
Refrain from recommending to reject a paper because it should not take up a "slot". The PC chairs will devise a program for however many papers sufficiently meet the criteria, whether that is 5 or 50. There is no need to preemptively design the program through your review; focus on the criteria.
9. Award Recommendations
On the review form, reviewers may signal to the meta-reviewer and PC chairs that they believe the submission should be considered for a best paper award. Selecting this option in the review form is visible to the other (meta-)reviewers as part of your review, but it is not disclosed to the authors. Reviewers should recognize papers that best illustrate the highest standards of computing education research, taking into account the quality of its questions asked, methodology, analysis, writing, and contribution to the field. This includes papers that meet all of the review criteria in exemplary ways (e.g., research that was particularly well designed, executed, and communicated), or papers that meet specific review criteria in exemplary ways (e.g., discoveries are particularly significant or sound). The meta-review form for each paper includes an option to officially nominate a paper to the Awards Committee for the best-paper award. Reviewers may flag papers for award consideration during review, but meta-reviewers are ultimately responsible for nominating papers for the best paper award. Each meta-reviewer may nominate at most two papers for the best paper award. Nominated papers may or may not have been flagged by one or more reviewers. Nominations should be recorded in HotCRP and be accompanied by a paragraph outlining the rationale for nomination. NOTE: Whether a paper has been nominated and the accompanying rationale are not disclosed to the authors as part of the meta-review.
Meta-reviewers are encouraged to review and finalize their nominations at the conclusion of the SPC meeting to allow for possible calibration. Once paper decisions have been sent, the submission chair will make PDFs and the corresponding rationales for all nominated papers available to the Awards Chair. Additionally, a list of all meta-reviewers that have handled any nominated paper or have one or more conflicts of interest with any nominated paper will be disclosed to the Awards Chair, as those members are not eligible to serve on the Awards Committee.
10. Possible Plagiarism, Misrepresentation, and Falsification
If after reading a submission, you suspect that it has in some way plagiarized from some other source, do the following:
Read the ACM guidelines on Plagiarism, Misrepresentation, and Falsification.
If you think there is a potential issue, write the PC chairs at [email protected] to escalate the potential violation, and share any information you have about the case. Authors are required to disclose any potentially overlapping work to the PC chairs upon submission.
The chairs will investigate and decide as necessary prior to the acceptance notification deadline. You should not mark the paper for rejection based on suspected plagiarism. Mark it based on the paper as it stands, while the PC chairs investigate.
11. Practical Suggestions for Writing Reviews
The following suggestions may be helpful when reviewing papers:
Before reading, remind yourself of the preceding reviewing criteria.
Read the paper, and as you do, note positive and negative aspects for each of the preceding reviewing criteria.
Use your notes to outline a review organized by the seven criteria, so authors can understand your judgments for each criterion.
Draft your review based on your outline.
Edit your review, making it as constructive and clear as possible. Even a very negative review should be respectful to the author(s), helping to educate them. Avoid comments about the author(s) themselves; focus on the document.
Based on your review, choose scores for each of the criteria.
Based on your review and scores, choose a recommendation score and decide whether to recommend the paper for consideration for a best paper award.
Thank you very much for reading this document and thank you very much for being part of the ICER reviewing process. Do not hesitate to email the Program Co-Chairs at [email protected] if you have any questions.
The University of Auckland
Kathi Fisler
Andrew Begel
Andrew Luxton-Reilly
Andrew Petersen
Arto Hellas
Barbara Ericson
R. Benjamin Shapiro
Apple, Inc. and University of Colorado, Boulder
Claudia Szabo
Colleen M. Lewis
James Prather
Juha Sorva
Kristin Searle
Lauren Margulieux
Miranda Parker
Monique Ross
Neil Brown
Quintin Cutts
University of Glasgow, UK
Sally Hamouda
Sebastian Dziallas
No members yet
|
CommonCrawl
|
Entropy production in random billiards
On entropy of $ \Phi $-irregular and $ \Phi $-level sets in maps with the shadowing property
March 2021, 41(3): 1297-1318. doi: 10.3934/dcds.2020318
Blow-up and bounded solutions for a semilinear parabolic problem in a saturable medium
Juliana Fernandes 1,, and Liliane Maia 2,
Instituto de Matemática, Universidade Federal do Rio de Janeiro, Rio de Janeiro - RJ, 21941-909, Brazil
Departamento de Matemática, Universidade de Brasília, Brasília - DF, 70910-900, Brazil
* Corresponding author: Juliana Fernandes
Received December 2019 Revised May 2020 Published August 2020
Fund Project: The first author was partially supported by FAPERJ. The second author was partially supported by FAPDF, CAPES, and CNPq grant 308378/2017 -2
The present paper is on the existence and behaviour of solutions for a class of semilinear parabolic equations, defined on a bounded smooth domain and assuming a nonlinearity asymptotically linear at infinity. The behavior of the solutions when the initial data varies in the phase space is analyzed. Global solutions are obtained, which may be bounded or blow-up in infinite time (grow-up). The main tools are the comparison principle and variational methods. In particular, the Nehari manifold is used to separate the phase space into regions of initial data where uniform boundedness or grow-up behavior of the semiflow may occur. Additionally, some attention is paid to initial data at high energy level.
Keywords: Parabolic equation, infinite time blow-up, Nehari manifold.
Mathematics Subject Classification: Primary: 35K58, 35A01; Secondary: 35B44.
Citation: Juliana Fernandes, Liliane Maia. Blow-up and bounded solutions for a semilinear parabolic problem in a saturable medium. Discrete & Continuous Dynamical Systems - A, 2021, 41 (3) : 1297-1318. doi: 10.3934/dcds.2020318
A. Ambrosetti and P. H. Rabinowitz, Dual variational methods in critical point theory and applications, J. Functional Analysis, 14 (1973), 349-381. doi: 10.1016/0022-1236(73)90051-7. Google Scholar
J. M. Arrieta, A. N. Carvalho and A. Rodríguez-Bernal, Attractors for parabolic problems with nonlinear boundary conditions. Uniform bounds, Comm. Partial Differential Equations, 25 (2000), 1-37. doi: 10.1080/03605300008821506. Google Scholar
A. V. Babin and M. I. Vishik, Attractor in Evolutionary Equations, Studies in Mathemathics and its Applications, 25, North-Holland Publishing Co., Amsterdam, 1992. Google Scholar
P. Bartolo, V. Benci and D. Fortunato, Abstract critical point theorems and applications to some nonlinear problems with "strong" resonance at infinity, Nonlinear Anal., 7 (1983), 981-1012. doi: 10.1016/0362-546X(83)90115-3. Google Scholar
N. Ben-Gal, Grow-Up Solutions and Heteroclinics to Infinity for Scalar Parabolic PDEs, Ph.D thesis, Brown University, 2010. Google Scholar
A. Biswas and S. Konar, Introduction to Non-Kerr Law Optical Solitons, Applied Mathematics and Nonlinear Science Series, Chapman & Hall/CRC, Boca Raton, FL, 2007 doi: 10.1201/9781420011401. Google Scholar
A. N. Carvalho, J. A. Langa and J. C. Robinson, Attractors for Infinite-Dimensional Non-Autonomous Dynamical Systems, Applied Mathematical Sciences, 182, Springer, New York, 2013. doi: 10.1007/978-1-4614-4581-4. Google Scholar
G. Cerami, Un criterio di esistenza per i punti critici su varietà illimitate, Rend. Accad. Sc. Lett. Inst. Lombardo, 112 (1978), 332-336. Google Scholar
M. Chen, X.-Y. Chen and J. K. Hale, Structural stability for time periodic one-dimensional parabolic equations, J. Differential Equations, 96 (1992), 355-418. doi: 10.1016/0022-0396(92)90159-K. Google Scholar
V. V. Chepyzhov and A. Y. Goritskiĭ, Unbounded attractors of evolution equations, in Properties of Global Attractors of Partial Differential Equations, , Adv. Soviet Math., 10, Amer. Math. Soc., Providence, RI, 1992, 85–128. Google Scholar
M. Clapp and L. A. Maia, A positive bound state for an asymptotically linear or superlinear Schrödinger equation, J. Differential Equations, 260 (2016), 3173-3192. doi: 10.1016/j.jde.2015.09.059. Google Scholar
D. G. Costa and C. A. Magalhães, Variational elliptic problems which are nonquadratic at infinity, Nonlinear Anal., 23 (1994), 1401-1412. doi: 10.1016/0362-546X(94)90135-X. Google Scholar
E. N. Dancer, The effect of domain shape on the number of positive solutions of certain nonlinear equations, J. Differential Equations, 74 (1988), 120-156. doi: 10.1016/0022-0396(88)90021-6. Google Scholar
F. Dickstein, N. Mizoguchi, P. Souplet and F. Weissler, Transversality of stable and Nehari manifolds for a semilinear heat equation, Calc. Var. Partial Differential Equations, 42 (2011), 547-562. doi: 10.1007/s00526-011-0397-8. Google Scholar
F. Gazolla and T. Weth, Finite time blow-up and global solutions for semilinear parabolic equations with initial data at high energy level, Differential Integral Equations, 18 (2005), 961-990. Google Scholar
D. Gilbarg and N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, Fundamental Principles of Mathematical Sciences, 224, Springer-Verlag, Berlin, 1983. doi: 10.1007/978-3-642-61798-0. Google Scholar
M. Grossi, A uniqueness result for a semilinear elliptic equation in symmetric domains, Adv. Differential Equations, 5 (2000), 193-121. Google Scholar
D. Henry, Geometric Theory of Semilinear Parabolic Equations, Lecture Notes in Mathematics, 840, Springer-Verlag, Berlin-New York, 1981. doi: 10.1007/BFb0089647. Google Scholar
H. Hofer, The topological degree at a critical point of mountain-pass type, in Nonlinear Functional Analysis and its Applications, Proc. Sympos. Pure Math., 45, Amer. Math. Soc., Providence, RI, 1986,501–509. Google Scholar
H. Hoshino and Y. Yamada, Solvability and soothing effect for semilinear parabolic equations, Funkcial. Ekvac., 34 (1991), 475-492. Google Scholar
[21] O. Ladyzhenskaya, Attractors for Semigroups and Evolution Equations, Cambridge University Press, Cambridge, 1991. doi: 10.1017/CBO9780511569418. Google Scholar
A. Pankov, Periodic nonlinear Schrödinger equation with application to photonic crystals, Milan J. Math., 73 (2005), 259-287. doi: 10.1007/s00032-005-0047-8. Google Scholar
L. E. Payne and D. H. Sattinger, Saddle points and instability of nonlinear hyperbolic equations, Israel J. Math., 22 (1975), 273-303. doi: 10.1007/BF02761595. Google Scholar
A. Pazy, Semigroups of Linear Operators and Applications to Partial Differential Equations, Applied Mathematical Sciences, 44, Springer-Verlag, New York, 1983. doi: 10.1007/978-1-4612-5561-1. Google Scholar
J. Pimentel and C. Rocha, A permutation related to non-compact global attractors for slowly non-dissipative systems, J. Dynam. Differential Equations, 28 (2016), 1-28. doi: 10.1007/s10884-014-9414-x. Google Scholar
P. Quittner, A priori bounds for global solutions of a semilinear parabolic problem, Acta Math. Univ. Comenian. (N.S.), 68 (1999), 195-203. Google Scholar
P. Quittner, Continuity of the blow-up time and a priori bounds for solutions in superlinear parabolic problems, Houston J. Math., 29 (2003), 757-799. Google Scholar
C. W. Steele, Numerical Computation of Electric and Magnetic Fields, Chapman & Hall, New York; International Thomson Publishing, London, 1997. doi: 10.1007/978-1-4615-6035-7. Google Scholar
G. I. Stegeman, D. N. Christodoulides and M. Segev, Optical spatial solitons: Historical Perspectives, IEEE J. Selected Topics Quantum Electronics, 6 (2000), 1419-1427. doi: 10.1109/2944.902197. Google Scholar
M. Tsutsumi, On solutions of semilinear differential equations in a Hilbert space, Math. Japon., 17 (1972), 173-193. Google Scholar
F. B. Weissler, Semilinear evolution equations in Banach spaces, J. Functional Analysis, 32 (1979), 277-296. doi: 10.1016/0022-1236(79)90040-5. Google Scholar
Youshan Tao, Michael Winkler. Critical mass for infinite-time blow-up in a haptotaxis system with nonlinear zero-order interaction. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 439-454. doi: 10.3934/dcds.2020216
Takiko Sasaki. Convergence of a blow-up curve for a semilinear wave equation. Discrete & Continuous Dynamical Systems - S, 2021, 14 (3) : 1133-1143. doi: 10.3934/dcdss.2020388
Tetsuya Ishiwata, Young Chol Yang. Numerical and mathematical analysis of blow-up problems for a stochastic differential equation. Discrete & Continuous Dynamical Systems - S, 2021, 14 (3) : 909-918. doi: 10.3934/dcdss.2020391
Manuel del Pino, Monica Musso, Juncheng Wei, Yifu Zhou. Type Ⅱ finite time blow-up for the energy critical heat equation in $ \mathbb{R}^4 $. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3327-3355. doi: 10.3934/dcds.2020052
Justin Holmer, Chang Liu. Blow-up for the 1D nonlinear Schrödinger equation with point nonlinearity II: Supercritical blow-up profiles. Communications on Pure & Applied Analysis, 2021, 20 (1) : 215-242. doi: 10.3934/cpaa.2020264
Alex H. Ardila, Mykael Cardoso. Blow-up solutions and strong instability of ground states for the inhomogeneous nonlinear Schrödinger equation. Communications on Pure & Applied Analysis, 2021, 20 (1) : 101-119. doi: 10.3934/cpaa.2020259
Daniele Bartolucci, Changfeng Gui, Yeyao Hu, Aleks Jevnikar, Wen Yang. Mean field equations on tori: Existence and uniqueness of evenly symmetric blow-up solutions. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3093-3116. doi: 10.3934/dcds.2020039
Ryuji Kajikiya. Existence of nodal solutions for the sublinear Moore-Nehari differential equation. Discrete & Continuous Dynamical Systems - A, 2021, 41 (3) : 1483-1506. doi: 10.3934/dcds.2020326
Nguyen Huy Tuan, Vo Van Au, Runzhang Xu. Semilinear Caputo time-fractional pseudo-parabolic equations. Communications on Pure & Applied Analysis, , () : -. doi: 10.3934/cpaa.2020282
Nguyen Anh Tuan, Donal O'Regan, Dumitru Baleanu, Nguyen H. Tuan. On time fractional pseudo-parabolic equations with nonlocal integral conditions. Evolution Equations & Control Theory, 2020 doi: 10.3934/eect.2020109
Michiel Bertsch, Danielle Hilhorst, Hirofumi Izuhara, Masayasu Mimura, Tohru Wakasa. A nonlinear parabolic-hyperbolic system for contact inhibition and a degenerate parabolic fisher kpp equation. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3117-3142. doi: 10.3934/dcds.2019226
Stanislav Nikolaevich Antontsev, Serik Ersultanovich Aitzhanov, Guzel Rashitkhuzhakyzy Ashurova. An inverse problem for the pseudo-parabolic equation with p-Laplacian. Evolution Equations & Control Theory, 2021 doi: 10.3934/eect.2021005
Taige Wang, Bing-Yu Zhang. Forced oscillation of viscous Burgers' equation with a time-periodic force. Discrete & Continuous Dynamical Systems - B, 2021, 26 (2) : 1205-1221. doi: 10.3934/dcdsb.2020160
Yi Zhou, Jianli Liu. The initial-boundary value problem on a strip for the equation of time-like extremal surfaces. Discrete & Continuous Dynamical Systems - A, 2009, 23 (1&2) : 381-397. doi: 10.3934/dcds.2009.23.381
Jean-Claude Saut, Yuexun Wang. Long time behavior of the fractional Korteweg-de Vries equation with cubic nonlinearity. Discrete & Continuous Dynamical Systems - A, 2021, 41 (3) : 1133-1155. doi: 10.3934/dcds.2020312
Hai Huang, Xianlong Fu. Optimal control problems for a neutral integro-differential system with infinite delay. Evolution Equations & Control Theory, 2020 doi: 10.3934/eect.2020107
Ke Su, Yumeng Lin, Chun Xu. A new adaptive method to nonlinear semi-infinite programming. Journal of Industrial & Management Optimization, 2020 doi: 10.3934/jimo.2021012
Linglong Du, Min Yang. Pointwise long time behavior for the mixed damped nonlinear wave equation in $ \mathbb{R}^n_+ $. Networks & Heterogeneous Media, 2020 doi: 10.3934/nhm.2020033
Juliana Fernandes Liliane Maia
|
CommonCrawl
|
Skip to main content Skip to sections
December 2020 , 20:7 | Cite as
Comparison of the acute toxicity, analgesic and anti-inflammatory activities and chemical composition changes in Rhizoma anemones Raddeanae caused by vinegar processing
Sha-Sha Wang
Shao-Yan Zhou
Xiao-Yan Xie
Ling Zhao
Yao Fu
Guang-Zhi Cai
Ji-Yu Gong
First Online: 15 January 2020
Part of the following topical collections:
As the dry rhizome of Anemone raddeana Regel, Rhizoma Anemones Raddeanae (RAR), which belongs to Ranunculaceae, is usually used to treat wind and cold symptoms, hand-foot disease and spasms, joint pain and ulcer pain in China. It is well known that the efficacy of RAR can be distinctly enhanced by processing with vinegar due to the reduced toxicity and side effects. However, the entry of vinegar into liver channels can cause a series of problems. In this paper, the differences in the acute toxicity, anti-inflammatory and analgesic effects between RAR and vinegar-processed RAR were compared in detail. The changes in the chemical compositions between RAR and vinegar-processed RAR were investigated, and the mechanism of vinegar processing was also explored.
Acute toxicity experiments were used to examine the toxicity of vinegar-processed RAR. A series of studies, such as the writhing reaction, ear swelling experiment, complete Freund's adjuvant-induced rat foot swelling experiment and cotton granuloma, in experimental mice was conducted to observe the anti-inflammatory effect of vinegar-processed RAR. The inflammatory cytokines of model rats were determined by enzyme-linked immunosorbent assay (ELISA). Liquid Chromatography-Quadrupole-Time of Flight mass spectrometer Detector (LC-Q-TOF) was used to analyse the chemical compositions of the RARs before and after vinegar processing.
Neither obvious changes in mice nor death phenomena were observed as the amount of vinegar-processed RAR in crude drug was set at 2.1 g/kg. Vinegar-processed RAR could significantly prolong the latency, reduce the writhing reaction time to reduce the severity of ear swelling and foot swelling, and remarkably inhibit the secretion of Interleukin-1β(IL-1β), Interleukin-6 (IL-6) and tumor necrosis factor-α (TNF-α) proinflammatory cytokines. The content of twelve saponins (e.g., Eleutheroside K) in RAR was decreased after vinegar processing, but six other types (e.g., RDA) were increased.
These results revealed that vinegar processing could not only improve the analgesic and anti-inflammatory effects of RAR but also reduce its own toxicity.
Trial registration
Rhizoma anemones Raddeanae Vinegar processing Anti-inflammation Analgesia
Rhizoma Anemones Raddeanae (RAR) is the dry rhizome of Anemone raddeana Regel, which belongs to Ranunculaceae. It is used to treat rheumatism and has been applied clinically in various fields for conditions such as wind and cold symptoms, hand-foot disease and spasms, joint pain and ulcer pain in China. Zheng's study [1] showed that RAR has an obvious anti-inflammatory effect on the classical inflammatory reaction. The extract of RAR can reduce the ear swelling of mice caused by xylene and foot swelling of rats caused by fresh egg white. Additionally, the RAR extract also decreases the proliferation of cotton ball granulosis in experimental rats. Zhang et al. [2] reported that the ethanolic extract of RAR could improve the primary and secondary inflammation of rats with adjuvant arthritis, and this improvement might be achieved by reducing the inflammatory factors of IL-1 beta, IL-6, IL-10 and TNF- alpha. In previous work, Gong et al. [3] indicated that the active groups of BU-6E in RAR had significant anti-inflammatory effects on RAW 264.7 cells. The anti-inflammatory effect occurred in both a time- and dose-dependent manner.
Vinegar-processed RAR, which has been performed according to Beijing Chinese medicine processing standards [4], has been used in the traditional ancient prescription of Zaizao pills. It was reported that processing with vinegar can reduce the toxicity or side effects of Radix Bupleuri to enhance its drug efficacy [5]. However, the entry of vinegar into liver channels can cause a series of effects, such as astriction, detoxification, water-following process, and analgesia.
Compared with the published researches which preliminarily/detailedly demonstrate the pharmacology effect of RAR [1, 2, 3], no related report has been published to discuss neither the pharmacology action of the vinegar-processed RAR nor the composition change of RAR in the vinegar-processing. In this paper, the comparison of differences in acute toxicity [6, 7], anti-inflammatory and analgesic effects between RAR and vinegar-processed RAR was conducted systematically. The changes in the chemical composition of RAR and vinegar-processed RAR were investigated, and the mechanism of vinegar processing was explored preliminarily to provide available scientific support for the mechanism study and clinical application of vinegar-processed RAR [8, 9].
RAR medicine was purchased from Xiancao Herb Medical Development Co. Ltd. (batch number: 170522; Jilin, China). The RAR was identified as the dry rhizome of Anemone raddeana Regel, which belongs to Ranunculaceae and was classified according to Chinese Pharmacopeia (2015).
The Raddeanin A (RDA) reference substance (purity ≥98%) was obtained from the National Institutes for Food and Drug Control (batch number: 89412793; Beijing, China). Rice vinegar was purchased from Beijing Er Shang Longhe Food Co., Ltd, (Beijing, China). Acetic acid and methanol were of chromatographic grade, and other reagents were of analytical grade. Ultra-pure water was applied throughout the experiment. Xylene was purchased from Beijing Chemical Reagent Factory (batch number: 20170216; Beijing, China). Glacial acetic acid was purchased from Beijing Chemical Reagent Factory (batch number: 20160818; Beijing, China). Sodium carboxymethyl cellulose (CMC-Na; 0.1%) was purchased from Beijing Chemical Reagent Factory (batch number: 20170120; Beijing, China). Complete Freund's adjuvant was obtained from Beijing Ding Guo Chang Sheng Biotechnology (batch number: AC-0051; Beijing, China). IL-1β, IL-6 and TNF-α ELISA kits were manufactured by R&D Systems inc. (batch numbers: IL-1β (RLB00), IL-6 (R6000B), and TNF-α ELISA (RTA00); USA).
Kunming mice (SPF level) weighing 18 to 22 g and SD rats weighing 220 to 240 g were all provided by Yi Si Experimental Animal Centre (animal certificate number: SCXK (JI)-2016–0002; Jilin, China). Both mice and rats were healthy and were never used for experiments. All the experiments were conducted according to the guidelines of the National Research Committee and Animal Ethics Committee of Changchun University of Traditional Chinese Medicine. The rats and mice were placed in a standard breeding room. Every ten mice and five rats were placed into one cage respectively to be separated from each other. The ambient temperature was 18–22 °C. The temperature was recorded daily and was adjusted if necessary. The wet and dry bulb thermometers were used to estimate and measure the humidity every day. The relative humidity was adjusted within the range of 50–60%. The noise was set below 60 dB. The lights were turned on at 8:00 and turned off at 20:00 every day. The ventilation was operated 8 to 20 times per hour with an air flow rate of 10 to 25 cm3 per minute. The litter in the cages of mice and rats was cleaned every 2 days. The cages were washed and disinfected once a week. Clean, pollution-free drinking water was supplied timely with drinking bottles, which were washed every 2 days. The breeding room was sprayed with 0.1% benzalkonium perm spray every month and was sterilized with peracetic acid every quarter.
HPLC analysis was carried out using a Shimadzu Model 2030 HPLC instrument (Shimadzu, Japan). Drugs were weighed using an AB135-S electronic analytical balance (Mettler-Toledo, Shanghai, China). Ultrasonic treatment was carried out using a KQ3200 DB CNC ultrasonic cleaning device (KunShan Ultrasonic Instruments Co., Ltd., Shanghai, China). An HH-S24 electric constant temperature water bath (Jintan Automation Instruments Co., Ltd., Jiangsu, China) was used for heating. The intelligent hot plate YLS-6B (Shanghai Precision Instrument Co., Ltd., Shanghai, China) was used in mouse hot-plate experiments. A Vernier caliper (Measuring Instruments Co., Ltd. Nanjing Su) with a scale range of 0–150 nm was used in the complete Freund's adjuvant-induced rat foot-swelling experiments. The optical density (OD) values were measured using an iMark microplate reader (Bio-Rad, USA). The mass spectrum results were obtained using an LC-Q-TOF mass spectrometer (Mass Hunter workstation; Agilent, USA).
Pharmacological experimental study
Preparation of samples for experiment
RAR sample: 210 g of crude RAR was ground, followed by extraction with water three times. Thereafter, decoction was conducted with 8 volumes of water for 2 h, followed by 6 volumes of water for 1.5 h, and then 6 volumes of water for 1 h. The obtained extract solution was collected and evaporated to dryness. The extraction yield of RAR was calculated as 16.84%.
Vinegar-processed RAR sample: 210 g of crude RAR was mixed with 42 mL of vinegar at the ratio of 5:1. After soaking for 2 h, the mixture was stir-fried at 120 °C for 10 min. The extraction steps were the same as that of RAR, and the extraction yield of vinegar-processed RAR was calculated as 16.54%.
According to the Chinese Pharmacopoeia (2015), the dosage of RAR was in the range of 1 to 3 g, and the middle value of 2 g was adopted in this study. Thus, the dosage of 2 g was selected to strengthen the pharmacological effects and select the effective component in RAR. According to the average weight of people at 60 kg, the normal dosage of crude drug was set as 0.03 g/kg of body weight. Thus, the dose of crude drugs of 5 times, 10 times, and 20 times, corresponding to 1.05 g/kg, 2.1 g/kg, and 4.2 g/kg, respectively, were chosen as the low, medium and high doses, respectively. Considering the extraction yield of 16, 0.1% CMC-Na was used to prepare three solutions at different concentrations (168 mg/kg, 336 mg/kg and 672 mg/kg) before oral administration for the initial efficacy experiment.
Acute toxicity experiment in mice
The fifty mice fasted for 12 h before experiment were randomly divided into five equal groups as follows: blank group, RAR group 1, RAR group 2, vinegar-processed RAR group 1, and vinegar-processed RAR group 2. Each group consisted of ten mice (male and female in half). RAR group 1 and vinegar-processed RAR group 1 were intragastrically administered once a day with 4.2 g/kg of crude drug based on the weight of each mouse (ig). RAR group 2 and vinegar-processed RAR group 2 were intragastrically administered once a day with 2.1 g/kg of crude drug based on the weight of each mouse (ig). The blank group was intragastrically administered with an equal volume of distilled water. The mice were observed per 15-min interval since being intragastrically administered from 0 to 2 h, per 0.5-h interval from 2 to 4 h, per 1-h interval from 4 to 8 h, and per 4-h interval from 8 to 24 h, respectively. Daily observation was performed for 14 days. The weight and intake conditions of the mice were recorded, as well as the potential toxic and death phenomena [10, 11, 12, 13].
Writhing reaction induced by acetic acid in mice
The forty mice (20 male and 20 female) were divided into five groups. The positive control group was given 3 mg/kg of indomethacin, and the blank group was intragastrically administered with 10 mL/kg of distilled water. RAR and vinegar-processed RAR at 2.1 g/kg was intragastrically administered once a day for 7 days. The mice were then injected intraperitoneally (ip) with 10 mL/kg of 0.6% acetic acid 1 h after the last administration, and the time of the first writhing reaction that occurred after three minutes of incubation was recorded as the latency. The writhing reaction times for each mouse within 15 min after modelling was also recorded [14, 15].
Hot-plate study in mice
The time since the mice started to lick the hind feet was viewed as the pain threshold. The forty female mice (referring to the pharmacological experimental method [16], hot plates would burn the genitals of male mice) with a pain threshold between 5 s and 30 s were selected and divided randomly into four groups and were intragastrically administered as mentioned above. After 6 days of continuous intragastric administration, the values of the pain threshold after 0.5-h, 1.5-h and 2-h administration of each group were determined by the hot-plate method [17, 18].
Xylene induces ear swelling in mice
The twenty male mice and twenty female mice were randomly and equally divided into the model group, positive control group, RAR group and vinegar-processed RAR group. The positive control group was intragastrically administered 100 mg/kg of indomethacin. Additionally, 10 mL/kg of distilled water was intragastrically administered to the blank group. RAR and vinegar-processed RAR were intragastrically administered with 2.1 g/kg once a day for 7 days. After 1 h of the last intragastric administration, all the right auricles of the mice were evenly coated with 40 μL of xylene, and the mice were sacrificed by cervical dislocation after 30 min of xylene treatment. The aures unitas were carefully removed and punched at the same place with the same pore diameter. The aures unitas were weighed immediately to calculate ear swelling [19, 20]. The equation was as follows:\( ES\left(\%\right)=\frac{\left( REW- LEW\right)}{LEW}\times 100\% \)
ES was utilized to explain ear swelling. REW was utilized to explain the weight of the right ear. LEW was utilized to explain the weight of the left ear.
Complete Freund's adjuvant-induced rat foot swelling
The forty male rats (referring to the pharmacological experimental method, with recommendations for male rats) were randomly and equally divided into the blank group, positive control group, RAR group and vinegar-processed RAR group. The rats in the blank group were intragastrically administered 10 mL/kg of distilled water. Next, 5 mg/kg of methotrexate was intragastrically administered to the positive control group. The RAR and vinegar-processed RAR were intragastrically administered with 2.1 g/kg of RAR and vinegar-processed RAR once a day for 6 days (according to previous experiments, the optimal dosing period was 6 days; thus, 6 days was chosen). After 1 h of the last administration, 100 μL of Freund's adjuvant was injected subcutaneously into the right hind foot of the rats. The volume of the right hind foot was measured at 0.5 h, 1 h, 2 h, 3 h, 4 h and 6 h after modelling. The foot swelling was defined as the difference in the value of the right posterior foot volume before and after administration [21, 22, 23]. The inhibition rate of the foot swelling degree was calculated using the following equation:
$$ TR\left(\%\right)=\frac{\left( BAD- RAD\right)}{BAD}\times 100\% $$
TR was utilized to explain the tumescent inhibition rate. BAD was utilized to explain the average degree of swelling in the blank group. RAD was utilized to explain the average swelling degree of the RAR group or vinegar-processed RAR group.
Granuloma model
The fifty male mice (referring to Xu Shuyun's third edition of pharmacology experimental methodology, with recommendations for male mice) were implanted with approximately 10 mg of sterilized cotton balls by axillary subcutaneous surgery under sterile conditions. Penicillin was dropped onto the wound twice a day postoperatively. The mice were randomly divided into five groups—the blank group, model group, positive control group, RAR group, and vinegar-processed RAR group. Intragastric administration was conducted once a day, and then the mice were sacrificed at the seventh day after continuous administration. The wet weight of the cotton balls removing the granulation tissue was determined. After drying at 60 °C in an oven for three days, the dry weight of the cotton balls was determined to calculate the granuloma weight [24, 25].
Determination of inflammatory factors
The blood samples of rats that have been under complete Freund's adjuvant-induced foot swelling were taken from their abdominal aorta. The obtained samples were then separated, followed by extraction of the serum and plasma by centrifugation. Next, the rats were sacrificed by cervical dislocation. The inflammatory factors IL-1β, IL-6 and TNF-α were measured strictly according to the specification of the ELISA Kit. The OD values were measured using the iMark reader at 450 nm.
All the data were expressed as the mean values ± standard deviation (SD). The data were subjected to statistical analysis of variance (ANOVA) by comparing means with Tukey's experiment (p < 0.05). The IBM SPSS 20 statistical programme was used for all statistical analyses.
Chemical composition analysis
Preparation of reference substance
The moderate RDA reference was weighed accurately and dissolved in a volumetric flask of 5 mL with methanol. The concentration of the solution was fixed at 1.07 mg/mL [26].
Preparation of the experiment sample
RAR powder (2.5 g) was screened with a No. 5 sieve and dissolved with 70% ethanol in a 50-mL conical flask as described previously. According to previous studies, 70% ethanol has the highest efficiency to extract substances from drugs. After 40 min of ultrasonic treatment, the solution was filtered, and the filtrate was evaporated to dryness. The residue was dissolved and diluted with methanol to 25 mL.
Liquid chromatography (LC) and mass spectrometry (MS) conditions
LC conditions: The ZORBAX SB-C18 column (50 mm × 2.1 mm, 5 μm) was used as the chromatographic column. Gradient elution was carried out with acetonitrile-0.1% formic acid aqueous solution. The flow rate was set as 0.3 mL/min. The column temperature was fixed at 30 °C, and the injection volume was 1 μL.
MS conditions: ESI was selected as the ion source, and MS1 was chosen as the collection method. The acquisition range varied from 100 to 2000. The dry gas temperature was set as 35 °C, and the flow rate was fixed at 8 L/min. The spray pressure was set as 35 psi, and negative ion was selected as the ion type [27, 28, 29, 30].
Pharmacological results
Acute toxicity in mice
After intragastric administration with a certain volume of RAR and vinegar-processed RAR, one mouse was found dead and one was in poor condition in RAR group 1. No dead mice were found in RAR group 2. Regarding the vinegar-processed RAR groups, the number of dead mice was one and zero for groups 1 and 2, respectively. Neither visible changes nor death of mice nor obvious differences in the physiological activity occurred between the RAR groups and vinegar-processed RAR groups in the middle dose groups compared with those of the blank group after 24 h of observation. It was suggested that no acute toxicity occurred with the middle dose of RAR or vinegar-processed RAR for the mice. Based on these results, the middle doses of RAR and vinegar-processed RAR were used for subsequent experiments. After the mouse acute poisoning minimum lethal dose (MLD) test, the median lethal dose (LD50) value of the vinegar-processed RAR was 112.64 g/kg, and the MLD value was temporarily 151.14 g of crude drug/kg·d. The weight changes of the mice are summarized and shown in Fig. 1a and Additional file 1 below.
Open image in new window
A The weight change map of mouse in acute toxic experiment (Note: RAR group 1 was intragastrically administered once a day with 4.2 g/kg of crude drug based on the weight of each mouse (ig),and vinegar-processed RAR group 1was intragastrically administered once a day with 4.2 g/kg of vinegar-processed crude drug based on the weight of each mouse (ig).RAR group 2 was intragastrically administered once a day with 2.1 g/kg of crude drug based on the weight of each mouse (ig),and vinegar-processed RAR group 2 intragastrically administered once a day with 2.1 g/kg of vinegar-processed crude drug based on the weight of each mouse (ig). n = 10). B Effect of RAR and its processed products on the writhing reaction in mice (Note: compared with the blank Group *P < 0.05, **P < 0.01, n = 10). C Effects of RAR and processed products on ear swelling in mice (Note: compared with the model control Group *P < 0.05, **P < 0.01, n = 10). D Cotton ball weight in rats (Note: compared with the model Group *P < 0.05, **P < 0.01, n = 10). E Inflammatory factor contents of IL-1β, IL-6 and TNF-α levels in each group (Note: compared with the blank Group: *P < 0. 05, **P < 0. 01, compared with the model Group: ΔP < 0.05, ΔΔP < 0.01, n = 10). See Additional file 1 for additional data.
Contrasting total ion chromatogram of RAR and vinegar-processed RAR (See Additional file 1 for additional data)
Effect of the writhing reaction induced by vinegar-processed RAR in mice
As shown in Fig. 1b and Additional file 1, compared with the blank group, the use of vinegar-processed RAR significantly extended the latency of the writhing reaction as well as reduced the torture times in mice effectively.
Effect of the pain threshold in mice subjected to a hot plate
According to the observation of experimental phenomena, the data of pain thresholds fluctuated and some were repeated after 2.0 h of administration. It was deduced that the memory of mice caused by the chaos of the pain threshold might be formed after repeated stimulation. However, based on the consideration mentioned above, although there was a remarkable increase in the pain threshold in the vinegar-processed RAR group and positive control group after 1.0 h of administration, the data were neglected in this work.
Effect of ear swelling induced by vinegar-processed RAR in mice
As shown in Fig. 1c and Additional file 1, RAR could obviously reduce the swelling degree of the ears of mice with significant differences (P < 0.05) compared with that of the model group. A more prominent phenomenon of swelling degree could be observed in the vinegar-processed RAR group with a more significant difference (P < 0.01).
Effect on foot swelling induced by vinegar-processed RAR in rats
In Table 1, the degree of foot swelling of the model group increased to 80.32% compared with that of the blank group, indicating that the model was set up successfully. After 4 h, the foot swelling degree of the foot was significantly decreased in each drug group. Compared with the model group, the foot swelling degree of the vinegar-processed RAR group was decreased, which was significantly different from that of the model group at 2 h. The foot swelling degree of vinegar-processed RAR was much higher than that of the RAR Group, indicated that the vinegar-processed RAR is more effective in the foot swelling treatment.
Effects of the swelling of feet in rats induced by vinegar-processed RAR (\( \overline{\mathrm{x}} \) ± s, n = 10)
Blank group
Methotrexate group
RAR group
Vinegar-processed RAR group
Dose (mg/kg)
0.5 h Swell (mm)
4.32 ± 0.50
0.5 h Inhibition Rate (%)
4.92 ± 0. 83
6.87 ± 0.36*
6.32 ± 0.26**
Effect on the granuloma model reaction induced by vinegar-processed RAR in rats
In Fig. 1d and Additional file 1, compared with the blank group, the cotton ball weight of the vinegar-processed RAR group was reduced, displaying a significant difference. However, the difference in the cotton ball wet weight between the vinegar-progressed RAR group and positive control group was negligible, indicating that the vinegar-processed RAR could be utilized as an effective candidate to inhibit the inflammation proliferative phase.
Determination of inflammatory factors in inflammatory rats
Standard curves of IL-1β, IL-6 and TNF-α were established according to the specification of the ELISA Kit. Inflammatory factor contents were calculated and are summarized in Fig. 1e and Additional file 1.
Compared with the blank group, the contents of inflammatory factors were increased by 120 pg/mL, 49 pg/mL, and 72 pg/mL, corresponding to nearly 5 times, 1.5 times and 2 times, respectively. Compared with the model group, the contents of the inflammatory factors of IL-1β, IL-6 and TNF-α were significantly decreased in the vinegar-processed RAR group, which effectively reduced the inflammatory cytokine content and achieved the anti-inflammatory effect.
Mass spectrum results
The chemical structures of the major saponins in RAR were analysed by MS analysis [30, 31].
These saponins in RAR would interact with each other due to vinegar processing. The content of some saponins would increase, while others would decrease. As shown in Fig. 2, based on the information obtained from the total ion flow chart and mass spectrum, it can be calculated that the molecular weights of the components in RAR were 734, 750, 896, 912, 1204, 1220, 1236, 1336, 1366, 1382, 1498, and 1528. Due to only the first-order mass spectrometry being obtained, these components may be deduced as Eleutheroside K, Raddeanoside B, Saponin P E, Raddeanoside R12, Raddeanoside A, Raddeanoside R6, Raddeanoside R13, Raddeanoside D, Hederasaponin B, Raddeanoside R14, Leonloside D, Raddeanoside R15, Raddeanoside R8, Raddeanoside R9, R18, Hederacholichiside F, Raddeanoside R16, and Raddeanoside R17, R10.
Based on the retention time, it can also be concluded that the contents of Eleutheroside K, Raddeanoside B, Saponin P E, Raddeanoside R12, Raddeanoside R13, Raddeanoside D, Hederasaponin B, Raddeanoside R14, Raddeanoside R15, Raddeanoside R9, R18, and Hederacholichiside F were decreased after RAR vinegar processing. By contrast, the contents of Raddeanoside A, Raddeanoside R6, Leonloside D, Raddeanoside R16, and Raddeanoside R17, R10 were increased.
In recent studies, certain kinds of triterpenoid saponins which separated from RAR have been reported to be responsible for its bioactivities, such as analgesic and anti-inflammation effects [31]. Raddeanoside A, for example, has been identified as a classical triterpenoid saponin amongst these components with obvious analgesic and anti-inflammatory effects [16]. It can be deduced that the content of these components may influence the pharmacological effect of RAR. Vinegars, as traditional fermented foods, is not only used to provide health and therapeutic effects due to their bioactive components [32], but also utilized in traditional Chinese PaoZhi processing technique to enhance the pharmacological effect of medicinal materials. Li reported that the hepatoprotective effect of Radix Bupleuri was successfully strengthened after vinegar process, which changed the distribution of contents of the Radix Bupleuri's saponin components [33]. In this study, acute-subacute inflammatory models under non-specific inflammatory models (mouse ear swelling model and rat foot-swelling model) and the inflammatory proliferation phase (granuloma formation) were selected as animal models to perform more comprehensive anti-inflammatory exploration of vinegar-processed RAR [34, 35]. According to the results obtained in the determination of inflammatory factors in inflammatory rat experiments, it was suggested that vinegar-processed RAR significantly inhibited the expression of proinflammatory cytokines IL-1β, IL-6 and TNF-α. We deduced that vinegar processing can promote the elimination of inflammation by inhibiting the secretion of proinflammatory cytokines and provides the preliminary basis for the study of RAR anti-inflammatory basic research. The results of mouse ear swelling experiments and rat foot-swelling experiments showed that the anti-inflammatory effect of vinegar-processed RAR was much better than that of RAR. Vinegar processing could strengthen the anti-inflammatory effects of the original medicine. The analgesic experiments were carried out by chemical stimulation (writhing reaction induced by glacial acetic acid) and thermal stimulation experiments (hot plate experiment). The analgesic effects, including prolonging the latency of the writhing reaction and increase in the pain thresholds, were caused by vinegar-processed RAR. After vinegar processing, the response of the stimulated mice towards the outside was very slow, indicating that the vinegar processing can improve the effect of analgesia. Based on the results obtained from acute toxicity experiments, we deduced that RAR toxicity was reduced by vinegar processing.
The data in mass spectrometry analysis indicated that the content of some saponins increased, while others decreased after vinegar processing. The cause might be the interaction among the saponins in RAR utilized in the theory of Traditional Chinese medicine as the drug's tendency and nature, which would change the substance content and achieve the best drug effect. The triterpenoid saponins in the genus Anemone were mainly divided into C-3 monodesmosidic saponins and C-3, 28 bisdesmosidic saponins based on their carbohydrate chain structure. Regarding the monodesmosidic saponins, the fragment ions at C-3 were removed in sequence. Concerning the bisdesmosidic saponins, the subsequent cleavage of monosaccharide units in the C-28 position occurred preferentially. The ion peak formed by removing the entire carbohydrate chain was treated as the base peak. Multi-stage MS was further carried out taking the base peak as the parent ion followed by fracture of the carbohydrate chain at the C-3 position.
MS-MS analysis was used to determine the structure of several constituents. According to the data in Table 2, it was shown that the content of Raddeanoside A increased by 20.54% after vinegar process. This result might be caused by the ring-opening reaction of epoxy ring of other components in acid environment whose contents decreased. The increase of Raddeanoside A would be beneficial to obtain strong analgesic and anti-inflammatory effects mentioned above [36]. Besides, due to the existence of isomers, for instance, the isomers of Raddeanoside R3 and Raddeanoside R6 with the same molecular weight of 897, these mixtures should be separated before MS analysis. The fluctuation in the content of these components may have a certain improvement on their anti-inflammatory effect, which still needs to be further explored. The contents of R17, R10 and R16 in the vinegar-processed RAR were significantly increased, likely contributing to better control of the inflammation. To the Raddeanoside 17, R10 and R16, the substituent groups located in R2, R3 and R4 of them are CH3, −gic6-gic4-rha and CH3, respectively. R1 represented different types of substituents attached to various types of arabinose. Thus, a better anti-inflammatory effect might be obtained using saponins with the substituents listed above. In addition to that, the anti-inflammatory effect would improve further as the content of the substituents is increased.
RAR and vinegar-processed RAR retention time and peak area data statistics
tR/min
Peak area
Peak area Drop Ratio (%)
Vinegar-processed RAR
13.49%↑
Raddeanoside R9、R18、Hederacholichiside F
4.80%↑
78.90%↓
Leonloside D
632.20%↓
Raddeanoside R17、R10
Raddeanoside R16
Raddeanoside R8
Raddeanoside D、Hederasaponin B
Raddeanoside A、Raddeanoside R6
Saponin P E、Raddeanoside R12
Eleutheroside K、Raddeanoside B
Vinegar may react with the substituents on different saponins to transform R2, R3 and R4 to -CH3, −gic6-gic4-rha and CH3, respectively. Thus, the content of the anti-inflammatory active ingredient is increased accordingly. Another possible reason is that the combination of organic acid in vinegar with the alkaline substances in RAR in vinegar processing reduced the toxicity, improving the anti-inflammatory effect of vinegar-progressed RAR. As the main effective ingredient of RAR recorded in the pharmacopoeia, the content of RDA was increased by vinegar processing, contributing to the inflammation improvement. The improvement in the inflammation of other ingredients requires further study.
Vinegar-processed RAR could significantly prolong the latency of the writhing reaction in mice, reduce the times of mouse writhing as well as the severity of ear swelling and foot swelling. The secretion of IL-1β, IL-6 and TNF-α proinflammatory cytokines could be remarkably inhibited. Vinegar processing could improve the analgesic and anti-inflammatory effects of RAR and reduce its toxicity. The content of twelve saponins (e.g., Eleutheroside K) in RAR was decreased after vinegar processing but six other types (e.g., RDA) were increased.
Supplementary information accompanies this paper at https://doi.org/10.1186/s12906-019-2785-0.
Dr. GJY and Dr. CGZ conceived, designed the research and revised the manuscript; WSS participated in its design and performed experiments; ZL and ZSY drafted the manuscript. All authors have read and approved the final manuscript.
All sources of funding for the research reported were undertaken by Dr. Gong JY.
The studies were approved by the Animal Ethics Committee of Changchun University of Chinese Medicine.
12906_2019_2785_MOESM1_ESM.docx (21 kb)
Additional file 1: Supplementary material about pictures in the manuscript.
Zheng J, Xu JH, Lu JC. The primary exploration of the anti-inflammatory and analgesic effect on RAR extract [J]. Guide China Med. 2012;10(26):1–2.Google Scholar
Zhang XP, Cai GZ, An N, Wang YY, Gong JY. RAR extract of anti-adjuvant arthritis effect. Chinese Pharmacol Clin. 2016;32(02):131–4.Google Scholar
Gong JY, An N, Wang SS, Yun XL, Li ZM, Cai GZ. Study on anti-inflammatory effect of BU-6E, a two-point active component, on LPS-induced inflammation in vivo and in vivo. Chinese Pharmacol Clin. 2017;33(02):67–70.Google Scholar
Beijing Municipal Drug Administration Bureau. Beijing Chinese medicine Pieces processing standards. Beijing: Chemical Industry Press, 2008; 58–59.Google Scholar
Zhao Y, Wang YJ, Zhao RZ, Xiang FJ. Vinegar amount in the process affected the components of vinegar-baked Radix Bupleuri and its hepatoprotective effect. BMC Complem Altern M. 2016;346.Google Scholar
Abere TA, Okoye CJ, Agoreyo FO, Eze GI, Jesuorobo RI, Egharevba CO, Aimator PO. Antisickling and toxicological evaluation of the leaves of Scoparia dulcis Linn (Scrophulariaceae). BMC Complement Altern Med. 2015;15(1):1–7.CrossRefGoogle Scholar
Jeong S-J, Huh J-I, Shin H-K. Cytotoxicity and subacute toxicity in Crl: CD (SD) rats of traditional herbal formula Ojeok-san. BMC Complement Altern Med. 2015;15(1):38.CrossRefGoogle Scholar
Afsar1 T, Khan1 MR, Razak2 S, Ullah1 S, Mirza1 B. Antipyretic, anti-inflammatory and analgesic activity of Acacia hydaspica R. Parker and its phytochemical analysis. BMC Complement Altern Med. 2015;15(1).Google Scholar
Othman AR, Abdullah N, Ahmadm S, Ismail IS, Zakaria MP. Elucidation of in-vitro anti-inflammatory bioactive compounds isolated from Jatropha curcas L. plant root. BMC Complementary and Alternative Medicine. 2015;15(1):11.CrossRefGoogle Scholar
Xie YZ, Sun R, Zhang YN, Qian XL. Yishenwu hair Oral liquid on acute toxicity of mice. Chinese J Drug Adm. 2011;8(05):272–4.Google Scholar
Carneiroab MLB, Lopesb CAP, Miranda-Vilelacd AL, Joanittie GA, da Silvae ICR, Mortarif MR, de Souzag AR, Báob S n N. Acute and subchronic toxicity of the antitumor agent rhodium (II) citrate in Balb/c mice after intraperitoneal administration. Toxicol Rep. 2015:1086–100.CrossRefGoogle Scholar
Ke LJ, Gao GZ, Shen Y, Zhou JW, Rao PF. Encapsulation of Aconitine in Self-Assembled Licorice Protein Nanoparticles Reduces the Toxicity In Vivo. Nanoscale Res Lett. 2015;10(1):449.CrossRefGoogle Scholar
Evaluation of acute toxicity and gastroprotective activity of curcuma purpurascens BI. rhizome against ethanol-induced gastric mucosal injury in rats. BMC Complement Altern Med. 2014:378.Google Scholar
Gupta AK, Parasar D, Sagar A, Choudhary V, Chopra BS, Garg R, Ashish, Khatri N. Analgesic and Anti-Inflammatory Properties of Gelsolin in Acetic Acid Induced Writhing, Tail Immersion and Carrageenan Induced Paw Edema in Mice. PLoS One. 2015;10(8):e0135558.CrossRefGoogle Scholar
Oh YC, Jeong YH, Cho WK, Ha JH, Gu MJ, Ma JY. Anti-inflammatory and analgesic effects of pyeongwisan on LPS-stimulated murine macrophages and mouse models of acetic acid-induced writhing reaction and xylene-induced ear swelling. Int J Mol Sci. 2015;16(1):1232–51.CrossRefGoogle Scholar
Wang BX, Cui HC, Liu AJ. Studies on pharmacological action of saponin of the root of Anemone Raddeana. J Tradit Chin Med. 1985;5(1):61–4.PubMedGoogle Scholar
Nasser A, Bjerrum OJ, Heegaard A-M, ller ATM, Larsen M, Dalb ge LS, Dupont E, Jensen TS, ller LBM. Impaired behavioural pain responses in hph-1 mice with inherited deficiency in GTP cyclohydrolase 1 in models of inflammatory pain. Molecular Pain. 2013:5.Google Scholar
Masocha W1, Kombian SB1, Edafiogho IO2. Evaluation of the antinociceptive activities of enaminone compounds on the formalin and hot plate experiments in mice. Sci Rep. 2016:21582.Google Scholar
Jain AP, Bhandarkar S, Rai G, Yadav AK, Lodhi S. Evaluation of Parmotrema reticulatum Taylor for Antibacterial and Antiinflammatory Activities. Indian J Pharm Sci. 2016;78(1):94–102.CrossRefGoogle Scholar
Gou KJ, Zeng R, Dong Y, Hu QQ, Hu HWY, Maffucci KG, Dou QL, Yang QB, Qin XH, Qu Y. Anti-inflammatory and Analgesic Effects of Polygonum orientale L. Extracts. Front Pharmacol. 2017.Google Scholar
Kumar V, Bhatt PC, Rahman M, Patel DK, Sethi N, Kumar A, Sachan NK, Kaithwas G, Al-abbasi FA, Anwar F, Verma A. Melastoma malabathricum Linn attenuates complete freund's adjuvant-induced chronic inflammation in Wistar rats via inflammation response. BMC Complement Altern Med. 2016;16(1).Google Scholar
Zhang, Dong, Dong, Zhang, Li. Investigation of the effect of phlomisoside F on complete Freund's adjuvant-induced arthritis. Exp Ther Med. 2017;13(2):710–6.CrossRefGoogle Scholar
Chen Y, Wang QW, Zuo J, Chen JW, Li X. Anti-arthritic activity of ethanol extract of Claoxylon indicum on Freund's complete adjuvant-induced arthritis in mice. BMC Complement Altern Med. 2017;17(1):11.CrossRefGoogle Scholar
Ufimtseva E. Mycobacterium-host cell relationships in granulomatous lesions in a mouse model of latent Tuberculous infection. Biomed Res Int. 2015:948131.Google Scholar
Ufimtseva E. Investigation of Functional Activity of Cells in Granulomatous Inflammatory Lesions from Mice with Latent Tuberculous Infection in the New Ex Vivo Model. Clin Dev Immunol. 2013:371249.Google Scholar
National Pharmacopoeia Commission. People's Republic of China Pharmacopoeia (a). Beijing: Chemical Industry Press, 2015; 168–169.Google Scholar
Miao LX, Xun LJ, Yan CC, Yan HX, Lin FC, Fang PG. Fast Screening of 26 Highly Toxic and Poisitive Pesticides in Tea Using LC-Q-TOF / MS:394–7.Google Scholar
Guan R, Sun LL, Guan SY, Yang LW. Studies on the HPLC fingerprinting and the LC-Q-TOF-MS of Trichosanthes extract and Keteling capsules. 22(03):57–62.Google Scholar
Niu Y, Wang SF. LC-Q-TOF-MS and LC-IT-MS ~ n analysis of Danggui Shaoyao San chemical constituents Chinese herbal medicine. 2014;45(08):1056–62.Google Scholar
Li F, Xu KJ, Ding LS, Wang MK. Simultaneous analysis of triterpenoid saponins in the RAR by silica gel column chromatography-electrospray multistage mass spectrometry. Chin J Anal Chem. 2011;39(02):219–24.Google Scholar
Zhou HL, Shun YX, Li Y, Wang B, Liu DY. Progress in studies on chemical constituents and pharmacological effect of Anemone Raddeana regel. Li Shizhen Med Mater Med Res. 2007;18(5):1239–41.Google Scholar
Xia T, Yao JH, Wang JK, Zhang J, Wang M. Antioxidant activity and hepatoprotective activity of Shanxi aged vinegar in hydrogen peroxide-treated HepG-2 cells [C]. International Conference on Applied Biotechnology. Springer, Singapore, 2016.Google Scholar
Zhao Y, Wang YJ, Zhao RZ, Xiang FJ. Vinegar amount in the process affected the components of vinegar-baked Radix Bupleuri and its hepatoprotective effect. BMC Complement Altern Med. 2016;26(346).Google Scholar
Li SY, Li Z, Wang SM, Chen Y. Rapid identification of saponins from traditional Chinese medicine by high performance liquid chromatography-mass spectrometry-mass spectrometry. J Hubei Univ (Nat Sci Ed). 2000;04:382–6.Google Scholar
Xu SY. Editor. Pharmacological experimental method. Beijing: People's health publishing house, 1992; 201-206. 36. Bai ZL, Wang Y, Jia TZ. The exploration of components of saponin in Chaihu in different vinegar-processing products. Chinese Traditional Patent Medicine. 2008;30(7):1021–3.Google Scholar
© The Author(s). 2020
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
1.Changchun University of Chinese MedicineChangchunChina
Wang, SS., Zhou, SY., Xie, XY. et al. BMC Complement Med Ther (2020) 20: 7. https://doi.org/10.1186/s12906-019-2785-0
Received 13 April 2018
Accepted 02 December 2019
First Online 15 January 2020
DOI https://doi.org/10.1186/s12906-019-2785-0
Publisher Name BioMed Central
|
CommonCrawl
|
Concentration of ground state solutions for quasilinear Schrödinger systems with critical exponents
CPAA Home
Scattering in the weighted $ L^2 $-space for a 2D nonlinear Schrödinger equation with inhomogeneous exponential nonlinearity
September 2019, 18(5): 2717-2733. doi: 10.3934/cpaa.2019121
Singular Hardy-Trudinger-Moser inequality and the existence of extremals on the unit disc
Xumin Wang ,
School of Mathematical Sciences, Beijing Normal University, Beijing 100875, China
Received October 2018 Revised January 2019 Published April 2019
Fund Project: The author was partly supported by grant from the NNSF of China (No.11371056)
We present the singular Hardy-Trudinger-Moser inequality and the existence of their extremal functions on the unit disc
$ B $
$ \mathbb{R}^2 $
. As our first main result, we show that for any
$ 0<t<2 $
$ u \in C_0^\infty({B}) $
$ \int_{{B}}|\nabla u|^2 dx- \int_{{B}}\frac{u^2}{(1-|x|^2)^2}dx\leq1, $
there exists a constant
$ C_{0}>0 $
such that the following inequality holds
$ \int_{{B}}\frac{e^{4\pi(1-t/2)u^2}}{|x|^t} dx\leq C_{0}. $
Furthermore, by the method of blow-up analysis, we establish the existence of extremal functions in a suitable function space. Our results extend those in Wang and Ye [36] from the non-singular case
$ t = 0 $
to the singular case for
Keywords: Singular Hardy-Trudinger-Moser inequality, rearrangement, blow-up analysis.
Mathematics Subject Classification: Primary: 35J50; Secondary: 46E30, 46E35.
Citation: Xumin Wang. Singular Hardy-Trudinger-Moser inequality and the existence of extremals on the unit disc. Communications on Pure & Applied Analysis, 2019, 18 (5) : 2717-2733. doi: 10.3934/cpaa.2019121
D. Adams, A sharp inequality of J. Moser for higher order derivatives, Ann. of Math., 128 (1988), 385-398. doi: 10.2307/1971445. Google Scholar
Adimurthi and K. Sandeep, A singular Moser-Trudinger embedding and its applications, Nolinear Differential Equations Application, 13 (2007), 585-603.Google Scholar
Adimurthi and Y. Yang, An interpolation of Hardy inequality and Trudinger-Moser inequality in $\mathbb{R}^N$ and its applications, Int. Math. Res. Not., 13 (2010), 2394-2426. Google Scholar
L. Carleson and A. Chang, On the existence of an extremal function for an inequality of J. Moser, Bull. Sci. Math., 110 (1986), 113-127. Google Scholar
L. Chen, J. Li, G. Lu and C. Zhang, Sharpened Adams inequality and ground state solutions to the bi-Laplacian equation in $R^4$, Adv. Nonlinear Stud., 18 (2018), 429-452. doi: 10.1515/ans-2018-2020. Google Scholar
M. Dong, N. Lam and G. Lu, Sharp weighted Trudinger-Moser and Caffarelli-Kohn-Nirenberg inequalities and their extremal functions, Nonlinear Anal., 173 (2018), 75-98. doi: 10.1016/j.na.2018.03.006. Google Scholar
M. Dong and G. Lu, Best constants and existence of maximizers for weighted Trudinger-Moser inequalities, Calc. Var. Partial Differential Equations, 55 (2016), Art. 88, 26 pp. doi: 10.1007/s00526-016-1014-7. Google Scholar
Y. Dong and Q. Yang, An interpolation of Hardy inequality and Moser-Trudinger inequality on Riemannian manifolds with negative curvature, Acta. Mathematica Sinica., English Series, 32 (2016), 856-866. doi: 10.1007/s10114-016-5129-8. Google Scholar
M. Flucher, Extremal functions for the Trudinger-Moser inequality in 2 dimensions, Comment. Math. Helv., 67 (1992), 471-497. doi: 10.1007/BF02566514. Google Scholar
N. Lam, Equivalence of sharp Trudinger-Moser-Adams inequalities, Commun. Pure Appl. Anal., 16 (2017), 973-997. doi: 10.3934/cpaa.2017047. Google Scholar
N. Lam and G. Lu, Sharp constants and optimizers for a class of Caffarelli-Kohn-Nirenberg inequalities, Adv. Nonlinear Stud., 17 (2017), 457-480. doi: 10.1515/ans-2017-0012. Google Scholar
N. Lam and G. Lu, Sharp singular Trudinger-Moser-Adams type inequalities with exact growth, Geometric methods in PDE's, 43–80, Springer INdAM Ser., 13, Springer, Cham, 2015. Google Scholar
N. Lam and G. Lu, A new approach to sharp Moser-Trudinger and Adams type inequalities: a rearrangement-free argument, J. Differential Equations, 255 (2013), 298-325. doi: 10.1016/j.jde.2013.04.005. Google Scholar
N. Lam and G. Lu, Sharp Moser-Trudinger inequality on the Heisenberg group at the critical case and applications, Adv. Math., 231 (2012), 3259-3287. doi: 10.1016/j.aim.2012.09.004. Google Scholar
J. Li, G. Lu and Q. Yang, Fourier analysis and optimal Hardy-Adams inequalities on hyperbolic spaces of any even dimension, Adv. Math., 33 (2018), 350-385. doi: 10.1016/j.aim.2018.05.035. Google Scholar
J. Li, G. Lu and M. Zhu, Concentration-compactness principle for Trudinger-Moser inequalities on Heisenberg groups and existence of ground state solutions, Calc. Var. Partial Differential Equations, (2018), 57-84. doi: 10.1007/s00526-018-1352-8. Google Scholar
Y. Li, Trudinger-Moser inequality on compact Riemannian manifolds of dimension two, J. Partial Differential Equations, 14 (2001), 163-192. Google Scholar
Y. Li, Extremal functions for the Moser-Trudinger inequalities on compact Riemannian manifolds, Sci. China Ser. A., 48 (2005), 618-648. doi: 10.1360/04ys0050. Google Scholar
Y. Li, Remarks on the extremal functions for the Moser-Trudinger inequality, Acta Math. Sin. (Engl. Ser.), 22 (2006), 545-550. doi: 10.1007/s10114-005-0568-7. Google Scholar
Y. Li and C. Ndiaye, Extremal functions for Moser-Trudinger type inequality on compact closed 4-manifolds, J. Geom. Anal., 17 (2007), 669-699. doi: 10.1007/BF02937433. Google Scholar
Y. Li and B. Ruf, A sharp Trudinger-Moser type inequality for unbounded domains in $R^n$, Indiana Univ. Math. J., 57 (2008), 451-480. doi: 10.1512/iumj.2008.57.3137. Google Scholar
K. Lin, Extremal functions for Moser's inequality, Trans. Amer. Math. Soc., 348 (1996), 2663-2671. doi: 10.1090/S0002-9947-96-01541-3. Google Scholar
G. Lu and H. Tang, Best constants for Moser-Trudinger inequalities on high dimensional hyperbolic spaces, Adv. Nonlinear Stud., 13 (2013), 1035-1052. doi: 10.1515/ans-2013-0415. Google Scholar
G. Lu and H. Tang, Sharp Moser-Trudinger inequalities on hyperbolic spaces with exact growth condition, J. Geom. Anal., 26 (2016), 837-857. doi: 10.1007/s12220-015-9573-y. Google Scholar
G. Lu and Q. Yang, A sharp Trudinger-Moser inequality on any bounded and convex plannar domain, Calc. Var. Partial Differential Equations, 55 (2016). doi: 10.1007/s00526-016-1077-5. Google Scholar
G. Lu and Q. Yang, Sharp Hardy-Adams inequalities for bi-laplacian on hyperbolic space of dimension four, Advances in Mathematics, 319 (2017), 567-598. doi: 10.1016/j.aim.2017.08.014. Google Scholar
G. Lu and Q. Yang, Paneitz operators on hyperbolic spaces and higher order Hardy-Sobolev-Maz'ya inequalities on half spaces, Amer. J. Math., to appear.Google Scholar
G. Lu and Y. Yang, Adams' inequalities for bi-Laplacian and extremal functions in dimension four, Adv. Math., 220 (2009), 1135-1170. doi: 10.1016/j.aim.2008.10.011. Google Scholar
G. Lu and Y. Yang, Sharp constant and extremal function for the improved Moser-Trudinger inequality involving $L^p$ norm in two dimension, Discrete Contin. Dyn. Syst., 25 (2009), 963-979. doi: 10.3934/dcds.2009.25.963. Google Scholar
G. Lu and M. Zhu, A sharp Trudinger-Moser type inequality involving $L ^n$ norm in the entire space $\mathbb{R}^n$.Google Scholar
G. Mancini and K. Sandeep, Moser-Trudinger inequality on conformal discs, Commum. Contemp. Math., 12 (2010), 1055-1068. doi: 10.1142/S0219199710004111. Google Scholar
G. Mancini, K. Sandeep and K. Tintarev, Trudinger-Moser inequality in the hyperbolic spaces $\mathbb{H}^N$, Adv. Nonlinear Anal., 2 (2013), 309-324. doi: 10.1515/anona-2013-0001. Google Scholar
J. Moser, A sharp form of an inequality by N. Trudinger, Indiana Univ. Math. J., 20 (1971), 1077-1092. doi: 10.1512/iumj.1971.20.20101. Google Scholar
S. I. Pohozaev, The Sobolev embedding in the case pl = n, Proceeding of the Technical Scientific Conference on Advances of Scientific Research, 1964–1965. Mathematics Section, Moskov. Energet. Inst., (1965), 158–170.Google Scholar
N. S. Trudinger, On embeddings into Orlicz spaces and some applications, J. Math. Mech., 17 (1967), 473-484. doi: 10.1512/iumj.1968.17.17028. Google Scholar
G. Wang and D. Ye, A Hardy-Moser-Trudinger inequality, Adv. Math., 230 (2012), 294-320. doi: 10.1016/j.aim.2011.12.001. Google Scholar
Q. Yang, D. Su and Y. Kong, Sharp Moser-Trudinger inequalities on Riemannian manifolds with negative curvature, Annali di Matematica Pura ed Applicata, 195 (2016), 459-471. doi: 10.1007/s10231-015-0472-4. Google Scholar
V. I. Yudovich, Some estimates connected with integral operators and with solutions of elliptic equations, Sov. Math. Docl., 2 (1961), 746-749. Google Scholar
C. Zhang and L. Chen, Concentration-compactness principle of singular Trudinger-Moser inequalities in $R^n$ and n-Laplace equations, Adv. Nonlinear Stud., 18 (2018), 567-585. doi: 10.1515/ans-2017-6041. Google Scholar
Xiaobao Zhu. Remarks on singular trudinger-moser type inequalities. Communications on Pure & Applied Analysis, 2020, 19 (1) : 103-112. doi: 10.3934/cpaa.2020006
Changliang Zhou, Chunqin Zhou. Extremal functions of Moser-Trudinger inequality involving Finsler-Laplacian. Communications on Pure & Applied Analysis, 2018, 17 (6) : 2309-2328. doi: 10.3934/cpaa.2018110
Prosenjit Roy. On attainability of Moser-Trudinger inequality with logarithmic weights in higher dimensions. Discrete & Continuous Dynamical Systems - A, 2019, 39 (9) : 5207-5222. doi: 10.3934/dcds.2019212
Tomasz Cieślak. Trudinger-Moser type inequality for radially symmetric functions in a ring and applications to Keller-Segel in a ring. Discrete & Continuous Dynamical Systems - B, 2013, 18 (10) : 2505-2512. doi: 10.3934/dcdsb.2013.18.2505
Anouar Bahrouni. Trudinger-Moser type inequality and existence of solution for perturbed non-local elliptic operators with exponential nonlinearity. Communications on Pure & Applied Analysis, 2017, 16 (1) : 243-252. doi: 10.3934/cpaa.2017011
Guozhen Lu, Yunyan Yang. Sharp constant and extremal function for the improved Moser-Trudinger inequality involving $L^p$ norm in two dimension. Discrete & Continuous Dynamical Systems - A, 2009, 25 (3) : 963-979. doi: 10.3934/dcds.2009.25.963
Zhijun Zhang. Boundary blow-up for elliptic problems involving exponential nonlinearities with nonlinear gradient terms and singular weights. Communications on Pure & Applied Analysis, 2007, 6 (2) : 521-529. doi: 10.3934/cpaa.2007.6.521
Hua Chen, Nian Liu. Asymptotic stability and blow-up of solutions for semi-linear edge-degenerate parabolic equations with singular potentials. Discrete & Continuous Dynamical Systems - A, 2016, 36 (2) : 661-682. doi: 10.3934/dcds.2016.36.661
Philippe Souplet, Juan-Luis Vázquez. Stabilization towards a singular steady state with gradient blow-up for a diffusion-convection problem. Discrete & Continuous Dynamical Systems - A, 2006, 14 (1) : 221-234. doi: 10.3934/dcds.2006.14.221
Haitao Yang, Yibin Chang. On the blow-up boundary solutions of the Monge -Ampére equation with singular weights. Communications on Pure & Applied Analysis, 2012, 11 (2) : 697-708. doi: 10.3934/cpaa.2012.11.697
C. Y. Chan. Recent advances in quenching and blow-up of solutions. Conference Publications, 2001, 2001 (Special) : 88-95. doi: 10.3934/proc.2001.2001.88
Marina Chugunova, Chiu-Yen Kao, Sarun Seepun. On the Benilov-Vynnycky blow-up problem. Discrete & Continuous Dynamical Systems - B, 2015, 20 (5) : 1443-1460. doi: 10.3934/dcdsb.2015.20.1443
Alberto Bressan, Massimo Fonte. On the blow-up for a discrete Boltzmann equation in the plane. Discrete & Continuous Dynamical Systems - A, 2005, 13 (1) : 1-12. doi: 10.3934/dcds.2005.13.1
Marek Fila, Hiroshi Matano. Connecting equilibria by blow-up solutions. Discrete & Continuous Dynamical Systems - A, 2000, 6 (1) : 155-164. doi: 10.3934/dcds.2000.6.155
Victor A. Galaktionov, Juan-Luis Vázquez. The problem Of blow-up in nonlinear parabolic equations. Discrete & Continuous Dynamical Systems - A, 2002, 8 (2) : 399-433. doi: 10.3934/dcds.2002.8.399
W. Edward Olmstead, Colleen M. Kirk, Catherine A. Roberts. Blow-up in a subdiffusive medium with advection. Discrete & Continuous Dynamical Systems - A, 2010, 28 (4) : 1655-1667. doi: 10.3934/dcds.2010.28.1655
Yukihiro Seki. A remark on blow-up at space infinity. Conference Publications, 2009, 2009 (Special) : 691-696. doi: 10.3934/proc.2009.2009.691
Petri Juutinen. Convexity of solutions to boundary blow-up problems. Communications on Pure & Applied Analysis, 2013, 12 (5) : 2267-2275. doi: 10.3934/cpaa.2013.12.2267
Guangyu Xu, Jun Zhou. Global existence and blow-up of solutions to a singular Non-Newton polytropic filtration equation with critical and supercritical initial energy. Communications on Pure & Applied Analysis, 2018, 17 (5) : 1805-1820. doi: 10.3934/cpaa.2018086
José M. Arrieta, Raúl Ferreira, Arturo de Pablo, Julio D. Rossi. Stability of the blow-up time and the blow-up set under perturbations. Discrete & Continuous Dynamical Systems - A, 2010, 26 (1) : 43-61. doi: 10.3934/dcds.2010.26.43
Xumin Wang
|
CommonCrawl
|
Title: Archimedes
Subject: Euclidean geometry, History of physics, History of geometry, 210s BC, Hipparchus
Collection: 210S Bc Deaths, 212 Bc, 212 Bc Deaths, 280S Bc Births, 287 Bc, 287 Bc Births, 3Rd-Century Bc Greek People, 3Rd-Century Bc Writers, Ancient Greek Engineers, Ancient Greek Inventors, Ancient Greek Mathematicians, Ancient Greek Physicists, Ancient Greeks Who Were Murdered, Ancient Syracusians, Archimedes, Buoyancy, Doric Greek Writers, Fluid Dynamicists, Geometers, Hellenistic-Era Philosophers, Murdered Scientists, People from Syracuse, Sicily, Sicilian Greeks, Sicilian Mathematicians, Sicilian Scientists
Archimedes of Syracuse
Archimedes Thoughtful by Fetti (1620)
Ἀρχιμήδης
c. 287 BC
Syracuse, Sicily
c. 212 BC (aged around 75)
Archimedes' screw
hydrostatics
infinitesimals
Archimedes of Syracuse (;[1] Greek: Ἀρχιμήδης; c. 287 BC – c. 212 BC) was an Ancient Greek mathematician, physicist, engineer, inventor, and astronomer.[2] Although few details of his life are known, he is regarded as one of the leading scientists in classical antiquity. Generally considered the greatest mathematician of antiquity and one of the greatest of all time,[3][4] Archimedes anticipated modern calculus and analysis by applying concepts of infinitesimals and the method of exhaustion to derive and rigorously prove a range of geometrical theorems, including the area of a circle, the surface area and volume of a sphere, and the area under a parabola.[5]
Other mathematical achievements include deriving an accurate approximation of pi, defining and investigating the spiral bearing his name, and creating a system using exponentiation for expressing very large numbers. He was also one of the first to apply mathematics to physical phenomena, founding hydrostatics and statics, including an explanation of the principle of the lever. He is credited with designing innovative machines, such as his screw pump, compound pulleys, and defensive war machines to protect his native Syracuse from invasion.
Archimedes died during the Siege of Syracuse when he was killed by a Roman soldier despite orders that he should not be harmed. Cicero describes visiting the tomb of Archimedes, which was surmounted by a sphere and a cylinder, which Archimedes had requested to be placed on his tomb, representing his mathematical discoveries.
Unlike his inventions, the mathematical writings of Archimedes were little known in antiquity. Mathematicians from Alexandria read and quoted him, but the first comprehensive compilation was not made until c. 530 AD by Isidore of Miletus in Byzantine Constantinople, while commentaries on the works of Archimedes written by Eutocius in the sixth century AD opened them to wider readership for the first time. The relatively few copies of Archimedes' written work that survived through the Middle Ages were an influential source of ideas for scientists during the Renaissance,[6] while the discovery in 1906 of previously unknown works by Archimedes in the Archimedes Palimpsest has provided new insights into how he obtained mathematical results.[7]
Discoveries and inventions 2
Archimedes' principle 2.1
Archimedes' screw 2.2
Claw of Archimedes 2.3
Heat ray 2.4
Other discoveries and inventions 2.5
Writings 4
Surviving works 4.1
Apocryphal works 4.2
Archimedes Palimpsest 5
Further reading 10
The Works of Archimedes online 10.1
Archimedes was born c. 287 BC in the seaport city of Syracuse, Sicily, at that time a self-governing colony in Magna Graecia, located along the coast of Southern Italy. The date of birth is based on a statement by the Byzantine Greek historian John Tzetzes that Archimedes lived for 75 years.[8] In The Sand Reckoner, Archimedes gives his father's name as Phidias, an astronomer about whom nothing is known. Plutarch wrote in his Parallel Lives that Archimedes was related to King Hiero II, the ruler of Syracuse.[9] A biography of Archimedes was written by his friend Heracleides but this work has been lost, leaving the details of his life obscure.[10] It is unknown, for instance, whether he ever married or had children. During his youth, Archimedes may have studied in Alexandria, Egypt, where Conon of Samos and Eratosthenes of Cyrene were contemporaries. He referred to Conon of Samos as his friend, while two of his works (The Method of Mechanical Theorems and the Cattle Problem) have introductions addressed to Eratosthenes.[a]
Archimedes died c. 212 BC during the Second Punic War, when Roman forces under General Marcus Claudius Marcellus captured the city of Syracuse after a two-year-long siege. According to the popular account given by Plutarch, Archimedes was contemplating a mathematical diagram when the city was captured. A Roman soldier commanded him to come and meet General Marcellus but he declined, saying that he had to finish working on the problem. The soldier was enraged by this, and killed Archimedes with his sword. Plutarch also gives a lesser-known account of the death of Archimedes which suggests that he may have been killed while attempting to surrender to a Roman soldier. According to this story, Archimedes was carrying mathematical instruments, and was killed because the soldier thought that they were valuable items. General Marcellus was reportedly angered by the death of Archimedes, as he considered him a valuable scientific asset and had ordered that he not be harmed.[11] Marcellus called Archimedes "a geometrical Briareus".[12]
Cicero Discovering the Tomb of Archimedes by Benjamin West (1805)
The last words attributed to Archimedes are "Do not disturb my circles", a reference to the circles in the mathematical drawing that he was supposedly studying when disturbed by the Roman soldier. This quote is often given in Latin as "Noli turbare circulos meos," but there is no reliable evidence that Archimedes uttered these words and they do not appear in the account given by Plutarch. Valerius Maximus, writing in Memorable Doings and Sayings in the 1st century AD, gives the phrase as "...sed protecto manibus puluere 'noli' inquit, 'obsecro, istum disturbare'" - "... but protecting the dust with his hands, said 'I beg of you, do not disturb this.'" The phrase is also given in Katharevousa Greek as "μὴ μου τοὺς κύκλους τάραττε!" (Mē mou tous kuklous taratte!).[11]
The tomb of Archimedes carried a sculpture illustrating his favorite mathematical proof, consisting of a sphere and a cylinder of the same height and diameter. Archimedes had proven that the volume and surface area of the sphere are two thirds that of the cylinder including its bases. In 75 BC, 137 years after his death, the Roman orator Cicero was serving as quaestor in Sicily. He had heard stories about the tomb of Archimedes, but none of the locals was able to give him the location. Eventually he found the tomb near the Agrigentine gate in Syracuse, in a neglected condition and overgrown with bushes. Cicero had the tomb cleaned up, and was able to see the carving and read some of the verses that had been added as an inscription.[13] A tomb discovered in the courtyard of the Hotel Panorama in Syracuse in the early 1960s was claimed to be that of Archimedes, but there was no compelling evidence for this and the location of his tomb today is unknown.[14]
The standard versions of the life of Archimedes were written long after his death by the historians of Ancient Rome. The account of the siege of Syracuse given by Polybius in his Universal History was written around seventy years after Archimedes' death, and was used subsequently as a source by Plutarch and Livy. It sheds little light on Archimedes as a person, and focuses on the war machines that he is said to have built in order to defend the city.[15]
Discoveries and inventions
The most widely known Eureka!" (Greek: "εὕρηκα,heúrēka!", meaning "I have found [it]!"). The test was conducted successfully, proving that silver had indeed been mixed in.[18]
The story of the golden crown does not appear in the known works of Archimedes. Moreover, the practicality of the method it describes has been called into question, due to the extreme accuracy with which one would have to measure the water displacement.[19] Archimedes may have instead sought a solution that applied the principle known in hydrostatics as Archimedes' principle, which he describes in his treatise On Floating Bodies. This principle states that a body immersed in a fluid experiences a buoyant force equal to the weight of the fluid it displaces.[20] Using this principle, it would have been possible to compare the density of the golden crown to that of solid gold by balancing the crown on a scale with a gold reference sample, then immersing the apparatus in water. The difference in density between the two samples would cause the scale to tip accordingly. Galileo considered it "probable that this method is the same that Archimedes followed, since, besides being very accurate, it is based on demonstrations found by Archimedes himself."[21] In a 12th-century text titled Mappae clavicula there are instructions on how to perform the weighings in the water in order to calculate the percentage of silver used, and thus solve the problem.[22][23] The Latin poem Carmen de ponderibus et mensuris of the 4th or 5th century describes the use of a hydrostatic balance to solve the problem of the crown, and attributes the method to Archimedes.[22]
The Archimedes' screw can raise water efficiently.
A large part of Archimedes' work in engineering arose from fulfilling the needs of his home city of Syracuse. The Greek writer Athenaeus of Naucratis described how King Hiero II commissioned Archimedes to design a huge ship, the Syracusia, which could be used for luxury travel, carrying supplies, and as a naval warship. The Syracusia is said to have been the largest ship built in classical antiquity.[24] According to Athenaeus, it was capable of carrying 600 people and included garden decorations, a gymnasium and a temple dedicated to the goddess Aphrodite among its facilities. Since a ship of this size would leak a considerable amount of water through the hull, the Archimedes' screw was purportedly developed in order to remove the bilge water. Archimedes' machine was a device with a revolving screw-shaped blade inside a cylinder. It was turned by hand, and could also be used to transfer water from a low-lying body of water into irrigation canals. The Archimedes' screw is still in use today for pumping liquids and granulated solids such as coal and grain. The Archimedes' screw described in Roman times by Vitruvius may have been an improvement on a screw pump that was used to irrigate the Hanging Gardens of Babylon.[25][26][27] The world's first seagoing steamship with a screw propeller was the SS Archimedes, which was launched in 1839 and named in honor of Archimedes and his work on the screw.[28]
Claw of Archimedes
The Claw of Archimedes is a weapon that he is said to have designed in order to defend the city of Syracuse. Also known as "the ship shaker," the claw consisted of a crane-like arm from which a large metal grappling hook was suspended. When the claw was dropped onto an attacking ship the arm would swing upwards, lifting the ship out of the water and possibly sinking it. There have been modern experiments to test the feasibility of the claw, and in 2005 a television documentary entitled Superweapons of the Ancient World built a version of the claw and concluded that it was a workable device.[29][30]
Heat ray
Archimedes may have used mirrors acting collectively as a parabolic reflector to burn ships attacking Syracuse.
Artistic interpretation of Archimedes' mirror used to burn Roman ships. Painting by Giulio Parigi.
Archimedes may have used mirrors acting collectively as a parabolic reflector to burn ships attacking Syracuse. The 2nd century AD author Lucian wrote that during the Siege of Syracuse (c. 214–212 BC), Archimedes destroyed enemy ships with fire. Centuries later, Anthemius of Tralles mentions burning-glasses as Archimedes' weapon.[31] The device, sometimes called the "Archimedes heat ray", was used to focus sunlight onto approaching ships, causing them to catch fire.
This purported weapon has been the subject of ongoing debate about its credibility since the Renaissance. René Descartes rejected it as false, while modern researchers have attempted to recreate the effect using only the means that would have been available to Archimedes.[32] It has been suggested that a large array of highly polished bronze or copper shields acting as mirrors could have been employed to focus sunlight onto a ship. This would have used the principle of the parabolic reflector in a manner similar to a solar furnace.
A test of the Archimedes heat ray was carried out in 1973 by the Greek scientist Ioannis Sakkas. The experiment took place at the Skaramagas naval base outside Athens. On this occasion 70 mirrors were used, each with a copper coating and a size of around five by three feet (1.5 by 1 m). The mirrors were pointed at a plywood mock-up of a Roman warship at a distance of around 160 feet (50 m). When the mirrors were focused accurately, the ship burst into flames within a few seconds. The plywood ship had a coating of tar paint, which may have aided combustion.[33] A coating of tar would have been commonplace on ships in the classical era.[d]
In October 2005 a group of students from the Massachusetts Institute of Technology carried out an experiment with 127 one-foot (30 cm) square mirror tiles, focused on a mock-up wooden ship at a range of around 100 feet (30 m). Flames broke out on a patch of the ship, but only after the sky had been cloudless and the ship had remained stationary for around ten minutes. It was concluded that the device was a feasible weapon under these conditions. The MIT group repeated the experiment for the television show MythBusters, using a wooden fishing boat in San Francisco as the target. Again some charring occurred, along with a small amount of flame. In order to catch fire, wood needs to reach its autoignition temperature, which is around 300 °C (570 °F).[34][35]
When MythBusters broadcast the result of the San Francisco experiment in January 2006, the claim was placed in the category of "busted" (or failed) because of the length of time and the ideal weather conditions required for combustion to occur. It was also pointed out that since Syracuse faces the sea towards the east, the Roman fleet would have had to attack during the morning for optimal gathering of light by the mirrors. MythBusters also pointed out that conventional weaponry, such as flaming arrows or bolts from a catapult, would have been a far easier way of setting a ship on fire at short distances.[36]
In December 2010, MythBusters again looked at the heat ray story in a special edition entitled "President's Challenge". Several experiments were carried out, including a large scale test with 500 schoolchildren aiming mirrors at a mock-up of a Roman sailing ship 400 feet (120 m) away. In all of the experiments, the sail failed to reach the 210 °C (410 °F) required to catch fire, and the verdict was again "busted". The show concluded that a more likely effect of the mirrors would have been blinding, dazzling, or distracting the crew of the ship.[37]
Other discoveries and inventions
While Archimedes did not invent the lever, he gave an explanation of the principle involved in his work On the Equilibrium of Planes. Earlier descriptions of the lever are found in the Peripatetic school of the followers of Aristotle, and are sometimes attributed to Archytas.[38][39] According to Pappus of Alexandria, Archimedes' work on levers caused him to remark: "Give me a place to stand on, and I will move the Earth." (Greek: δῶς μοι πᾶ στῶ καὶ τὰν γᾶν κινάσω)[40] Plutarch describes how Archimedes designed block-and-tackle pulley systems, allowing sailors to use the principle of leverage to lift objects that would otherwise have been too heavy to move.[41] Archimedes has also been credited with improving the power and accuracy of the catapult, and with inventing the odometer during the First Punic War. The odometer was described as a cart with a gear mechanism that dropped a ball into a container after each mile traveled.[42]
Cicero (106–43 BC) mentions Archimedes briefly in his dialogue De re publica, which portrays a fictional conversation taking place in 129 BC. After the capture of Syracuse c. 212 BC, General Marcus Claudius Marcellus is said to have taken back to Rome two mechanisms, constructed by Archimedes and used as aids in astronomy, which showed the motion of the Sun, Moon and five planets. Cicero mentions similar mechanisms designed by Thales of Miletus and Eudoxus of Cnidus. The dialogue says that Marcellus kept one of the devices as his only personal loot from Syracuse, and donated the other to the Temple of Virtue in Rome. Marcellus' mechanism was demonstrated, according to Cicero, by Gaius Sulpicius Gallus to Lucius Furius Philus, who described it thus:
Hanc sphaeram Gallus cum moveret, fiebat ut soli luna totidem conversionibus in aere illo quot diebus in ipso caelo succederet, ex quo et in caelo sphaera solis fieret eadem illa defectio, et incideret luna tum in eam metam quae esset umbra terrae, cum sol e regione. — When Gallus moved the globe, it happened that the Moon followed the Sun by as many turns on that bronze contrivance as in the sky itself, from which also in the sky the Sun's globe became to have that same eclipse, and the Moon came then to that position which was its shadow on the Earth, when the Sun was in line.[43][44]
This is a description of a planetarium or orrery. Pappus of Alexandria stated that Archimedes had written a manuscript (now lost) on the construction of these mechanisms entitled On Sphere-Making. Modern research in this area has been focused on the Antikythera mechanism, another device built c. 100 BC that was probably designed for the same purpose.[45] Constructing mechanisms of this kind would have required a sophisticated knowledge of differential gearing.[46] This was once thought to have been beyond the range of the technology available in ancient times, but the discovery of the Antikythera mechanism in 1902 has confirmed that devices of this kind were known to the ancient Greeks.[47][48]
Archimedes used Pythagoras' Theorem to calculate the side of the 12-gon from that of the hexagon and for each subsequent doubling of the sides of the regular polygon.
While he is often regarded as a designer of mechanical devices, Archimedes also made contributions to the field of mathematics. Plutarch wrote: "He placed his whole affection and ambition in those purer speculations where there can be no reference to the vulgar needs of life."[49] Archimedes was able to use infinitesimals in a way that is similar to modern integral calculus. Through proof by contradiction (reductio ad absurdum), he could give answers to problems to an arbitrary degree of accuracy, while specifying the limits within which the answer lay. This technique is known as the method of exhaustion, and he employed it to approximate the value of π. In Measurement of a Circle he did this by drawing a larger regular hexagon outside a circle and a smaller regular hexagon inside the circle, and progressively doubling the number of sides of each regular polygon, calculating the length of a side of each polygon at each step. As the number of sides increases, it becomes a more accurate approximation of a circle. After four such steps, when the polygons had 96 sides each, he was able to determine that the value of π lay between 31⁄7 (approximately 3.1429) and 310⁄71 (approximately 3.1408), consistent with its actual value of approximately 3.1416.[50] He also proved that the area of a circle was equal to π multiplied by the square of the radius of the circle (πr2). In On the Sphere and Cylinder, Archimedes postulates that any magnitude when added to itself enough times will exceed any given magnitude. This is the Archimedean property of real numbers.[51]
As proven by Archimedes, the area of the parabolic segment in the upper figure is equal to 4/3 that of the inscribed triangle in the lower figure.
In Measurement of a Circle, Archimedes gives the value of the square root of 3 as lying between 265⁄153 (approximately 1.7320261) and 1351⁄780 (approximately 1.7320512). The actual value is approximately 1.7320508, making this a very accurate estimate. He introduced this result without offering any explanation of how he had obtained it. This aspect of the work of Archimedes caused John Wallis to remark that he was: "as it were of set purpose to have covered up the traces of his investigation as if he had grudged posterity the secret of his method of inquiry while he wished to extort from them assent to his results."[52] It is possible that he used an iterative procedure to calculate these values.[53]
In The Quadrature of the Parabola, Archimedes proved that the area enclosed by a parabola and a straight line is 4⁄3 times the area of a corresponding inscribed triangle as shown in the figure at right. He expressed the solution to the problem as an infinite geometric series with the common ratio 1⁄4:
\sum_{n=0}^\infty 4^{-n} = 1 + 4^{-1} + 4^{-2} + 4^{-3} + \cdots = {4\over 3}. \;
If the first term in this series is the area of the triangle, then the second is the sum of the areas of two triangles whose bases are the two smaller secant lines, and so on. This proof uses a variation of the series 1/4 + 1/16 + 1/64 + 1/256 + · · · which sums to 1⁄3.
In The Sand Reckoner, Archimedes set out to calculate the number of grains of sand that the universe could contain. In doing so, he challenged the notion that the number of grains of sand was too large to be counted. He wrote: "There are some, King Gelo (Gelo II, son of Hiero II), who think that the number of the sand is infinite in multitude; and I mean by the sand not only that which exists about Syracuse and the rest of Sicily but also that which is found in every region whether inhabited or uninhabited." To solve the problem, Archimedes devised a system of counting based on the myriad. The word is from the Greek μυριάς murias, for the number 10,000. He proposed a number system using powers of a myriad of myriads (100 million) and concluded that the number of grains of sand required to fill the universe would be 8 vigintillion, or 8×1063.[54]
The works of Archimedes were written in Doric Greek, the dialect of ancient Syracuse.[55] The written work of Archimedes has not survived as well as that of Euclid, and seven of his treatises are known to have existed only through references made to them by other authors. Pappus of Alexandria mentions On Sphere-Making and another work on polyhedra, while Theon of Alexandria quotes a remark about refraction from the now-lost Catoptrica.[b] During his lifetime, Archimedes made his work known through correspondence with the mathematicians in Alexandria. The writings of Archimedes were first collected by the Byzantine Greek architect Isidore of Miletus (c. 530 AD), while commentaries on the works of Archimedes written by Eutocius in the sixth century AD helped to bring his work a wider audience. Archimedes' work was translated into Arabic by Thābit ibn Qurra (836–901 AD), and Latin by Gerard of Cremona (c. 1114–1187 AD). During the Renaissance, the Editio Princeps (First Edition) was published in Basel in 1544 by Johann Herwagen with the works of Archimedes in Greek and Latin.[56] Around the year 1586 Galileo Galilei invented a hydrostatic balance for weighing metals in air and water after apparently being inspired by the work of Archimedes.[57]
Surviving works
On the Equilibrium of Planes (two volumes)
The first book is in fifteen propositions with seven postulates, while the second book is in ten propositions. In this work Archimedes explains the Law of the Lever, stating, "Magnitudes are in equilibrium at distances reciprocally proportional to their weights."
Archimedes uses the principles derived to calculate the areas and centers of gravity of various geometric figures including triangles, parallelograms and parabolas.[58]
On the Measurement of a Circle
This is a short work consisting of three propositions. It is written in the form of a correspondence with Dositheus of Pelusium, who was a student of Conon of Samos. In Proposition II, Archimedes gives an approximation of the value of pi (π), showing that it is greater than 223⁄71 and less than 22⁄7.
On Spirals
This work of 28 propositions is also addressed to Dositheus. The treatise defines what is now called the Archimedean spiral. It is the locus of points corresponding to the locations over time of a point moving away from a fixed point with a constant speed along a line which rotates with constant angular velocity. Equivalently, in polar coordinates (r, θ) it can be described by the equation
\, r=a+b\theta
with real numbers a and b. This is an early example of a mechanical curve (a curve traced by a moving point) considered by a Greek mathematician.
On the Sphere and the Cylinder (two volumes)
A sphere has 2/3 the volume and surface area of its circumscribing cylinder including its bases. A sphere and cylinder were placed on the tomb of Archimedes at his request. (see also: Equiareal map)
In this treatise addressed to Dositheus, Archimedes obtains the result of which he was most proud, namely the relationship between a sphere and a circumscribed cylinder of the same height and diameter. The volume is 4⁄3πr3 for the sphere, and 2πr3 for the cylinder. The surface area is 4πr2 for the sphere, and 6πr2 for the cylinder (including its two bases), where r is the radius of the sphere and cylinder. The sphere has a volume two-thirds that of the circumscribed cylinder. Similarly, the sphere has an area two-thirds that of the cylinder (including the bases). A sculpted sphere and cylinder were placed on the tomb of Archimedes at his request.
On Conoids and Spheroids
This is a work in 32 propositions addressed to Dositheus. In this treatise Archimedes calculates the areas and volumes of sections of cones, spheres, and paraboloids.
On Floating Bodies (two volumes)
In the first part of this treatise, Archimedes spells out the law of equilibrium of fluids, and proves that water will adopt a spherical form around a center of gravity. This may have been an attempt at explaining the theory of contemporary Greek astronomers such as Eratosthenes that the Earth is round. The fluids described by Archimedes are not self-gravitating, since he assumes the existence of a point towards which all things fall in order to derive the spherical shape.
In the second part, he calculates the equilibrium positions of sections of paraboloids. This was probably an idealization of the shapes of ships' hulls. Some of his sections float with the base under water and the summit above water, similar to the way that icebergs float. Archimedes' principle of buoyancy is given in the work, stated as follows:
Any body wholly or partially immersed in a fluid experiences an upthrust equal to, but opposite in sense to, the weight of the fluid displaced.
The Quadrature of the Parabola
In this work of 24 propositions addressed to Dositheus, Archimedes proves by two methods that the area enclosed by a parabola and a straight line is 4/3 multiplied by the area of a triangle with equal base and height. He achieves this by calculating the value of a geometric series that sums to infinity with the ratio 1⁄4.
Stomachion is a dissection puzzle in the Archimedes Palimpsest.
(O)stomachion
This is a dissection puzzle similar to a Tangram, and the treatise describing it was found in more complete form in the Archimedes Palimpsest. Archimedes calculates the areas of the 14 pieces which can be assembled to form a square. Research published by Dr. Reviel Netz of Stanford University in 2003 argued that Archimedes was attempting to determine how many ways the pieces could be assembled into the shape of a square. Dr. Netz calculates that the pieces can be made into a square 17,152 ways.[59] The number of arrangements is 536 when solutions that are equivalent by rotation and reflection have been excluded.[60] The puzzle represents an example of an early problem in combinatorics.
The origin of the puzzle's name is unclear, and it has been suggested that it is taken from the Ancient Greek word for throat or gullet, stomachos (στόμαχος).[61] Ausonius refers to the puzzle as Ostomachion, a Greek compound word formed from the roots of ὀστέον (osteon, bone) and μάχη (machē – fight). The puzzle is also known as the Loculus of Archimedes or Archimedes' Box.[62]
Archimedes' cattle problem
This work was discovered by Gotthold Ephraim Lessing in a Greek manuscript consisting of a poem of 44 lines, in the Herzog August Library in Wolfenbüttel, Germany in 1773. It is addressed to Eratosthenes and the mathematicians in Alexandria. Archimedes challenges them to count the numbers of cattle in the Herd of the Sun by solving a number of simultaneous Diophantine equations. There is a more difficult version of the problem in which some of the answers are required to be square numbers. This version of the problem was first solved by A. Amthor[63] in 1880, and the answer is a very large number, approximately 7.760271×10206544.[64]
The Sand Reckoner
In this treatise, Archimedes counts the number of grains of sand that will fit inside the universe. This book mentions the heliocentric theory of the solar system proposed by Aristarchus of Samos, as well as contemporary ideas about the size of the Earth and the distance between various celestial bodies. By using a system of numbers based on powers of the myriad, Archimedes concludes that the number of grains of sand required to fill the universe is 8×1063 in modern notation. The introductory letter states that Archimedes' father was an astronomer named Phidias. The Sand Reckoner or Psammites is the only surviving work in which Archimedes discusses his views on astronomy.[65]
The Method of Mechanical Theorems
This treatise was thought lost until the discovery of the Archimedes Palimpsest in 1906. In this work Archimedes uses infinitesimals, and shows how breaking up a figure into an infinite number of infinitely small parts can be used to determine its area or volume. Archimedes may have considered this method lacking in formal rigor, so he also used the method of exhaustion to derive the results. As with The Cattle Problem, The Method of Mechanical Theorems was written in the form of a letter to Eratosthenes in Alexandria.
Apocryphal works
Archimedes' Book of Lemmas or Liber Assumptorum is a treatise with fifteen propositions on the nature of circles. The earliest known copy of the text is in Arabic. The scholars T. L. Heath and Marshall Clagett argued that it cannot have been written by Archimedes in its current form, since it quotes Archimedes, suggesting modification by another author. The Lemmas may be based on an earlier work by Archimedes that is now lost.[66]
It has also been claimed that Heron's formula for calculating the area of a triangle from the length of its sides was known to Archimedes.[c] However, the first reliable reference to the formula is given by Heron of Alexandria in the 1st century AD.[67]
Archimedes Palimpsest
In 1906, The Archimedes Palimpsest revealed works by Archimedes thought to have been lost.
The foremost document containing the work of Archimedes is the Archimedes Palimpsest. In 1906, the Danish professor Johan Ludvig Heiberg visited Constantinople and examined a 174-page goatskin parchment of prayers written in the 13th century AD. He discovered that it was a palimpsest, a document with text that had been written over an erased older work. Palimpsests were created by scraping the ink from existing works and reusing them, which was a common practice in the Middle Ages as vellum was expensive. The older works in the palimpsest were identified by scholars as 10th century AD copies of previously unknown treatises by Archimedes.[68] The parchment spent hundreds of years in a monastery library in Constantinople before being sold to a private collector in the 1920s. On October 29, 1998 it was sold at auction to an anonymous buyer for $2 million at Christie's in New York.[69] The palimpsest holds seven treatises, including the only surviving copy of On Floating Bodies in the original Greek. It is the only known source of The Method of Mechanical Theorems, referred to by Suidas and thought to have been lost forever. Stomachion was also discovered in the palimpsest, with a more complete analysis of the puzzle than had been found in previous texts. The palimpsest is now stored at the Walters Art Museum in Baltimore, Maryland, where it has been subjected to a range of modern tests including the use of ultraviolet and x-ray light to read the overwritten text.[70]
The treatises in the Archimedes Palimpsest are: On the Equilibrium of Planes, On Spirals, Measurement of a Circle, On the Sphere and the Cylinder, On Floating Bodies, The Method of Mechanical Theorems and Stomachion.
The Fields Medal carries a portrait of Archimedes.
Galileo praised Archimedes many times, and referred to him as a "superhuman".[71] Leibniz said "He who understands Archimedes and Apollonius will admire less the achievements of the foremost men of later times."[72]
There is a crater on the Moon named Archimedes (29.7° N, 4.0° W) in his honor, as well as a lunar mountain range, the Montes Archimedes (25.3° N, 4.6° W).[73]
The asteroid 3600 Archimedes is named after him.[74]
The Fields Medal for outstanding achievement in mathematics carries a portrait of Archimedes, along with a carving illustrating his proof on the sphere and the cylinder. The inscription around the head of Archimedes is a quote attributed to him which reads in Latin: "Transire suum pectus mundoque potiri" (Rise above oneself and grasp the world).[75]
Archimedes has appeared on postage stamps issued by East Germany (1973), Greece (1983), Italy (1983), Nicaragua (1971), San Marino (1982), and Spain (1963).[76]
The exclamation of Eureka! attributed to Archimedes is the state motto of California. In this instance the word refers to the discovery of gold near Sutter's Mill in 1848 which sparked the California Gold Rush.[77]
Arbelos
Archimedes' axiom
Archimedes number
Archimedes paradox
Archimedes principle of buoyancy
Archimedean solid
Archimedes' twin circles
Archimedes' use of infinitesimals
Archytas
Diocles
List of things named after Archimedes
Methods of computing square roots
Pseudo-Archimedes
Salinon
Steam cannon
Syracusia
Zhang Heng
a. ^ In the preface to On Spirals addressed to Dositheus of Pelusium, Archimedes says that "many years have elapsed since Conon's death." Conon of Samos lived c. 280–220 BC, suggesting that Archimedes may have been an older man when writing some of his works.
b. ^ The treatises by Archimedes known to exist only through references in the works of other authors are: On Sphere-Making and a work on polyhedra mentioned by Pappus of Alexandria; Catoptrica, a work on optics mentioned by Theon of Alexandria; Principles, addressed to Zeuxippus and explaining the number system used in The Sand Reckoner; On Balances and Levers; On Centers of Gravity; On the Calendar. Of the surviving works by Archimedes, T. L. Heath offers the following suggestion as to the order in which they were written: On the Equilibrium of Planes I, The Quadrature of the Parabola, On the Equilibrium of Planes II, On the Sphere and the Cylinder I, II, On Spirals, On Conoids and Spheroids, On Floating Bodies I, II, On the Measurement of a Circle, The Sand Reckoner.
c. ^ Boyer, Carl Benjamin A History of Mathematics (1991) ISBN 0-471-54397-7 "Arabic scholars inform us that the familiar area formula for a triangle in terms of its three sides, usually known as Heron's formula — k = √(s(s − a)(s − b)(s − c)), where s is the semiperimeter — was known to Archimedes several centuries before Heron lived. Arabic scholars also attribute to Archimedes the 'theorem on the broken chord' ... Archimedes is reported by the Arabs to have given several proofs of the theorem."
d. ^ "It was usual to smear the seams or even the whole hull with pitch or with pitch and wax". In Νεκρικοὶ Διάλογοι (Dialogues of the Dead), Lucian refers to coating the seams of a skiff with wax, a reference to pitch (tar) or wax.[78]
^ Heath, T. L., Works of Archimedes, 1897
^ Mary Jaeger. Archimedes and the Roman Imagination, p. 113.
^ a b O. A. W. Dilke. Gnomon. 62. Bd., H. 8 (1990), pp. 697-699 Published by: Verlag C.H.Beck
^ Marcel Berthelot - Sur l histoire de la balance hydrostatique et de quelques autres appareils et procédés scientifiques, Annales de Chimie et de Physique [série 6], 23 / 1891, pp. 475-485
^ An animation of an Archimedes' screw
^ Hippias, 2 (cf. Galen, On temperaments 3.2, who mentions pyreia, "torches"); Anthemius of Tralles, On miraculous engines 153 [Westerman].
^ Fuels and Chemicals – Auto Ignition Temperatures
^ Quoted by Pappus of Alexandria in Synagoge, Book VIII
^ Quoted in Heath, T. L. Works of Archimedes, Dover Publications, ISBN 0-486-42084-1.
^ Encyclopedia of ancient Greece By Wilson, Nigel Guy p. 77 ISBN 0-7945-0225-3 (2006)
^ Krumbiegel, B. and Amthor, A. Das Problema Bovinum des Archimedes, Historisch-literarische Abteilung der Zeitschrift Für Mathematik und Physik 25 (1880) pp. 121–136, 153–171.
^ Michael Matthews. Time for Science Education: How Teaching the History and Philosophy of Pendulum Motion Can Contribute to Science Literacy, p. 96.
^ Carl B. Boyer, Uta C. Merzbach. A History of Mathematics, chapter 7.
Republished translation of the 1938 study of Archimedes and his works by an historian of science.
Complete works of Archimedes in English.
The Works of Archimedes online
Text in Classical Greek: PDF scans of Heiberg's edition of the Works of Archimedes, now in the public domain
In English translation: The Works of Archimedes, trans. T.L. Heath; supplemented by The Method of Mechanical Theorems, trans. L.G. Robinson
Archimedes on In Our Time at the BBC. (listen now)
Works by Archimedes at Project Gutenberg
Works by or about Archimedes at Internet Archive
Archimedes at the Indiana Philosophy Ontology Project
Archimedes at PhilPapers
The Archimedes Palimpsest project at The Walters Art Museum in Baltimore, Maryland
The Mathematical Achievements and Methodologies of Archimedes
"Archimedes and the Square Root of 3" at MathPages.com.
"Archimedes on Spheres and Cylinders" at MathPages.com.
Photograph of the Sakkas experiment in 1973
Testing the Archimedes steam cannon
Stamps of Archimedes
Eureka! 1,000-year-old text by Greek maths genius Archimedes goes on display Daily Mail, October 18, 2011.
Ancient Greek mathematics
Anthemius
Aristaeus the Elder
Aristarchus
Apollonius
Callippus
Chrysippus
Ctesibius
Dicaearchus
Dinostratus
Dionysodorus
Domninus
Eudemus
Eudoxus
Geminus
Hippasus
Hippias
Hypsicles
Isidore of Miletus
Menaechmus
Menelaus
Nicomachus
Nicomedes
Nicoteles
Oenopides
Pappus
Philolaus
Philon
Posidonius
Serenus
Sosigenes
Sporus
Theaetetus
Theon of Alexandria
Theon of Smyrna
Thymaridas
Xenocrates
Zeno of Elea
Zeno of Sidon
Zenodorus
Treatises
Almagest
Arithmetica
Conics (Apollonius)
Elements (Euclid)
On the Sizes and Distances (Aristarchus)
On Sizes and Distances (Hipparchus)
On the Moving Sphere (Autolycus)
Problem of Apollonius
Squaring the circle
Doubling the cube
Angle trisection
Cyrene
Platonic Academy
Timeline of Ancient Greek mathematicians
Cycladic civilization
Minoan civilization
Mycenaean civilization
Greek Dark Ages
Archaic period
Hellenistic Greece
Hellespont
Peloponnesus
Ancient Greek colonies
Boeotarch
Koinon
Proxeny
Strategos
Tagus
Amphictyonic League
Areopagus
Graphē paranómōn
Heliaia
Gerousia
Harmost
Antigonid Macedonian army
Army of Macedon
Cretan archers
Hellenistic armies
Hippeis
Hetairoi
Macedonian phalanx
Phalanx formation
Peltast
Pezhetairos
Sarissa
Sacred Band of Thebes
Sciritae
Seleucid army
Spartan army
Toxotai
Xyston
List of ancient Greeks
Kings of Argos
Archons of Athens
Kings of Athens
Kings of Commagene
Diadochi
Kings of Lydia
Kings of Macedonia
Kings of Paionia
Attalid kings of Pergamon
Kings of Pontus
Kings of Sparta
Tyrants of Syracuse
Anaximander
Leucippus
Protagoras
Alcaeus
Bacchylides
Hipponax
Ibycus
Mimnermus
Timocreon
Tyrtaeus
Agesilaus II
Agis II
Alcibiades
Aratus
Epaminondas
Lycurgus
Milo of Croton
Philip of Macedon
Philopoemen
Praxiteles
Ancient Greek tribes
Thracian Greeks
Ancient Macedonians
Funeral and burial practices
Pederasty
Architecture (Greek Revival architecture)
Music (Musical system)
mythological figures
Twelve Olympians
Dodona
Athenian Treasury
Lion Gate
Long Walls
Philippeion
Theatre of Dionysus
Tunnel of Eupalinos
Aphaea
Athena Nike
Erechtheion
Hera (Olympia)
Zeus (Olympia)
Mycenaean
Homeric
Aeolic
Arcadocypriot
Pamphylian
Koine
Linear A
Linear B
Cypriot syllabary
Greek numerals
Attic numerals
in Epirus
Sroae
WorldHeritage pages with incorrect protection templates
Articles containing Ancient Greek-language text
Spoken articles
3rd-century BC Greek people
3rd-century BC writers
People from Syracuse, Sicily
Ancient Greek engineers
Ancient Greek inventors
Ancient Greek mathematicians
Ancient Greek physicists
Hellenistic-era philosophers
Doric Greek writers
Sicilian Greeks
Sicilian mathematicians
Sicilian scientists
Murdered scientists
Geometers
Ancient Greeks who were murdered
Ancient Syracusians
Fluid dynamicists
210s BC deaths
Sicily, Province of Syracuse, Catania, Gela, Unesco
Biography, Alexander the Great, Julius Caesar, Ethics, Greek mythology
Egypt, Ancient Egypt, Rome, Armenia, Governorates of Egypt
Calabria, Italy, Apulia, Ancient Rome, Greeks
Greek alphabet, Greece, Cyprus, Armenia, Christianity
Euclid, Non-Euclidean geometry, Archimedes, Analytic geometry, Set theory
Isaac Newton, Nobel Prize in Physics, Solar system, Energy, Galileo Galilei
History of geometry
Euclid, Isaac Newton, Pi, Algebraic geometry, Archimedes
210s BC
Roman Republic, Hannibal, Spain, Carthage, Ancient Egypt
Ptolemy, Archimedes, Almagest, Orbit, Pi
|
CommonCrawl
|
Why is the concept of the Null hypothesis associated with the student's t distribution?
There are dozens of continuous probability distributions like Gaussian (normal), Variance-gamma, Holtsmark, etc. Yet, the concept of the Null hypothesis is basically associated with Student's t-distribution. Any idea why. Tanks
probability hypothesis-testing distributions t-test p-value
$\begingroup$ what do you mean by "basically associated?" Are you asking why the t distribution appears to be the most common null distribution in hypothesis testing? The answer might be opinion-based. $\endgroup$ – Taylor Jun 10 '19 at 23:28
$\begingroup$ Yes, I am wondering why t-test is the most used to study the hypothesis test. $\endgroup$ – Ahmed Jun 10 '19 at 23:52
$\begingroup$ It isn't. Since hypothesis testing is taught immediately after the t-test in Stats 101, it is usually the first time the "null hypothesis" is taught. And for more than a few people it is also the last time they see it. $\endgroup$ – Peter Leopold Jun 11 '19 at 2:04
There are a dozen of continues probability distributions
There are an infinite number of continuous probability distributions. The ones that have been discussed enough to be named and included in the space of a couple of pages are nevertheless sufficient to fill numerous books (and indeed they do - see, for example, the many books by Johnson, Kotz and other co-authors).
Yet, the concept of the Null hypothesis is basically associated with Student's t-distribution.
This is not the case. If you take a look at either the writing of Neyman and Pearson or that of Fisher on hypothesis testing (the two main approaches to hypothesis testing), the t-distribution is not a necessary nor in any way a major part of either.
Neither is it "the most used to study the hypothesis test" (if you're studying the theory of hypothesis testing you might well only look at it in passing - perhaps as part of one chapter, for example), but it is one of the first examples of hypothesis tests many students learn about.
There are hundreds of hypothesis tests that are used (at least; more probably well into the thousands) and new ones are easy enough to construct. Some situations you may have heard of include: testing independence in contingency tables, testing multinomial goodness of fit, testing equality of means in one way analysis of variance, testing equality of variance, rank based tests of location, or omnibus tests of distributional goodness of fit. None of these are likely to involve t-distributions (and there are many, many more that you probably haven't heard of).
I'd have said the chi-squared distribution and the normal distribution are much more fundamental to hypothesis testing (in particular, as approximations in large samples), but even there, hypothesis tests would still exist even if they didn't come into it at all.
If you look at the Neyman-Pearson lemma, at Fisher exact tests/permutation/randomization testing, and at bootstrap tests, you might instead wonder if the t-distribution would really come up all that much.
Now a substantial subset of tests that are done in applications do involve the t-distribution, but that's in no way an essential property of null hypotheses.
It occurs for a pretty simple reason - it comes up when dealing with inference (tests and intervals) about sample means of normally distributed population quantities (and some other circumstances) under the case where the population variance is unknown.
Consequently the t-distribution (through one-sample/paired t-tests, two sample t-tests, tests of single regression coefficients, and tests of 0 correlation) may be the bulk of your exposure to hypothesis tests but that's not an overwhelming fraction of hypothesis testing more generally.
...the concept of the null hypothesis is basically associated with Student's t-distribution.
Not really. The null hypothesis is associated with a corresponding null distribution, which varies depending on the model and test statistic. In classical hypothesis tests for unknown linear coefficients or mean values, one generally uses a test statistic that is some kind of studentised mean estimator, and this leads to a null distribution which is the Student's T distribution. In other tests, one obtains a different null distribution. It seems that you are associating the two concepts more strongly than they are actually associated, and then wondering why this is.
Reinstate MonicaReinstate Monica
When we want to test a hypothesis, we need a test statistic with a known probability distribution. This usually involves standardisation of the data. For example, if we collect a random sample $X_1, \dots, X_n$ with mean $\mu$ and variance $\sigma^2$, and the data is assumed to be normally distributed. Then we would standardise it as
$$Z_n = \frac{\bar{X}_n-\mu}{\sigma/\sqrt{n}}$$
$Z_n$ has a standard normal $N(0,1)$ distribution, and so it's values can be used to test whether our hypothesised mean $\mu$ is true. Even if the data is not normal, the central limit theorem says that it will be asymptotically provided the variance exists (ie. $EX^2 < \infty$).
The problem is that while we are normally interested in the mean $\mu$, the variance $\sigma^2$ is also unknown. This is called a nuisance parameter. Thus we need to approximate $Z_n$ by substituting in an estimate for $\sigma^2$, which is the sample variance
$$s^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X}_n)^2$$
But in doing so we have a new test statistic
$$T_n = \frac{\bar{X}_n-\mu_0}{s/\sqrt{n}}$$
This turns out to have $t$ distribution with $n-1$ degrees of freedom if the null hypothesis is true (ie. if the true mean is used). Thus, even though $\sigma^2$ is unknown, we have obtained a test statistic with a well known distribution for which to make inferences.
The reason it follows a $t$ distribution is that the above can be expressed as a normal random variable divided by the square root of an independent chisquared random variable, which gives a $t$ distribution.
$\begingroup$ Thanks for your answer. I am trying to digest the answer, Could you give me a quick example? $\endgroup$ – Ahmed Jun 10 '19 at 23:53
It isn't, but it would probably seem so to a non-statistician who is just learning it while trying to do some basic inference in the context of a science class or something like that. Because the sorts of things in science experiments you want to do inference/hyp testing on have the characteristics of a t-test: the variance is not known, the samples are small, and you are dealing with something that is continuous in nature. A stats student will almost certainly be introduced to a z-test first, through population proportion testing.
The trick is to realize that the transition from z to t test for population mean inference and hypothesis testing comes from the addition of another parameter that needs to be estimated -- the variance -- and in the vast majority of situations you'll encounter the population variance is not known.
I would guess most people who associate hypothesis testing with the T test do so because it's by far the most common one encountered in the sciences and humanities, at least at the lower levels.
epseps
Not the answer you're looking for? Browse other questions tagged probability hypothesis-testing distributions t-test p-value or ask your own question.
Is the p-value still uniformly distributed when the null hypothesis is composite?
Why use Student's t distribution rather than Student's z distribution
Hypothesis testing. Why center the sampling distribution on H0?
Role of p-value in ruling out null hypothesis
What is the rationale behind using the t-distribution?
Distribution of "p-value-like" quantities under null hypothesis
The distribution of the p-values under the null hypothesis is uniform(0, 1)
|
CommonCrawl
|
Charge exchange in the ultraviolet : implication for interacting clouds in the core of NGC 1275
Gu, Liyi and Mao, Junjie and O'Dea, Christopher P. and Baum, Stefi A. and Mehdipour, Missagh and Kaastra, Jelle S. (2017) Charge exchange in the ultraviolet : implication for interacting clouds in the core of NGC 1275. Astronomy and Astrophysics, 601. A45. ISSN 0004-6361 (https://doi.org/10.1051/0004-6361/201730596)
Text. Filename: Gu_etal_AAA_2018_Charge_exchange_in_the_ultraviolet_implication.pdf
Charge exchange emission is known to provide a key diagnostic to the interface between hot and cold matter in many astrophysical environments. Most of the recent charge exchange studies focus on its emission in the X-ray band, but few on the UV part, although the latter can also provide a powerful probe of the charge exchange process. An atomic calculation, as well as an application to observed data, are presented to explore and describe the potential use of the UV data for the study of cosmic charge exchange. Using the newest charge exchange model in the SPEX code v3.03, we re-analyze an archival Hubble STIS data of the central region of NGC 1275. The NGC 1275 spectrum shows hints for three possible weak lines at about 1223.6~{\AA}, 1242.4~{\AA}, and 1244.0~{\AA}, each with a significance of about $2-3\sigma$. The putative features are best explained by charge exchange between highly ionized hydrogen, neon, and sulfur with neutral matter. The wavelengths of the charge exchange lines are found robustly with uncertainties $\leq 0.3$~{\AA}. The possible charge exchange emission shows a line-of-sight velocity offset of about $-3400$ km s$^{-1}$ with respect to the NGC 1275 nucleus, which resembles one of the Ly$\alpha$ absorbers reported in Baum et al. (2005). This indicates that the charge exchange lines might be emitted as the same position of the absorber, which could be ascribed to outflowing gas from the nucleus.
Gu, Liyi, Mao, Junjie ORCID: https://orcid.org/0000-0001-7557-9713, O'Dea, Christopher P., Baum, Stefi A., Mehdipour, Missagh and Kaastra, Jelle S.;
atomic processes, galaxies, X-rays, ultraviolet, Physics, Astronomy and Astrophysics
Science > Physics
Faculty of Science > Physics
|
CommonCrawl
|
•https://doi.org/10.1364/BOE.443652
Real-time calibrating polarization-sensitive diffuse reflectance handheld probe characterizes clinically relevant anatomical locations of oral tissue in vivo
Jianfeng Wang
Jianfeng Wang1,2,*
1Key Laboratory of Photoelectronic Imaging Technology and System of Ministry of Education of China, School of Optics and Photonics, Beijingy Institute of Technology, Beijing 100081, China
2Institute of Engineering Medicine, Beijing Institute of Technology, Beijing 100081, China
*Corresponding author: [email protected]
Jianfeng Wang https://orcid.org/0000-0003-3358-181X
J Wang
Jianfeng Wang, "Real-time calibrating polarization-sensitive diffuse reflectance handheld probe characterizes clinically relevant anatomical locations of oral tissue in vivo," Biomed. Opt. Express 13, 105-116 (2022)
In vivo wide-field reflectance/fluorescence imaging and polarization-sensitive optical coherence...
Yeoreum Yoon, et al.
Biomed. Opt. Express 6(2) 524-535 (2015)
Diffuse reflectance spectroscopy of epithelial tissue with a smart fiber-optic probe
Bing Yu, et al.
Autofluorescence and diffuse reflectance spectroscopy of oral epithelial tissue using a...
Richard A. Schwarz, et al.
Tissue Optics and Spectroscopy
Diffuse reflectance
Hyperspectral imaging
Photoacoustic tomography
Original Manuscript: September 17, 2021
Revised Manuscript: November 15, 2021
Manuscript Accepted: November 29, 2021
We report on the development of a unique real-time calibrating polarization-sensitive diffuse reflectance (rcPS-DR) handheld probe, and demonstrate its diagnostic potential through in-depth characterization and differentiation of clinically relevant anatomical locations of the oral cavity (i.e., alveolar process, lateral tongue and floor of mouth that account for 80% of all cases of oral squamous cell carcinoma) in vivo. With an embedded calibrating polytetrafluoroethylene (PTFE) optical diffuser, the PS-DR spectra bias arising from instrument response, time-dependent intensity fluctuation and fiber bending is calibrated through real-time measurement of the PS-DR system response function. A total of 554 in vivo rcPS-DR spectra were acquired from different oral tissue sites (alveolar process, n = 226, lateral tongue, n = 150 and floor of mouth, n = 178) of 14 normal subjects. Significantly (P<0.05, unpaired 2-sided Student's t-test) different spectral ratio (I540/I575) representing oxygenated hemoglobin contents were found among the alveolar process, lateral tongue and floor of mouth. Further partial least squares discriminant analysis (PLS-DA) and leave-one-out, cross validation (LOOCV) show that, synergizing the complementary information of the two real-time calibrated orthogonal-polarized PS-DR spectra, the rcPS-DR technique is found to better differentiate alveolar process, lateral tongue, and the floor of mouth (accuracies of 88.2%, 83.9%, 84.4%, sensitivities of 80.5%, 75.8%, 78% and specificities of 93.5%, 87.7%, 86.8%) than standard DR (accuracies of 80.8%, 72.9%, 68.5%, sensitivities of 63.2%, 41.5%, 81.3% and specificities of 92.9%, 87.7%, 63.8%) without PS detection. This work showed the feasibility of the rcPS-DR probe as a tool for studying oral cavity lesions in real clinical applications.
© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
With an annual incidence of 377 713 cases and 177 757 deaths, oral cancer is one most common malignancy worldwide [1]. Patients diagnosed with advanced stage oral cancer have a 5-year survival rate of only 30%, while the patient's survival can be improved up to 80% to 90% if the oral cancer can be detected and diagnosed early for appropriate treatments [2]. However, current routine screening of oral cancer relaying on medical, social and familial history documentation, risk factor (i.e., tobacco and alcohol usage) evaluation, visual inspection and palpation were deemed insufficient to detect oral cancer early. Invasive random biopsy followed by H&E histopathology are therefore recommended and remain the gold standard for oral cancer detection, but found with limitations. For instance, the H&E slide preparation may distort the oral tissue features, whilst the slide interpretation is highly subjective and depends heavily on the experiences of the pathologists. In particular, the biopsy strategy may not be suitable for patients having multiple suspicious lesions. Therefore, there exists unmet clinical need to develop more advanced diagnostic techniques for rapid, objective and enhanced diagnosis of cancer and early cancer in the oral cavity.
Several techniques have been developed and implemented to fill the unmet need, including the established imaging modalities (e.g., computed tomography, magnetic resonance imaging, and positron emission tomography etc. [3]), ultrasound imaging [4], together with a range of different optical imaging/spectroscopy techniques (e.g., OCT: optical coherence tomography [5], photoacoustic tomography [6], hyperspectral imaging [7], auto-fluorescence spectroscopy [8], Raman spectroscopy [9], diffuse reflectance (DR) spectroscopy [10] etc.) under development. Compared with established imaging modalities and/or ultrasound imaging, the optics based oral cancer detection techniques are found with multi advantages. First, optics-based techniques reduce patient exposure to harmful radiation. Second, they offer high resolution (on the order of micrometers or/and sub-micrometers). Third, they are good at visualizing soft tissues [11].
Among the various optics-based techniques developed for cancer detection, DR spectroscopy analyzing the sample optical (including scattering and absorption coefficients [12]), biochemical (water, lipid, protein, glucose [13], hemoglobin and blood oxygen concentration changes [14], etc.) properties and the changes thereof is gaining popularity for tissue characteristion in a number of organs including oral cavity [15–18]. In particular, polarization-sensitive (PS) DR spectroscopy, including that is used in combination with Mie theory calculations to further extract morphological information about epithelial tissue [19,20], that is used to enable depth-selective DR spectroscopy [21], that is used to extract full polarization properties (i.e., depolarization, diattenuation, and retardation et.al.) [22,23] and that is used with machine learning for detection of skin complications caused by diabetes [24], show promise for enhancing the performance of standard DR spectroscopy (without PS detection). Routine application of DR spectroscopy requires only one-time calibration to compensate for lamp intensity fluctuations, wavelength-dependent instrument response, inter-device variations, and fiber bending losses. However, clinical DR spectroscopy utilizes optical fiber that will be unavoidably and frequently twisted by the clinicians, resulting in changing DR system-response and introducing DR spectra bias. To tackle this challenge, Bing Yu et. al. [25–28] innovates a 'self-calibrating' DR probe capable of measuring and calibrating the DR spectroscopy system-response in real-time, resulting in better consistency among DR spectra measured across variant DR spectroscopy systems. In addition, the reported PS-DR probe is inherently capable of generating system-response independent spectroscopic 'depolarization ratio' [20]. Nevertheless, the two raw orthogonal polarized DR spectra are not system independent. To maximize the diagnostic advantages of PS-DR spectroscopy [23], system-response independent PS-DR spectra carrying more comprehensive diagnostic information are highly needed. To fill the need, we report, in this work, on the development of a unique real-time calibrating PS-DR (rcPS-DR) handheld probe. With an embedded calibrating polytetrafluoroethylene (PTFE) optical diffuser, the PS-DR spectra bias arising from instrument response and time-dependent intensity fluctuation is removed through real-time measurement of the PS-DR system-response function. The diagnostic advantages of rcPS-DR probe are confirmed through its enhanced differentiation of the three clinically relevant anatomical locations (i.e., alveolar process, lateral tongue and floor of mouth that account for 80% of all cases of oral squamous cell carcinoma [29]) of the oral cavity.
2.1 Real-time calibrating polarization-sensitive diffuse reflectance spectroscopy (rcPS-DR) system
Figure 1 shows the schematic of the rcPS-DR system developed for tissue measurements. The rcPS-DR system consists of an LED source (Solis-3C, Thorlabs, NJ, USA) for illumination, customized spectrometer array for PS-DR spectra measurements, and a unique rcPS-DR handheld probe. The three spectrometers (SunShine, CNILaser, ChangChun, China) of the customized spectrometer array are of the same specs, resulting in minimum inter-spectrometer variation and resultant interferences on the PS-DR spectra measured. Further, the spectra acquisition of the three spectrometers were synchronized by an external trigger box (EX-TA-Box, CNILaser, ChangChun, China). The rcPS-DR probe features five multimode fibers (FG200LEA, Thorlabs, NJ, USA) for light delivery. Three of the fibers (Exc. Fiber, Det. Fiber ($ \| $), and Det. Fiber (⊥), as shown in Fig. 1) consisted of PS-DR spectra acquisition channel and the other two fibers (Calib. Exc. Fiber, Calib. Det. Fiber, as shown in Fig. 1) calibration channel used for real-time PS-DR spectra calibration. Along the PS-DR spectra acquisition channel, there exists paired and custom cut polarizing films (Polarizer ($ \| $), and Polarizer (⊥), as shown in Fig. 1. #86-178, EdmundOptics, NJ, USA) with their fast axes positioned orthogonal to each other, enabling polarization-sensitive DR excitation and collection. The excitation-collection fiber spacing of both channels were the same and kept as 1 mm, resulting in a PS-DR spectra interrogation diameter of 0.5 mm and a predicted (by Monte Carlo simulations [30]) interrogation depth ∼ 0.4-0.7 mm, as consistent with previous reports [17]. To enable real-time calibrated PS-DR spectra measurements, a 10 mm thick PTFE diffuser (Calib. PTFE, as shown in Fig. 1) was embedded in the calibration channel of the probe. The thickness (∼10 mm) of the diffuser was the same as the DR standard (WS-1 diffuse reflectance standard, Ocean Insight, FL, USA), enabling PS-DR system response approaching to that of the DR standard (as will be shown below). One notes that the polarization films incorporated are custom cut into 1 mm by 1 mm, and the relative positioning of the probe components (i.e., beam delivery fibers, polarization films, and PTFE diffuser, etc.) were ensured through a 3D-printed component (Boston Micro Fabrication, Shenzhen, China). The overall probe tip diameter is 8 mm.
Fig. 1. (a) Schematic of the rcPS-DR system developed for tissue measurements. (b) distal tip of the probe. Light emission diode (LED); spectrometer (Spec); excitation Fiber (Exc. Fiber); detection fiber (Det. Fiber); excitation fiber in the calibration channel (Calib. Exc. Fiber); detection fiber in the calibration channel (Calib. Det. Fiber); polytetrafluoroethylene for calibration (Calib. PTFE).
To acquire the PS-DR spectra, light output from the LED is first coupled into the excitation fibers (Exc. Fiber, Calib. Exc. Fiber, as shown in Fig. 1) of both the acquisition and the calibration channel. On the one hand, the light input to the acquisition channel is further linearly-polarized (by Polarizer ($ \| $), as shown in Fig. 1) and shined onto the sample. Part of the backscattered DR spectra from the sample passed through the same polarizer enabling polarized illumination, whist the remaining backscattered DR spectra passed through the other orthogonally-polarized polarizer. All backscattered PS-DR spectra were sent to the spectrometer array through the detection fibers in the acquisition channel. On the other hand, the light input to the calibration channel was backscattered by the embedded PTFE diffuser and passed to the spectrometer array. One notes that since the PS-DR system response function is measured through the calibration channel of the rcPS-DR probe, in real-time and in synergy with the PS-DR spectra, the PS-DR spectra bias arising from instrument response, time-dependent LED intensity fluctuation and fiber bending can be removed (as will be shown below).
To quantify the polarization property of the sample under investigation, the spectroscopic degree of linear polarization (DLP) was further extracted from the two orthogonal-polarized and real-time calibrated PS-DR spectra as below [31]:
(1)$$DLP = \; \frac{{{I_{||}}(\lambda )- {I_ \bot }(\lambda )}}{{{I_{||}}(\lambda )+ {I_ \bot }(\lambda )}}$$
where ${I_{||}}(\lambda )$ and ${I_ \bot }(\lambda )$ are the back-scattered PS-DR spectra with polarization parallel and perpendicular to the excitation light, respectively, and $\lambda $ represents the light wavelength.
2.2 Statistical analysis
The unpaired two-sided Student's t-test was used to evaluate the rcPS-DR spectra differences among alveolar process, lateral tongue and floor of mouth [32]. A criterion of P value less than 0.05 was used to consider differences as statistically significant. Partial least squares (PLS) - discriminant analysis (DA) was applied on the rcPS-DR spectra to classify the different sites of the oral cavity [33]. Leave-one-out, cross-validation (LOOCV) was further used to assess and optimize the PLS-DA model complexity, while reducing the risk of over-fitting [33]. The adopted cross validation strategy was leave one tissue site out [33]. One notes that one-way analysis of variance (ANOVA) with a Fisher post hoc least significant difference (LSD) test was used to evaluate which wavelength range of the PS-DR spectra contributes most to the PLS-DA analysis before PLS-DA model development [34]. The above multivariate statistical analysis was performed using both in-house written scripts and open-source PLS-DA tool [33] in the Matlab programming environment (Mathworks. Inc., Natick, MA
2.3 Subjects
A total of 14 normal healthy subjects (10 females and 4 males, median age of 26) were recruited for in vivo rcPS-DR spectra measurements from the oral cavity. Informed consent forms were obtained from all participating subjects. Exclusion criteria included smokers, regular alcohol consumers and subjects suffering from systemic or oral mucosal diseases. Before in vivo rcPS-DR spectra measurements, all subjects underwent extensive mouthwash to reduce confounding factors (eg, food debris, microbial coatings). A total of 6 anatomic locations (i.e., left and right sides of alveolar process: left and right sides of lateral tongue; left and right sides of the floor of mouth, as illustrated in https://visualsonline.cancer.gov/details.cfm?imageid=9259) were predefined for rcPS-DR spectra measurements; and a total of 554 in vivo rcPS-DR spectra (alveolar process, n = 226, lateral tongue, n = 150 and floor of mouth, n = 178, as summarized in Table 1) were acquired from different oral tissue sites (alveolar process, n = 84, lateral tongue, n = 70 and floor of mouth, n = 84) of the recruited subjects.
Table 1. The detailed tissue types break down and sample distribution. AP: alveolar process. FM: floor of mouth; LT: lateral tongue.
View Table | View all tables in this article
3.1 Real-time calibrating capability evaluation of the rcPS-DR probe
To evaluate the real-time calibrating capability of the rcPS-DR probe, the system response functions (SRFs) determined by the rcPS-DR embedded diffuser and the DR standard (WS-1 diffuse reflectance standard, Ocean Insight, FL, USA) were measured and compared (Fig. 2(a)). As shown in Fig. 2(a), both SRFs were consistent and close to each other, suggesting that the rcPS-DR probe is capable of measuring the PS-DR SRF accurately. Further, we mimicked the fiber bending that is usually encountered and unavoidable during the clinical applications of the fiber-optic DR spectroscopy system. Using mirror (PF10-03-G01, Thorlabs, NJ, USA) positioned 20 µm away from the rcPS-DR probe as sample, different fiber bending radius were tested (5, 10, 15, 20 mm, Fig. 2(b)) with corresponding SRFs measured, demonstrating the different SRFs that would otherwise be neglected by the generally adopted one-time calibration strategy. Figure 2(c) shows the calibrated and therefore consistent PS-DR spectra in the detection channel, suggesting the real-time calibrating capability of the rcPS-DR probe developed. We also calculated the DLP (degree of linear polarization) when using the mirror (PF10-03-G01, Thorlabs, NJ, USA) as sample (Fig. 2(d)). We found the DLP measured deviates from but close to 1, validating the polarization detection capability of the rcPS-DR probe. One notes that the deviation is likely caused by the imperfection of the linear polarizer films used (Polarizer ($ \| $), Polarizer (⊥), as shown in Fig. 1).
Fig. 2. (a) PS-DR system response function measured by both the DR standard and the rcPS-DR embedded diffuser. (b) rcPS-DR system response function changes under different fiber bending radiuses. (c) The rcPS-DR spectra under the same fiber bending conditions as in (b). (d) The calculated DLP (degree of linear polarization). Note: the results in (c, d) both uses flat reflection mirror (PF10-03-G01, Thorlabs, NJ, USA) as sample.
In a separate experiment, we further investigate how the rcPS-DR probe functions for real-time calibration of the PS-DR SRFs. The intensity variations of the LED source were simulated through gradual reduction of the LED intensity through driving current adjustment using the LED driver (DC20, Thorlabs, NJ, USA). The raw PS-DR spectra in both polarization detection channels were measured (Figs. 3 a-b) in concurrence with the SRF of the calibration channel. While the raw PS-DR spectra were found with intensity variations arising from the LED intensity variations, the calibrated PS-DR spectra were shown (Figs. 3(c-d)) with intensity variation less than ± 5%, confirming the real-time calibrating capability of the rcPS-DR probe developed. The remaining ± 5% variation could be attributed to the variant coupling between the LED source and the rcPS-DR probe when tuning the LED intensity. The DLP measured was calculated to be close to 1 (Fig. 3(e)), revalidating the polarization detection capability of the rcPS-DR probe.
Fig. 3. (a, b) Raw PS-DR spectra reflected by the mirror under different levels of LED illumination of parallel (a, $ \| $), and perpendicular (b, ⊥) polarization detection channels, before calibration. (c, d) PS-DR spectra after calibration (i.e., ratio of the mirror-reflected intensity and the real-time calibrated PS-DR system response function). (e) DLP. The different mirror-reflected intensities in (a-b) are generated through adjustment of the LED driver current (10A, 6A, 3A, and 1.5A).
3.2 Polarization functionality validation of the rcPS-DR probe
Whilst the results in Figs. 2–3 confirm the real-time calibrating performance and validate the polarization detection capability of the rcPS-DR probe, further experiment was conducted to determine the polarization functionality of the rcPS-DR probe. External polarized light was launched into the rcPS-DR probe. As shown in Fig. 4(a), light delivered from an external multimode fiber was firstly collimated, linear polarized, and then was incident onto the rcPS-DR probe after passing through a half waveplate (HWP, AHWP20-VIS, LBTEK, Shenzhen, China). The HWP was rotated from 0 to 360 degrees, changing the polarization by 360 degrees. According to Malus Law, the intensities of HWP-tuned polarized light (${I_0}(\lambda )$) that passes through the acquisition channel of the rcPS-DR probe (${I_{||}}(\lambda )$ and ${I_ \bot }(\lambda )$) vary as the square of the cosine of the angle between the HWP fast axis and those of both polarizer films (ϕ and ϕ + π/2) within rcPS-DR probe, i.e., ${I_{||}}(\lambda )\; $=${I_0}(\lambda )\textrm{cos}{(\phi )^2}$, and ${I_ \bot }(\lambda )\; $=${I_0}(\lambda )\textrm{cos}{({\phi + \mathrm{\pi }/2} )^2}$. Besides, the DLP changes were calculated to be: $\textrm{cos}{(\phi )^2} - \; \textrm{sin}{(\phi )^2}$. The experimentally collected PS-DR spectra in both polarization acquisition channels (Figs. 4 b-d) and the relevant DLP (Fig. 4(e)) versus the HWP rotation angles were also shown, which were consistent with the predicted intensities by Malus Law. The results in Fig. 4 confirm the polarization detection capability of the rcPS-DR probe developed.
Fig. 4. (a) Schematic setup to determine the polarization detection capability of the rcPS-DR probe with external polarized light. (b, c) Raw PS-DR spectra for parallel (b, $ \| $), and perpendicular (c, ⊥) polarization detection channels, under different (0 to 360 degrees) rotation angles of the HWP as in (a). (d) Typical PS-DR spectra changes versus HWP rotation angles, and the resultant DLP. The wavelength was selected at 575 nm of (d, e). Multimode fiber (MMF); Collimator (Col); Linear polarizer (LP); Half waveplate (HWP).
3.3 Clinical performance evaluation of the rcPS-DR probe
In light of the above-confirmed real-time calibration (Figs. 2–3) and polarization detection (Fig. 4) capabilities of the rcPS-DR probe, we further sought to investigate the potential benefits of the rcPS-DR probe for real clinical applications. Figure 5(a) shows the real-time calibrated PS-DR spectra (mean ± SE, standard error) of alveolar process, lateral tongue and floor of mouth. The measured PS-DR spectra were consistent with previous studies [15–17], showing clearly identified dips around 540 nm and 575 nm that could be attributed to oxygenated hemoglobin absorption. Further look into the PS-DR spectra reveals subtle width and amplitude differences of the dips among the three clinically relevant sites. Quantitatively, the intensity ratios of the two dips were also shown (Fig. 5(b)), demonstrating significant (P<0.05, unpaired 2-sided Student's t-test) differences. In addition, the resulting DLP (Fig. 5(c)) differs among the three sites. We also found that the DLP (Fig. 5(c)) does not show a change corresponding to the dips of oxygenated hemoglobin absorption as in Fig. 5(a). This observation is consistent with previous reports [20], and warrants further investigations. The PS-DR spectra differences observed (Fig. 5) are likely caused by the significant structural differences among the alveolar process, lateral tongue and floor of mouth as revealed by our previous OCT study [35]. For instance, the alveolar processes investigated consists of ∼200 μm thick gingival layer on top of the underlying bone. The floor of mouth is comprised of clearly identified ∼240 μm thick non-keratinized epithelium above the lamina propria rich in collagen fibers, resulting in a significantly higher DLP (Fig. 5(c)) compared with the other two anatomical locations. Unlike the alveolar processes and floor of mouth, the lateral tongue is lacking of layering structure. However, how the structural differences cause the PS-DR spectra differences observed warrant further investigations. We are currently developing co-registered PS-DR spectroscopy and OCT imaging systems (i.e., PS-OCT and OCT angiography) to correlate and explain the PS-DR spectra findings of the current work.
Fig. 5. (a) Real-time calibrated PS-DR spectra (mean ± SE) measured from alveolar process, floor of mouth, and lateral tongues. (b). Box plot of the intensity ratios (mean ± SE) between I540 and I575 owing to oxygenated hemoglobin absorption. ○ Par and ◊ Perp correspond to the parallel- and perpendicular- polarization detection channels of the rcPS-DR probe. * (c) DLP (mean ± SE, standard error) of the alveolar process, floor of mouth, and lateral tongues. * P<0.05.
To elucidate the diagnostically important PS-DR wavelength range, Fig. 6(a) shows a logarithmic plot of the calculated P-values (ANOVA Fisher post hoc LSD test at the 0.05 level.) for each of the PS-DR spectra intensities in the entire wavelength range. We find the PS-DR spectra over the entire wavelength range shows statistically different differences (P < 1e-4), PLS-DA and LOOCV were therefore implemented on the entire PS-DR spectra measured (Fig. 5(a)), allowing quantitative evaluation of the inter-anatomical PS-DR spectra differences of the oral cavity, Fig. 6(b) shows the 2-dimensional ternary plot of the posterior probabilities of each PS-DR prediction using PLSDA-LOSCV. The prediction results are also summarized in Table 2. Figure 6 and Table 2 elucidates that the 3 tissue sites can generally be well separated with varying sensitivities (alveolar process: 80.5%, floor of mouth: 75.8%, lateral tongue: 78%), and specificities (alveolar process: 93.5%, floor of mouth: 87.7%, lateral tongue:86.8%) (Table 2) by rcPS-DR, which are superior than those standard DR without PS detection (sensitivities of 63.2%, 41.5%, 81.3% and specificities of 92.9%, 87.7%, 63.8%). One notes that four lateral variables were used for the PLS-DA model with PS-DR spectra, while the number is three for that with DR spectra. The enhanced separation capability can be explained by the real-time calibration (Figs. 2–3) and polarization-sensitive detection (Fig. 4) capability of rcPS-DR probe developed. Overall, the results of this study indicate the potential of the rcPS-DR probe as a tool for studying oral cavity lesions in real clinical applications, and the PS-DR spectra variations of different oral tissue sites should be taken into account in algorithm development for accurate tissue diagnosis and characterization in the oral cavity by using rcPS-DR probe. We are currently seeking collaborations with clinicians to assess how the rcPS-DR probe developed could aid to differentiate oral malignant lesions from normal tissue and determine oral cavity tumor margins for surgical operations.
Fig. 6. (a) ANOVA of the three tissue categories over the entire PS-DR spectra wavelength. (b) posterior probabilities of 554 statistically different PS-DR spectra measured by the rcPS-DR handheld probe belonging to alveolar process (n = 226), floor of mouth (n = 178), and lateral tongue (n = 150).
Table 2. Confusion matrix detailing the multiclass classification results of PS-DR spectra of different oral tissues using PLS-DA and LOSCV. AP: alveolar process. FM: floor of mouth; LT: lateral tongue.
Some limitations of the current work should be pointed out. First, the DR spectroscopy is in general sensitive to the pressure applied, and stepwise changes could occur in tissue DR spectra even at subtle pressure [36]. When in use, the rcPS-DR probe developed is in gentle contact with the tissue sites measured, minimizing the pressure-introduced PS-DR spectra variation. However, for real clinical use of the rcPS-DR probe, further integration of a pressure sensor at the rcPS-DR probe tip is needed [27], allowing real-time pressure monitoring and assuring consistent pressure applied. Second, the current rcPS-DR probe is lacking depth-resolved DR spectra interrogation capability as required for epithelial precancer detection [37]. Pressure-sensitive depth-resolved rcPS-DR probe through integration of pressure sensor and focusing optics (i.e., ball lens, or GRIN lens) at the current rcPS-DR probe tip is under development within our lab.
In summary, we have developed a unique rcPS-DR probe. Significant different oxygenated hemoglobin contents were observed among different anatomic locations of the oral cavity (i.e., alveolar process, lateral tongue and the floor of mouth). Synergizing the complementary information of the two real-time calibrated orthogonal-polarized PS-DR spectra, the rcPS-DR probe is found to better differentiate alveolar process, lateral tongue, and the floor of mouth. This work demonstrates the potential of using rcPS-DR probe as a clinically useful tool for enhancing real-time in vivo detection and diagnosis of oral disease in the oral cavity.
Beijing Institute of Technology Research Fund Program for Young Scholars.
1. H. Sung, J. Ferlay, R. L. Siegel, M. Laversanne, I. Soerjomataram, A. Jemal, and F. Bray, "Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries," CA A Cancer J Clin. 71(3), 209–249 (2021). [CrossRef]
2. S. Tiziani, V. Lopes, and U. L. Günther, "Early stage diagnosis of oral cancer using 1H NMR-based metabolomics," Neoplasia 11(3), 269–IN10 (2009). [CrossRef]
3. P. Pałasz, Ł. Adamski, M. Górska-Chrząstek, A. Starzyńska, and M. Studniarek, "Contemporary diagnostic imaging of oral squamous cell carcinoma - a review of literature," Pol J Radiol 82, 193–202 (2017). [CrossRef]
4. T. J. W. Klein Nulent, R. Noorlag, E. M. Van Cann, F. A. Pameijer, S. M. Willems, A. Yesuratnam, A. Rosenberg, R. de Bree, and R. J. J. van Es, "Intraoral ultrasonography to measure tumor thickness of oral cancer: A systematic review and meta-analysis," Oral Oncol. 77, 29–36 (2018). [CrossRef]
5. C. K. Lee, T. T. Chi, C. T. Wu, M. T. Tsai, C. P. Chiang, and C. C. Yang, "Diagnosis of oral precancer with optical coherence tomography," Biomed. Opt. Express 3(7), 1632–1646 (2012). [CrossRef]
6. W. Qin, W. Qi, T. Jin, H. Guo, and L. Xi, "In vivo oral imaging with integrated portable photoacoustic microscopy and optical coherence tomography," Appl. Phys. Lett. 111(26), 263704 (2017). [CrossRef]
7. H. Martin, V. L. James, W. Xu, Y. C. Amy, and F. Baowei, "Optical biopsy of head and neck cancer using hyperspectral imaging and convolutional neural networks," J. Biomed. Opt. 24, 1–9 (2019). [CrossRef]
8. C. Y. Wang, H. K. Chiang, C. T. Chen, C. P. Chiang, Y. S. Kuo, and S. N. Chow, "Diagnosis of oral cancer by light-induced autofluorescence spectroscopy using double excitation wavelengths," Oral Oncol. 35(2), 144–150 (1999). [CrossRef]
9. E. M. Barroso, R. W. H. Smits, T. C. Bakker Schut, I. ten Hove, J. A. Hardillo, E. B. Wolvius, R. J. Baatenburg de Jong, S. Koljenović, and G. J. Puppels, "Discrimination between oral cancer and healthy tissue based on water content determined by Raman spectroscopy," Anal. Chem. 87(4), 2419–2426 (2015). [CrossRef]
10. R. S. Brouwer de Koning, E. J. M. Baltussen, M. B. Karakullukcu, B. Dashtbozorg, L. A. Smit, R. Dirven, B. H. W. Hendriks, H. Sterenborg, and T. J. M. Ruers, "Toward complete oral cavity cancer resection using a handheld diffuse reflectance spectroscopy probe," J. Biomed. Opt. 23, 1–8 (2018). [CrossRef]
11. J. Wang, Y. Xu, and S.A. Boppart, "Review of optical coherence tomography in oncology," J. Biomed. Opt. 22, 1–23 (2017). [CrossRef]
12. A. J. Moy and J. W. Tunnell, JW: Chapter 17 - Diffuse Reflectance Spectroscopy and Imaging. In: Imaging in Dermatology. edn. MR Hamblin, P Avci, and GK. Gupta, eds. (Academic Press, 2016), pp. 203–215.
13. S. F. Malin, T. L. Ruchti, T. B. Blank, S. N. Thennadil, and S. L. Monfre, "Noninvasive prediction of glucose by near-infrared diffuse reflectance spectroscopy," Clin. Chem. 45(9), 1651–1658 (1999). [CrossRef]
14. A. I. Mundo, G. J. Greening, M. J. Fahr, L. N. Hale, E. A. Bullard EA, N. Rajaram, and T. J. Muldoon, "Diffuse reflectance spectroscopy to monitor murine colorectal tumor progression and therapeutic response," J. Biomed. Opt. 25, 1–16 (2020). [CrossRef]
15. D. Fabila, J. M. de la Rosa, E. Stolik, S. Moreno, K. Suárez-Álvarez, G. López-Navarrete, C. Guzmán, J. Aguirre-García, C. Acevedo-García, and D. Kershenobich, "In vivo assessment of liver fibrosis using diffuse reflectance and fluorescence spectroscopy: a proof of concept," Photodiagnosis and Photodynamic Therapy 9(4), 376–382 (2012). [CrossRef]
16. J. L. Jayanthi, G. U. Nisha, S. Manju, E. K. Philip, P. Jeemon, K. V. Baiju, V. Y. Beena, and N. Subhash, "Diffuse reflectance spectroscopy: diagnostic accuracy of a non-invasive screening technique for early detection of malignant changes in the oral cavity," BMJ Open1(1), e000071 (2011). [CrossRef]
17. S. Brouwer de Koning, E. J. Baltussen, M. B. Karakullukcu, B. Dashtbozorg, L. Smit, R. Dirven, B. H. Hendriks, H. J. C. Sterenborg, and T. J. Ruers, "Toward complete oral cavity cancer resection using a handheld diffuse reflectance spectroscopy probe," J. Biomed. Opt. 12, 121611 (2018).
18. K. Lin, W. Zheng, and Z. Huang, "Integrated autofluorescence endoscopic imaging and point-wise spectroscopy for real-time in vivo tissue measurements," J. Biomed. Opt. 15(4), 040507 (2010). [CrossRef]
19. K. Sokolov, R. Drezek, K. Gossage, and R. Richards-Kortum, "Reflectance spectroscopy with polarized light: is it sensitive to cellular and nuclear morphology," Opt. Express 5(13), 302–317 (1999). [CrossRef]
20. A. Myakov, L. Nieman, L. Wicky, U. Utzinger, R. Richards-Kortum, and K. Sokolov, "Fiber optic probe for polarized reflectance spectroscopy in vivo: design and performance," J. Biomed. Opt. 7(3), 388–397 (2002). [CrossRef]
21. M. Jimenez, S. Lam, C. Poh, and K. Sokolov, "Depth sensitive oblique polarized reflectance spectroscopy of oral epithelial tissue," Proc. SPIE 9155, 20 (2014). [CrossRef]
22. R. S. Gurjar, V. Backman, L. T. Perelman, I. Georgakoudi, K. Badizadegan, I. Itzkan, R. R. Dasari, and M. S. Feld, "Imaging human epithelial properties with polarized light-scattering spectroscopy," Nat. Med. 7(11), 1245–1248 (2001). [CrossRef]
23. J. Wang, W. Zheng, K. Lin, and Z. Huang, "Integrated Mueller-matrix near-infrared imaging and point-wise spectroscopy improves colonic cancer detection," Biomed. Opt Express 7(4), 1116–1126 (2016). [CrossRef]
24. V. Dremin, Z. Marcinkevics, E. Zherebtsov, A. Popov, A. Grabovskis, H. Kronberga, K. Geldnere, A. Doronin, I. Meglinski, and A. Bykov A, "Skin complications of diabetes mellitus revealed by polarized hyperspectral imaging and machine learning," IEEE Trans. Med. Imaging 40(4), 1207–1216 (2021). [CrossRef]
25. B. Yu B, H. Fu H, T. Bydlon, J. E. Bender, and N. Ramanujam, "Diffuse reflectance spectroscopy with a self-calibrating fiber optic probe," Opt. Lett. 33(16), 1783–1785 (2008). [CrossRef]
26. B. Yu, H. L. Fu, and N. Ramanujam, "Instrument independent diffuse reflectance spectroscopy," J. Biomed. Opt. 16(1), 011010 (2011). [CrossRef]
27. B. Yu, A. Shah, V. K. Nagarajan, and D. G. Ferris, "Diffuse reflectance spectroscopy of epithelial tissue with a smart fiber-optic probe," Biomed. Opt. Express 5(3), 675–689 (2014). [CrossRef]
28. V. K. Nagarajan, J. M. Ward, and B. Yu, "Association of liver tissue optical properties and thermal damage," Lasers Surg. Med. 52(8), 779–787 (2020). [CrossRef]
29. R. W. H. Smits, S. Koljenović, J. A. Hardillo, I. ten Hove, C. A. Meeuwis, A. Sewnaik, E. A. C. Dronkers, T. C. Bakker Schut, T. P. M. Langeveld, and J. Molenaar J, "Resection margins in oral cancer surgery: room for improvement," Head Neck 38(S1), E2197–E2203 (2016). [CrossRef]
30. J. Wang, M. S. Bergholt, W. Zheng, and Z. Huang, "Development of a beveled fiber-optic confocal Raman probe for enhancing in vivo epithelial tissue Raman measurements at endoscopy," Opt. Lett. 38(13), 2321–2323 (2013). [CrossRef]
31. V. Tuchin, "Polarized light interaction with tissues," J. Biomed. Opt. 21(7), 071114 (2016). [CrossRef]
32. J. Wang, K. Lin, W. Zheng, K. Y. Ho, M. Teh, K. G. Yeoh, and Z. Huang, "Simultaneous fingerprint and high-wavenumber fiber-optic Raman spectroscopy improves in vivo diagnosis of esophageal squamous cell carcinoma at endoscopy," Sci. Rep. 5(1), 12957 (2015). [CrossRef]
33. Y. V. Zontov, O. Y. Rodionova, S. V. Kucheryavskiy, and A. L. Pomerantsev, "PLS-DA –a MATLAB GUI tool for hard and soft approaches to partial least squares discriminant analysis," Chemometrics and Intelligent Laboratory Systems 203, 104064 (2020). [CrossRef]
34. J. Wang, K. Lin, W. Zheng, K. Y. Ho, M. Teh, K. G. Yeoh, and Z. Huang, "Fiber-optic Raman spectroscopy for in vivo diagnosis of gastric dysplasia," Faraday Discuss. 187, 377–392 (2016). [CrossRef]
35. J. Wang J, W. Zheng W, K. Lin K, and Z. Huang, "Characterizing biochemical and morphological variations of clinically relevant anatomical locations of oral tissue in vivo with hybrid Raman spectroscopy and optical coherence tomography technique," J. Biophotonics. 11(3), e201700113 (2018). [CrossRef]
36. A. Popov, A. Bykov, and I. Meglinski, "Influence of probe pressure on diffuse reflectance spectra of human skin measured in vivo," J. Biomed. Opt. 11, 110504 (2017). [CrossRef]
37. R. A. Schwarz, D. Arifler, S. K. Chang, I. Pavlova, I. A. Hussain, V. Mack, B. Knight, R. Richards-Kortum, and A. M. Gillenwater, "Ball lens coupled fiber-optic probe for depth-resolved spectroscopy of epithelial tissue," Opt. Lett. 30(10), 1159–1161 (2005). [CrossRef]
H. Sung, J. Ferlay, R. L. Siegel, M. Laversanne, I. Soerjomataram, A. Jemal, and F. Bray, "Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries," CA A Cancer J Clin. 71(3), 209–249 (2021).
S. Tiziani, V. Lopes, and U. L. Günther, "Early stage diagnosis of oral cancer using 1H NMR-based metabolomics," Neoplasia 11(3), 269–IN10 (2009).
P. Pałasz, Ł. Adamski, M. Górska-Chrząstek, A. Starzyńska, and M. Studniarek, "Contemporary diagnostic imaging of oral squamous cell carcinoma - a review of literature," Pol J Radiol 82, 193–202 (2017).
T. J. W. Klein Nulent, R. Noorlag, E. M. Van Cann, F. A. Pameijer, S. M. Willems, A. Yesuratnam, A. Rosenberg, R. de Bree, and R. J. J. van Es, "Intraoral ultrasonography to measure tumor thickness of oral cancer: A systematic review and meta-analysis," Oral Oncol. 77, 29–36 (2018).
C. K. Lee, T. T. Chi, C. T. Wu, M. T. Tsai, C. P. Chiang, and C. C. Yang, "Diagnosis of oral precancer with optical coherence tomography," Biomed. Opt. Express 3(7), 1632–1646 (2012).
W. Qin, W. Qi, T. Jin, H. Guo, and L. Xi, "In vivo oral imaging with integrated portable photoacoustic microscopy and optical coherence tomography," Appl. Phys. Lett. 111(26), 263704 (2017).
H. Martin, V. L. James, W. Xu, Y. C. Amy, and F. Baowei, "Optical biopsy of head and neck cancer using hyperspectral imaging and convolutional neural networks," J. Biomed. Opt. 24, 1–9 (2019).
C. Y. Wang, H. K. Chiang, C. T. Chen, C. P. Chiang, Y. S. Kuo, and S. N. Chow, "Diagnosis of oral cancer by light-induced autofluorescence spectroscopy using double excitation wavelengths," Oral Oncol. 35(2), 144–150 (1999).
E. M. Barroso, R. W. H. Smits, T. C. Bakker Schut, I. ten Hove, J. A. Hardillo, E. B. Wolvius, R. J. Baatenburg de Jong, S. Koljenović, and G. J. Puppels, "Discrimination between oral cancer and healthy tissue based on water content determined by Raman spectroscopy," Anal. Chem. 87(4), 2419–2426 (2015).
R. S. Brouwer de Koning, E. J. M. Baltussen, M. B. Karakullukcu, B. Dashtbozorg, L. A. Smit, R. Dirven, B. H. W. Hendriks, H. Sterenborg, and T. J. M. Ruers, "Toward complete oral cavity cancer resection using a handheld diffuse reflectance spectroscopy probe," J. Biomed. Opt. 23, 1–8 (2018).
J. Wang, Y. Xu, and S.A. Boppart, "Review of optical coherence tomography in oncology," J. Biomed. Opt. 22, 1–23 (2017).
A. J. Moy and J. W. Tunnell, JW: Chapter 17 - Diffuse Reflectance Spectroscopy and Imaging. In: Imaging in Dermatology. edn. MR Hamblin, P Avci, and GK. Gupta, eds. (Academic Press, 2016), pp. 203–215.
S. F. Malin, T. L. Ruchti, T. B. Blank, S. N. Thennadil, and S. L. Monfre, "Noninvasive prediction of glucose by near-infrared diffuse reflectance spectroscopy," Clin. Chem. 45(9), 1651–1658 (1999).
A. I. Mundo, G. J. Greening, M. J. Fahr, L. N. Hale, E. A. Bullard EA, N. Rajaram, and T. J. Muldoon, "Diffuse reflectance spectroscopy to monitor murine colorectal tumor progression and therapeutic response," J. Biomed. Opt. 25, 1–16 (2020).
D. Fabila, J. M. de la Rosa, E. Stolik, S. Moreno, K. Suárez-Álvarez, G. López-Navarrete, C. Guzmán, J. Aguirre-García, C. Acevedo-García, and D. Kershenobich, "In vivo assessment of liver fibrosis using diffuse reflectance and fluorescence spectroscopy: a proof of concept," Photodiagnosis and Photodynamic Therapy 9(4), 376–382 (2012).
J. L. Jayanthi, G. U. Nisha, S. Manju, E. K. Philip, P. Jeemon, K. V. Baiju, V. Y. Beena, and N. Subhash, "Diffuse reflectance spectroscopy: diagnostic accuracy of a non-invasive screening technique for early detection of malignant changes in the oral cavity," BMJ Open1(1), e000071 (2011).
S. Brouwer de Koning, E. J. Baltussen, M. B. Karakullukcu, B. Dashtbozorg, L. Smit, R. Dirven, B. H. Hendriks, H. J. C. Sterenborg, and T. J. Ruers, "Toward complete oral cavity cancer resection using a handheld diffuse reflectance spectroscopy probe," J. Biomed. Opt. 12, 121611 (2018).
K. Lin, W. Zheng, and Z. Huang, "Integrated autofluorescence endoscopic imaging and point-wise spectroscopy for real-time in vivo tissue measurements," J. Biomed. Opt. 15(4), 040507 (2010).
K. Sokolov, R. Drezek, K. Gossage, and R. Richards-Kortum, "Reflectance spectroscopy with polarized light: is it sensitive to cellular and nuclear morphology," Opt. Express 5(13), 302–317 (1999).
A. Myakov, L. Nieman, L. Wicky, U. Utzinger, R. Richards-Kortum, and K. Sokolov, "Fiber optic probe for polarized reflectance spectroscopy in vivo: design and performance," J. Biomed. Opt. 7(3), 388–397 (2002).
M. Jimenez, S. Lam, C. Poh, and K. Sokolov, "Depth sensitive oblique polarized reflectance spectroscopy of oral epithelial tissue," Proc. SPIE 9155, 20 (2014).
R. S. Gurjar, V. Backman, L. T. Perelman, I. Georgakoudi, K. Badizadegan, I. Itzkan, R. R. Dasari, and M. S. Feld, "Imaging human epithelial properties with polarized light-scattering spectroscopy," Nat. Med. 7(11), 1245–1248 (2001).
J. Wang, W. Zheng, K. Lin, and Z. Huang, "Integrated Mueller-matrix near-infrared imaging and point-wise spectroscopy improves colonic cancer detection," Biomed. Opt Express 7(4), 1116–1126 (2016).
V. Dremin, Z. Marcinkevics, E. Zherebtsov, A. Popov, A. Grabovskis, H. Kronberga, K. Geldnere, A. Doronin, I. Meglinski, and A. Bykov A, "Skin complications of diabetes mellitus revealed by polarized hyperspectral imaging and machine learning," IEEE Trans. Med. Imaging 40(4), 1207–1216 (2021).
B. Yu B, H. Fu H, T. Bydlon, J. E. Bender, and N. Ramanujam, "Diffuse reflectance spectroscopy with a self-calibrating fiber optic probe," Opt. Lett. 33(16), 1783–1785 (2008).
B. Yu, H. L. Fu, and N. Ramanujam, "Instrument independent diffuse reflectance spectroscopy," J. Biomed. Opt. 16(1), 011010 (2011).
B. Yu, A. Shah, V. K. Nagarajan, and D. G. Ferris, "Diffuse reflectance spectroscopy of epithelial tissue with a smart fiber-optic probe," Biomed. Opt. Express 5(3), 675–689 (2014).
V. K. Nagarajan, J. M. Ward, and B. Yu, "Association of liver tissue optical properties and thermal damage," Lasers Surg. Med. 52(8), 779–787 (2020).
R. W. H. Smits, S. Koljenović, J. A. Hardillo, I. ten Hove, C. A. Meeuwis, A. Sewnaik, E. A. C. Dronkers, T. C. Bakker Schut, T. P. M. Langeveld, and J. Molenaar J, "Resection margins in oral cancer surgery: room for improvement," Head Neck 38(S1), E2197–E2203 (2016).
J. Wang, M. S. Bergholt, W. Zheng, and Z. Huang, "Development of a beveled fiber-optic confocal Raman probe for enhancing in vivo epithelial tissue Raman measurements at endoscopy," Opt. Lett. 38(13), 2321–2323 (2013).
V. Tuchin, "Polarized light interaction with tissues," J. Biomed. Opt. 21(7), 071114 (2016).
J. Wang, K. Lin, W. Zheng, K. Y. Ho, M. Teh, K. G. Yeoh, and Z. Huang, "Simultaneous fingerprint and high-wavenumber fiber-optic Raman spectroscopy improves in vivo diagnosis of esophageal squamous cell carcinoma at endoscopy," Sci. Rep. 5(1), 12957 (2015).
Y. V. Zontov, O. Y. Rodionova, S. V. Kucheryavskiy, and A. L. Pomerantsev, "PLS-DA –a MATLAB GUI tool for hard and soft approaches to partial least squares discriminant analysis," Chemometrics and Intelligent Laboratory Systems 203, 104064 (2020).
J. Wang, K. Lin, W. Zheng, K. Y. Ho, M. Teh, K. G. Yeoh, and Z. Huang, "Fiber-optic Raman spectroscopy for in vivo diagnosis of gastric dysplasia," Faraday Discuss. 187, 377–392 (2016).
J. Wang J, W. Zheng W, K. Lin K, and Z. Huang, "Characterizing biochemical and morphological variations of clinically relevant anatomical locations of oral tissue in vivo with hybrid Raman spectroscopy and optical coherence tomography technique," J. Biophotonics. 11(3), e201700113 (2018).
A. Popov, A. Bykov, and I. Meglinski, "Influence of probe pressure on diffuse reflectance spectra of human skin measured in vivo," J. Biomed. Opt. 11, 110504 (2017).
R. A. Schwarz, D. Arifler, S. K. Chang, I. Pavlova, I. A. Hussain, V. Mack, B. Knight, R. Richards-Kortum, and A. M. Gillenwater, "Ball lens coupled fiber-optic probe for depth-resolved spectroscopy of epithelial tissue," Opt. Lett. 30(10), 1159–1161 (2005).
Acevedo-García, C.
Adamski, L.
Aguirre-García, J.
Amy, Y. C.
Arifler, D.
Backman, V.
Badizadegan, K.
Baiju, K. V.
Bakker Schut, T. C.
Baltussen, E. J.
Baltussen, E. J. M.
Baowei, F.
Barroso, E. M.
Beena, V. Y.
Bender, J. E.
Bergholt, M. S.
Blank, T. B.
Boppart, S.A.
Bray, F.
Bullard EA, E. A.
Bydlon, T.
Bykov, A.
Bykov A, A.
Chang, S. K.
Chen, C. T.
Chi, T. T.
Chiang, C. P.
Chiang, H. K.
Chow, S. N.
Dasari, R. R.
Dashtbozorg, B.
de Bree, R.
de Jong, R. J. Baatenburg
de Koning, R. S. Brouwer
de Koning, S. Brouwer
de la Rosa, J. M.
Dirven, R.
Doronin, A.
Dremin, V.
Drezek, R.
Dronkers, E. A. C.
Fabila, D.
Fahr, M. J.
Feld, M. S.
Ferlay, J.
Ferris, D. G.
Fu, H. L.
Fu H, H.
Geldnere, K.
Georgakoudi, I.
Gillenwater, A. M.
Górska-Chrzastek, M.
Gossage, K.
Grabovskis, A.
Greening, G. J.
Günther, U. L.
Guo, H.
Gurjar, R. S.
Guzmán, C.
Hale, L. N.
Hardillo, J. A.
Hendriks, B. H.
Hendriks, B. H. W.
Ho, K. Y.
Huang, Z.
Hussain, I. A.
Itzkan, I.
James, V. L.
Jayanthi, J. L.
Jeemon, P.
Jemal, A.
Jimenez, M.
Jin, T.
Karakullukcu, M. B.
Kershenobich, D.
Klein Nulent, T. J. W.
Knight, B.
Koljenovic, S.
Kronberga, H.
Kucheryavskiy, S. V.
Kuo, Y. S.
Lam, S.
Langeveld, T. P. M.
Laversanne, M.
Lee, C. K.
Lin, K.
Lin K, K.
Lopes, V.
López-Navarrete, G.
Mack, V.
Malin, S. F.
Manju, S.
Marcinkevics, Z.
Martin, H.
Meeuwis, C. A.
Meglinski, I.
Molenaar J, J.
Monfre, S. L.
Moreno, S.
Moy, A. J.
Muldoon, T. J.
Mundo, A. I.
Myakov, A.
Nagarajan, V. K.
Nieman, L.
Nisha, G. U.
Noorlag, R.
Palasz, P.
Pameijer, F. A.
Pavlova, I.
Perelman, L. T.
Philip, E. K.
Poh, C.
Pomerantsev, A. L.
Puppels, G. J.
Qi, W.
Qin, W.
Rajaram, N.
Ramanujam, N.
Richards-Kortum, R.
Rodionova, O. Y.
Rosenberg, A.
Ruchti, T. L.
Ruers, T. J.
Ruers, T. J. M.
Schwarz, R. A.
Sewnaik, A.
Shah, A.
Siegel, R. L.
Smit, L.
Smit, L. A.
Smits, R. W. H.
Soerjomataram, I.
Sokolov, K.
Starzynska, A.
Sterenborg, H.
Sterenborg, H. J. C.
Stolik, E.
Studniarek, M.
Suárez-Álvarez, K.
Subhash, N.
Sung, H.
Teh, M.
ten Hove, I.
Thennadil, S. N.
Tiziani, S.
Tsai, M. T.
Tuchin, V.
Tunnell, J. W.
Utzinger, U.
Van Cann, E. M.
van Es, R. J. J.
Wang, C. Y.
Wang J, J.
Ward, J. M.
Wicky, L.
Willems, S. M.
Wolvius, E. B.
Wu, C. T.
Xi, L.
Xu, W.
Xu, Y.
Yang, C. C.
Yeoh, K. G.
Yesuratnam, A.
Yu, B.
Yu B, B.
Zheng, W.
Zheng W, W.
Zherebtsov, E.
Zontov, Y. V.
Anal. Chem. (1)
Biomed. Opt Express (1)
Biomed. Opt. Express (2)
CA A Cancer J Clin. (1)
Chemometrics and Intelligent Laboratory Systems (1)
Clin. Chem. (1)
Faraday Discuss. (1)
Head Neck (1)
IEEE Trans. Med. Imaging (1)
J. Biomed. Opt. (10)
J. Biophotonics. (1)
Lasers Surg. Med. (1)
Nat. Med. (1)
Neoplasia (1)
Oral Oncol. (2)
Photodiagnosis and Photodynamic Therapy (1)
Pol J Radiol (1)
Proc. SPIE (1)
Sci. Rep. (1)
(1) D L P = I | | ( λ ) − I ⊥ ( λ ) I | | ( λ ) + I ⊥ ( λ )
Ruikang (Ricky) Wang, Editor-in-Chief
The detailed tissue types break down and sample distribution. AP: alveolar process. FM: floor of mouth; LT: lateral tongue.
AP 112 114 226
LT 78 72 150
FM 88 90 178
Confusion matrix detailing the multiclass classification results of PS-DR spectra of different oral tissues using PLS-DA and LOSCV. AP: alveolar process. FM: floor of mouth; LT: lateral tongue.
TRUE AP 182 24 20
FM 10 135 33
Sensitivity (%) 80.5 75.8 78
Specificity (%) 93.5 87.7 86.8
Accuracy (%) 88.2 83.9 84.4
|
CommonCrawl
|
Journal of Intelligent & Fuzzy Systems - Volume 24, issue 4
The purpose of the Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology is to foster advancements of knowledge and help disseminate results concerning recent applications and case studies in the areas of fuzzy logic, intelligent systems, and web-based applications among working professionals and professionals in education and research, covering a broad cross-section of technical disciplines.
The journal will publish original articles on current and potential applications, case studies, and education in intelligent systems, fuzzy systems, and web-based systems for engineering and other technical fields in science and technology. The journal focuses on the disciplines of computer science, electrical engineering, manufacturing engineering, industrial engineering, chemical engineering, mechanical engineering, civil engineering, engineering management, bioengineering, and biomedical engineering. The scope of the journal also includes developing technologies in mathematics, operations research, technology management, the hard and soft sciences, and technical, social and environmental issues.
Recommend this journal Editorial board Submissions Subscribe Sign up for newsletters
Fuzzy logic control implementation considerations and complexity analyses
Authors: Wang, Dali | Bai, Ying
Abstract: In this paper, a number of implementation and algorithmic options for fuzzy logic control applications are presented. The emphases are on the analyses of the computational load and memory requirements of all the processing stages of fuzzy logic control algorithms. Comparisons are made among commonly used techniques and recommendations are provided. The results could be used to guide the design of fuzzy logic controllers for an embedded or hardware implementation.
DOI: 10.3233/IFS-2012-0587
Citation: Journal of Intelligent & Fuzzy Systems, vol. 24, no. 4, pp. 677-683, 2013
L-fuzzy topogenous orders and L-fuzzy topologies
Authors: El-Dardery, M. | Ramadan, A.A. | Kim, Y.C.
Abstract: In this paper, we introduce the notions of L-fuzzy topoenous orders and investigate some of properties. We investigate the relationships among L-fuzzy topoenous orders, L-fuzzy topologies and L-fuzzy interior operators.
Keywords: Quantales, L-fuzzy topologies, L-fuzzy topoenous orders, L-fuzzy interior operators
A fast learnt fuzzy neural network for huge scale discrete data function approximation and prediction
Authors: Khayat, Omid | Razjouyan, Javad | Rahatabad, Fereidoun Nowshiravan | Nejad, Hadi Chahkandi
Abstract: In real world dataset, there are often large amount of discrete data that the concern is the interpolation and/or extrapolation by an approximation tool. Therefore, a training process will be actually used for definition and construction of the approximator parameters. Huge amount of data may lead to high computation time and a time consuming training process. To this concern a fast learnt fuzzy neural network as a robust function approximator and predictor is proposed in this paper. The learning procedure and the structure of the network is described in detail. Simplicity and fast learning process are the main features of …the proposing Self-Organizing Fuzzy Neural Network (SOFNN), which automates structure and parameter identification simultaneously based on input-target samples. First, without need of clustering, the initial structure of the network with the specified number of rules is established, and then a training process based on the error of other training samples is applied to obtain a more precision model. After the network structure is identified, an optimization process based on the known error criteria is performed to optimize the obtained parameter set of the premise parts and the consequent parts. At the end, comprehensive comparisons are made with other approaches to demonstrate that the proposed algorithm is superior in term of compact structure, convergence speed, memory usage and learning efficiency. Show more
Keywords: Self-organizing fuzzy neural network, hybrid learning algorithm, function approximation, prediction, chaotic time series
Power electronics converter control based on rule based algorithm
Authors: Deperlioglu, Omer
Abstract: The exact modeling of power converter circuit that includes several semiconductor switching devices is not easy due to the non-linear and time varying characteristics of the switching devices. Thus, controlling the system effectively without exact mathematical model is very important. The rule based controller (RBC) can easily be used in the control of any systems when an exact mathematical model of the system cannot be obtained. In this paper, a RBC for DC-DC converter is proposed for output voltage control of DC-DC converter. As compared to conventional fuzzy logic control (FLC), it provides improved performances in terms of overshoot limitation …and sensitivity to load and line voltage variations. Simulation and experimental results of buck converter confirm the validity of proposed control technique. Show more
Keywords: Electronic switching systems, DC-DC power conversion, fuzzy systems, rule based systems
Ant Colony System algorithm solving a Thermal Generator Maintenance Scheduling Problem
Authors: Vlachos, Aristidis
Abstract: The maintenance scheduling problem of thermal generators is a large-scale combinatorial optimization with constraints. In this paper an Ant Colony System (ACS) algorithm, one of the Ant Colony Optimization (ACO) algorithms, is proposed for the maintenance scheduling problem. This ant colony optimization method allows the "agents" of an ant colony to deposit a small amount of pheromone trail to every path that has been explored, thus passing on to the other agents the information concerning the best solution. With the iterations we construct the final solution. This method is called "positive feedback". The basic optimization routine is reinforced with the …introduction of elitist ants who make the best solution stronger. The algorithm is applied to a real-scale system, and further experimenting leads to results that are commented. Show more
Keywords: Thermal Generator Maintenance Scheduling Problem, Ant Colony Optimization, Ant Colony System
λ-ideal convergence in intuitionistic fuzzy 2-normed linear space
Authors: Esi, Ayhan | Hazarika, Bipan
Abstract: An ideal I is a family of subsets of positive integers $\mathbb{N}$ which is closed under taking finite unions and subsets of its elements. In [8], Kostyrko et al., introduced the concept of ideal convergence as a sequence (xk ) of real numbers is said to be I-convergent to a real number $\ell$, if for each ϵ > 0 the set $\{k\in\mathbb{N}:|x_{k}-\ell|\geq\varepsilon\}$ belongs to I. The aim of this paper is to introduce and study the notion of λ-ideal convergence in intuitionistic fuzzy 2-normed space as a variant of the notion of ideal convergence. Also Iλ -limit points and Iλ …-cluster points have been defined and the relation between them has been establish. Furthermore, Cauchy and Iλ -Cauchy sequences are introduced and studied. Show more
Keywords: Ideal convergence, intuitionistic fuzzy normed space, λ-convergence
Two new time-variant methods for fuzzy time series forecasting
Authors: Kamali, Hamid Reza | Shahnazari-Shahrezaei, Parisa | Kazemipoor, Hamed
Abstract: Nowadays, time series are widely used in forecasting. With the advent of fuzzy sets, a new gate in time series has been opened up as fuzzy time series. Basically, more information of future is being examined in fuzzy time series forecasting. Fuzzy time series methods have been extensively considered in articles and researches, especially in forecasting the historical data of statistics of Alabama University's enrollments. In this paper, two different methods are presented to accurately forecast fuzzy time series and achieve more information. To verify and validate the performance of proposed methods, four different time series including, time series with …cyclic variations, a combination of linear trend and cyclic variations, exponential trend, and real values of statistics of Alabama University's enrollments are considered, too. At the end of this paper, the performance of proposed methods and existing methods in the literature are compared with each other. Show more
Keywords: Fuzzy sets, fuzzy time series, time-variant, forecasting
Possibility mean and variance based method for multi-attribute decision making with triangular intuitionistic fuzzy numbers
Authors: Wan, Shu-Ping | Li, Deng-Feng
Abstract: Triangular intuitionistic fuzzy numbers (TIFNs) are useful to deal with ill-known quantities in decision making problems. The focus of this paper is on multi-attribute decision making (MADM) problems in which the attribute values are expressed with TIFNs and the information on attribute weights is incomplete, which are solved by developing a new decision method based on possibility mean and variance of TIFNs. The notions of possibility mean and variance for TIFNs are introduced as well as the possibility standard deviation. A new ranking approach for TIFNs is developed according to the ratio of the possibility mean to the possibility standard …deviation. Hereby we construct a bi-objective programming model, which maximizes the ratios of the possibility mean to the possibility standard deviation for membership and non-membership functions on alternative's overall attribute values. Using the lexicographic approach, the bi-objective programming model is transformed into two non-linear programming models, which are further transformed into the linear programming models by using the variable transformation. Thus, we can obtain the maximum ratios of the possibility mean to the possibility standard deviation, s are used to rank the alternatives. A numerical example is examined to demonstrate applicability and implementation process of the proposed method. Show more
Keywords: Multi-attribute decision making, triangular intuitionistic fuzzy number, possibility mean, possibility variance
Mathematical programming methodology for multiattribute decision making using interval-valued intuitionistic fuzzy sets
Authors: Wang, Li-Ling | Li, Deng-Feng | Zhang, Shu-Shen
Abstract: Interval-valued intuitionistic fuzzy (IVIF) sets are a useful tool to deal with fuzziness inherent in decision data and decision making process. The aim of this paper is to develop a methodology for solving multiattribute decision making (MADM) with both ratings of alternatives on attributes and weights being expressed with IVIF sets. In this methodology, a weighted Euclidean distance between IF sets is defined using weights of IF sets. A pair of nonlinear programming models is constructed based on the concept of the relative closeness coefficients and the distance defined. Two simpler auxiliary nonlinear programming models are further derived to calculate …the relative closeness coefficient intervals of alternatives to the IVIF positive ideal solution, which can be used to generate ranking order of alternatives based on the concept of likelihood of interval numbers. The method proposed in this paper is illustrated with a real example. Show more
Keywords: Intuitionistic fuzzy set, multiattribute decision making, mathematical programming, uncertainty, fuzzy system
Risk analysis of combustion system using vague ranking method
Authors: Verma, Manjit | Kumar, Amit | Singh, Pushpinder | Singh, Yaduvir
Abstract: A new approach for vague risk analysis based on the ranking of trapezoidal vague sets is proposed. Firstly, a new method for ranking of vague sets is presented. Then, the proposed method is applied to developed a new method for dealing with vague risk analysis problems. This analysis helps us to find out the probability of failure of each components of combustion system, which could be used for managerial decision making and future system maintenance strategy. The proposed method provides a useful way for handling vague risk analysis problems.
Keywords: Ranking function, vague sets, fuzzy sets, vague risk analysis
Issue 1, 2
Issue ifs00201
For editorial issues, like the status of your submitted paper or proposals, write to [email protected]
For editorial issues, permissions, book requests, submissions and proceedings, contact the Amsterdam office [email protected]
[email protected]
如果您在出版方面需要帮助或有任何建, 件至: [email protected]
|
CommonCrawl
|
EnvStats
Package for Environmental Statistics, Including US EPA Guidance
Search the EnvStats package
ACE.13.TCE.df: Trichloroethylene Concentrations Before and After Remedation
anovaPE: Compute Lack-of-Fit and Pure Error Anova Table for a Linear...
aovN: Compute Sample Size Necessary to Achieve Specified Power for...
aovPower: Compute the Power of a One-Way Fixed-Effects Analysis of...
base: Base b Representation of a Number
Benthic.df: Benthic Data from Monitoring Program in Chesapeake Bay
boxcox: Boxcox Power Transformation
boxcoxCensored: Boxcox Power Transformation for Type I Censored Data
boxcoxCensored.object: S3 Class "boxcoxCensored"
boxcoxLm.object: S3 Class "boxcoxLm"
boxcox.object: S3 Class "boxcox"
boxcoxTransform: Apply a Box-Cox Power Transformation to a Set of Data
calibrate: Fit a Calibration Line or Curve
calibrate.object: S3 Class "calibrate"
CastilloAndHadi1994: Abstract: Castillo and Hadi (1994)
cdfCompare: Plot Two Cumulative Distribution Functions
cdfCompareCensored: Plot Two Cumulative Distribution Functions Based on Censored...
cdfPlot: Plot Cumulative Distribution Function
chenTTest: Chen's Modified One-Sided t-test for Skewed Distributions
Chi: The Chi Distribution
ciBinomHalfWidth: Half-Width of Confidence Interval for Binomial Proportion or...
ciBinomN: Sample Size for Specified Half-Width of Confidence Interval...
ciNormHalfWidth: Half-Width of Confidence Interval for Normal Distribution...
ciNormN: Sample Size for Specified Half-Width of Confidence Interval...
ciNparConfLevel: Compute Confidence Level Associated with a Nonparametric...
ciNparN: Sample Size for Nonparametric Confidence Interval for a...
ciTableMean: Table of Confidence Intervals for Mean or Difference Between...
ciTableProp: Table of Confidence Intervals for Proportion or Difference...
cv: Sample Coefficient of Variation.
detectionLimitCalibrate: Determine Detection Limit
distChoose: Choose Best Fitting Distribution Based on Goodness-of-Fit...
distChooseCensored: Choose Best Fitting Distribution Based on Goodness-of-Fit...
distChooseCensored.object: S3 Class "distChooseCensored"
distChoose.object: S3 Class "distChoose"
Distribution.df: Data Frame Summarizing Available Probability Distributions...
ebeta: Estimate Parameters of a Beta Distribution
ebinom: Estimate Parameter of a Binomial Distribution
ecdfPlot: Empirical Cumulative Distribution Function Plot
ecdfPlotCensored: Empirical Cumulative Distribution Function Plot Based on Type...
eevd: Estimate Parameters of an Extreme Value (Gumbel) Distribution
eexp: Estimate Rate Parameter of an Exponential Distribution
egamma: Estimate Parameters of Gamma Distribution
egammaAltCensored: Estimate Mean and Coefficient of Variation for a Gamma...
egammaCensored: Estimate Shape and Scale Parameters for a Gamma Distribution...
egeom: Estimate Probability Parameter of a Geometric Distribution
egevd: Estimate Parameters of a Generalized Extreme Value...
ehyper: Estimate Parameter of a Hypergeometric Distribution
elnorm: Estimate Parameters of a Lognormal Distribution (Log-Scale)
elnorm3: Estimate Parameters of a Three-Parameter Lognormal...
elnormAlt: Estimate Parameters of a Lognormal Distribution (Original...
In EnvStats: Package for Environmental Statistics, Including US EPA Guidance
egammaAltCensored R Documentation
Estimate Mean and Coefficient of Variation for a Gamma Distribution Based on Type I Censored Data
Estimate the mean and coefficient of variation of a gamma distribution given a sample of data that has been subjected to Type I censoring, and optionally construct a confidence interval for the mean.
egammaAltCensored(x, censored, method = "mle", censoring.side = "left",
ci = FALSE, ci.method = "profile.likelihood", ci.type = "two-sided",
conf.level = 0.95, n.bootstraps = 1000, pivot.statistic = "z",
ci.sample.size = sum(!censored))
numeric vector of observations. Missing (NA), undefined (NaN), and infinite (Inf, -Inf) values are allowed but will be removed.
numeric or logical vector indicating which values of x are censored. This must be the same length as x. If the mode of censored is "logical", TRUE values correspond to elements of x that are censored, and FALSE values correspond to elements of x that are not censored. If the mode of censored is "numeric", it must contain only 1's and 0's; 1 corresponds to TRUE and 0 corresponds to FALSE. Missing (NA) values are allowed but will be removed.
character string specifying the method of estimation. Currently, the only available method is maximum likelihood (method="mle").
censoring.side
character string indicating on which side the censoring occurs. The possible values are "left" (the default) and "right".
logical scalar indicating whether to compute a confidence interval for the mean. The default value is ci=FALSE.
ci.method
character string indicating what method to use to construct the confidence interval for the mean. The possible values are "profile.likelihood" (profile likelihood; the default), "normal.approx" (normal approximation), and "bootstrap" (based on bootstrapping). See the DETAILS section for more information. This argument is ignored if ci=FALSE.
ci.type
character string indicating what kind of confidence interval to compute. The possible values are "two-sided" (the default), "lower", and "upper". This argument is ignored if ci=FALSE.
conf.level
a scalar between 0 and 1 indicating the confidence level of the confidence interval. The default value is conf.level=0.95. This argument is ignored if ci=FALSE.
n.bootstraps
numeric scalar indicating how many bootstraps to use to construct the confidence interval for the mean when ci.type="bootstrap". This argument is ignored if ci=FALSE and/or ci.method does not equal "bootstrap".
pivot.statistic
character string indicating which pivot statistic to use in the construction of the confidence interval for the mean when ci.method="normal.approx" or ci.method="normal.approx.w.cov" (see the DETAILS section). The possible values are pivot.statistic="z" (the default) and pivot.statistic="t". When pivot.statistic="t" you may supply the argument ci.sample size (see below). The argument pivot.statistic is ignored if ci=FALSE.
ci.sample.size
numeric scalar indicating what sample size to assume to construct the confidence interval for the mean if pivot.statistic="t" and
ci.method="normal.approx". The default value is the number of uncensored observations.
If x or censored contain any missing (NA), undefined (NaN) or infinite (Inf, -Inf) values, they will be removed prior to performing the estimation.
Let \underline{x} denote a vector of N observations from a gamma distribution with parameters shape=κ and scale=θ. The relationship between these parameters and the mean μ and coefficient of variation τ of this distribution is given by:
κ = τ^{-2} \;\;\;\;\;\; (1)
θ = μ/κ \;\;\;\;\;\; (2)
μ = κ \; θ \;\;\;\;\;\; (3)
τ = κ^{-1/2} \;\;\;\;\;\; (4)
Assume n (0 < n < N) of these observations are known and c (c=N-n) of these observations are all censored below (left-censored) or all censored above (right-censored) at k fixed censoring levels
T_1, T_2, …, T_k; \; k ≥ 1 \;\;\;\;\;\; (5)
For the case when k ≥ 2, the data are said to be Type I multiply censored. For the case when k=1, set T = T_1. If the data are left-censored and all n known observations are greater than or equal to T, or if the data are right-censored and all n known observations are less than or equal to T, then the data are said to be Type I singly censored (Nelson, 1982, p.7), otherwise they are considered to be Type I multiply censored.
Let c_j denote the number of observations censored below or above censoring level T_j for j = 1, 2, …, k, so that
∑_{i=1}^k c_j = c \;\;\;\;\;\; (6)
Let x_{(1)}, x_{(2)}, …, x_{(N)} denote the "ordered" observations, where now "observation" means either the actual observation (for uncensored observations) or the censoring level (for censored observations). For right-censored data, if a censored observation has the same value as an uncensored one, the uncensored observation should be placed first. For left-censored data, if a censored observation has the same value as an uncensored one, the censored observation should be placed first.
Note that in this case the quantity x_{(i)} does not necessarily represent the i'th "largest" observation from the (unknown) complete sample.
Finally, let Ω (omega) denote the set of n subscripts in the "ordered" sample that correspond to uncensored observations.
Maximum Likelihood Estimation (method="mle")
For Type I left censored data, the likelihood function is given by:
L(μ, τ | \underline{x}) = {N \choose c_1 c_2 … c_k n} ∏_{j=1}^k [F(T_j)]^{c_j} ∏_{i \in Ω} f[x_{(i)}] \;\;\;\;\;\; (7)
where f and F denote the probability density function (pdf) and cumulative distribution function (cdf) of the population (Cohen, 1963; Cohen, 1991, pp.6, 50). That is,
f(t) = \frac{t^{κ-1} e^{-t/θ}}{θ^κ Γ(κ)} \;\;\;\;\;\; (8)
(Johnson et al., 1994, p.343), where κ and θ are defined in terms of μ and τ by Equations (1) and (2) above.
For left singly censored data, Equation (7) simplifies to:
L(μ, τ | \underline{x}) = {N \choose c} [F(T)]^{c} ∏_{i = c+1}^n f[x_{(i)}] \;\;\;\;\;\; (9)
Similarly, for Type I right censored data, the likelihood function is given by:
L(μ, τ | \underline{x}) = {N \choose c_1 c_2 … c_k n} ∏_{j=1}^k [1 - F(T_j)]^{c_j} ∏_{i \in Ω} f[x_{(i)}] \;\;\;\;\;\; (10)
and for right singly censored data this simplifies to:
L(κ, θ | \underline{x}) = {N \choose c} [1 - F(T)]^{c} ∏_{i = 1}^n f[x_{(i)}] \;\;\;\;\;\; (11)
The maximum likelihood estimators are computed by minimizing the negative log-likelihood function.
This section explains how confidence intervals for the mean μ are computed.
Likelihood Profile (ci.method="profile.likelihood")
This method was proposed by Cox (1970, p.88), and Venzon and Moolgavkar (1988) introduced an efficient method of computation. This method is also discussed by Stryhn and Christensen (2003) and Royston (2007). The idea behind this method is to invert the likelihood-ratio test to obtain a confidence interval for the mean μ while treating the coefficient of variation τ as a nuisance parameter. Equation (7) above shows the form of the likelihood function L(μ, τ | \underline{x}) for multiply left-censored data, where μ and τ are defined by Equations (3) and (4), and Equation (10) shows the function for multiply right-censored data.
Following Stryhn and Christensen (2003), denote the maximum likelihood estimates of the mean and coefficient of variation by (μ^*, τ^*). The likelihood ratio test statistic (G^2) of the hypothesis H_0: μ = μ_0 (where μ_0 is a fixed value) equals the drop in 2 log(L) between the "full" model and the reduced model with μ fixed at μ_0, i.e.,
G^2 = 2 \{log[L(μ^*, τ^*)] - log[L(μ_0, τ_0^*)]\} \;\;\;\;\;\; (12)
where τ_0^* is the maximum likelihood estimate of τ for the reduced model (i.e., when μ = μ_0). Under the null hypothesis, the test statistic G^2 follows a chi-squared distribution with 1 degree of freedom.
Alternatively, we may express the test statistic in terms of the profile likelihood function L_1 for the mean μ, which is obtained from the usual likelihood function by maximizing over the parameter τ, i.e.,
L_1(μ) = max_{τ} L(μ, τ) \;\;\;\;\;\; (13)
Then we have
G^2 = 2 \{log[L_1(μ^*)] - log[L_1(μ_0)]\} \;\;\;\;\;\; (14)
A two-sided (1-α)100\% confidence interval for the mean μ consists of all values of μ_0 for which the test is not significant at level alpha:
μ_0: G^2 ≤ χ^2_{1, {1-α}} \;\;\;\;\;\; (15)
where χ^2_{ν, p} denotes the p'th quantile of the chi-squared distribution with ν degrees of freedom. One-sided lower and one-sided upper confidence intervals are computed in a similar fashion, except that the quantity 1-α in Equation (15) is replaced with 1-2α.
Normal Approximation (ci.method="normal.approx")
This method constructs approximate (1-α)100\% confidence intervals for μ based on the assumption that the estimator of μ is approximately normally distributed. That is, a two-sided (1-α)100\% confidence interval for μ is constructed as:
[\hat{μ} - t_{1-α/2, m-1}\hat{σ}_{\hat{μ}}, \; \hat{μ} + t_{1-α/2, m-1}\hat{σ}_{\hat{μ}}] \;\;\;\; (16)
where \hat{μ} denotes the estimate of μ, \hat{σ}_{\hat{μ}} denotes the estimated asymptotic standard deviation of the estimator of μ, m denotes the assumed sample size for the confidence interval, and t_{p,ν} denotes the p'th quantile of Student's t-distribuiton with ν degrees of freedom. One-sided confidence intervals are computed in a similar fashion.
The argument ci.sample.size determines the value of m and by default is equal to the number of uncensored observations. This is simply an ad-hoc method of constructing confidence intervals and is not based on any published theoretical results.
When pivot.statistic="z", the p'th quantile from the standard normal distribution is used in place of the p'th quantile from Student's t-distribution.
The standard deviation of the mle of μ is estimated based on the inverse of the Fisher Information matrix.
Bootstrap and Bias-Corrected Bootstrap Approximation (ci.method="bootstrap")
The bootstrap is a nonparametric method of estimating the distribution (and associated distribution parameters and quantiles) of a sample statistic, regardless of the distribution of the population from which the sample was drawn. The bootstrap was introduced by Efron (1979) and a general reference is Efron and Tibshirani (1993).
In the context of deriving an approximate (1-α)100\% confidence interval for the population mean μ, the bootstrap can be broken down into the following steps:
Create a bootstrap sample by taking a random sample of size N from the observations in \underline{x}, where sampling is done with replacement. Note that because sampling is done with replacement, the same element of \underline{x} can appear more than once in the bootstrap sample. Thus, the bootstrap sample will usually not look exactly like the original sample (e.g., the number of censored observations in the bootstrap sample will often differ from the number of censored observations in the original sample).
Estimate μ based on the bootstrap sample created in Step 1, using the same method that was used to estimate μ using the original observations in \underline{x}. Because the bootstrap sample usually does not match the original sample, the estimate of μ based on the bootstrap sample will usually differ from the original estimate based on \underline{x}.
Repeat Steps 1 and 2 B times, where B is some large number. For the function
egammaAltCensored, the number of bootstraps B is determined by the argument n.bootstraps (see the section ARGUMENTS above). The default value of n.bootstraps is 1000.
Use the B estimated values of μ to compute the empirical cumulative distribution function of this estimator of μ (see ecdfPlot), and then create a confidence interval for μ based on this estimated cdf.
The two-sided percentile interval (Efron and Tibshirani, 1993, p.170) is computed as:
[\hat{G}^{-1}(\frac{α}{2}), \; \hat{G}^{-1}(1-\frac{α}{2})] \;\;\;\;\;\; (17)
where \hat{G}(t) denotes the empirical cdf evaluated at t and thus \hat{G}^{-1}(p) denotes the p'th empirical quantile, that is, the p'th quantile associated with the empirical cdf. Similarly, a one-sided lower confidence interval is computed as:
[\hat{G}^{-1}(α), \; ∞] \;\;\;\;\;\; (18)
and a one-sided upper confidence interval is computed as:
[0, \; \hat{G}^{-1}(1-α)] \;\;\;\;\;\; (19)
The function egammaAltCensored calls the R function quantile to compute the empirical quantiles used in Equations (17)-(19).
The percentile method bootstrap confidence interval is only first-order accurate (Efron and Tibshirani, 1993, pp.187-188), meaning that the probability that the confidence interval will contain the true value of μ can be off by k/√{N}, where kis some constant. Efron and Tibshirani (1993, pp.184-188) proposed a bias-corrected and accelerated interval that is second-order accurate, meaning that the probability that the confidence interval will contain the true value of μ may be off by k/N instead of k/√{N}. The two-sided bias-corrected and accelerated confidence interval is computed as:
[\hat{G}^{-1}(α_1), \; \hat{G}^{-1}(α_2)] \;\;\;\;\;\; (20)
α_1 = Φ[\hat{z}_0 + \frac{\hat{z}_0 + z_{α/2}}{1 - \hat{a}(z_0 + z_{α/2})}] \;\;\;\;\;\; (21)
α_2 = Φ[\hat{z}_0 + \frac{\hat{z}_0 + z_{1-α/2}}{1 - \hat{a}(z_0 + z_{1-α/2})}] \;\;\;\;\;\; (22)
\hat{z}_0 = Φ^{-1}[\hat{G}(\hat{μ})] \;\;\;\;\;\; (23)
\hat{a} = \frac{∑_{i=1}^N (\hat{μ}_{(\cdot)} - \hat{μ}_{(i)})^3}{6[∑_{i=1}^N (\hat{μ}_{(\cdot)} - \hat{μ}_{(i)})^2]^{3/2}} \;\;\;\;\;\; (24)
where the quantity \hat{μ}_{(i)} denotes the estimate of μ using all the values in \underline{x} except the i'th one, and
\hat{μ}{(\cdot)} = \frac{1}{N} ∑_{i=1}^N \hat{μ_{(i)}} \;\;\;\;\;\; (25)
A one-sided lower confidence interval is given by:
[\hat{G}^{-1}(α_1), \; ∞] \;\;\;\;\;\; (26)
and a one-sided upper confidence interval is given by:
[0, \; \hat{G}^{-1}(α_2)] \;\;\;\;\;\; (27)
where α_1 and α_2 are computed as for a two-sided confidence interval, except α/2 is replaced with α in Equations (21) and (22).
The constant \hat{z}_0 incorporates the bias correction, and the constant \hat{a} is the acceleration constant. The term "acceleration" refers to the rate of change of the standard error of the estimate of μ with respect to the true value of μ (Efron and Tibshirani, 1993, p.186). For a normal (Gaussian) distribution, the standard error of the estimate of μ does not depend on the value of μ, hence the acceleration constant is not really necessary.
When ci.method="bootstrap", the function egammaAltCensored computes both the percentile method and bias-corrected and accelerated method bootstrap confidence intervals.
a list of class "estimateCensored" containing the estimated parameters and other information. See estimateCensored.object for details.
A sample of data contains censored observations if some of the observations are reported only as being below or above some censoring level. In environmental data analysis, Type I left-censored data sets are common, with values being reported as "less than the detection limit" (e.g., Helsel, 2012). Data sets with only one censoring level are called singly censored; data sets with multiple censoring levels are called multiply or progressively censored.
Statistical methods for dealing with censored data sets have a long history in the field of survival analysis and life testing. More recently, researchers in the environmental field have proposed alternative methods of computing estimates and confidence intervals in addition to the classical ones such as maximum likelihood estimation. Helsel (2012, Chapter 6) gives an excellent review of past studies of the properties of various estimators for parameters of a normal or lognormal distribution based on censored environmental data.
In practice, it is better to use a confidence interval for the mean or a joint confidence region for the mean and standard deviation (or coefficient of variation), rather than rely on a single point-estimate of the mean. Few studies have been done to evaluate the performance of methods for constructing confidence intervals for the mean or joint confidence regions for the mean and coefficient of variation of a gamma distribution when data are subjected to single or multiple censoring. See, for example, Singh et al. (2006).
Steven P. Millard ([email protected])
Cohen, A.C. (1963). Progressively Censored Samples in Life Testing. Technometrics 5, 327–339
Cohen, A.C. (1991). Truncated and Censored Samples. Marcel Dekker, New York, New York, 312pp.
Cox, D.R. (1970). Analysis of Binary Data. Chapman & Hall, London. 142pp.
Efron, B. (1979). Bootstrap Methods: Another Look at the Jackknife. The Annals of Statistics 7, 1–26.
Efron, B., and R.J. Tibshirani. (1993). An Introduction to the Bootstrap. Chapman and Hall, New York, 436pp.
Forbes, C., M. Evans, N. Hastings, and B. Peacock. (2011). Statistical Distributions, Fourth Edition. John Wiley and Sons, Hoboken, NJ.
Helsel, D.R. (2012). Statistics for Censored Environmental Data Using Minitab and R, Second Edition. John Wiley \& Sons, Hoboken, New Jersey.
Johnson, N.L., S. Kotz, and N. Balakrishnan. (1994). Continuous Univariate Distributions, Volume 1. Second Edition. John Wiley and Sons, New York, Chapter 17.
Millard, S.P., P. Dixon, and N.K. Neerchal. (2014; in preparation). Environmental Statistics with R. CRC Press, Boca Raton, Florida.
Nelson, W. (1982). Applied Life Data Analysis. John Wiley and Sons, New York, 634pp.
Royston, P. (2007). Profile Likelihood for Estimation and Confdence Intervals. The Stata Journal 7(3), pp. 376–387.
Singh, A., R. Maichle, and S. Lee. (2006). On the Computation of a 95% Upper Confidence Limit of the Unknown Population Mean Based Upon Data Sets with Below Detection Limit Observations. EPA/600/R-06/022, March 2006. Office of Research and Development, U.S. Environmental Protection Agency, Washington, D.C.
Stryhn, H., and J. Christensen. (2003). Confidence Intervals by the Profile Likelihood Method, with Applications in Veterinary Epidemiology. Contributed paper at ISVEE X (November 2003, Chile). https://gilvanguedes.com/wp-content/uploads/2019/05/Profile-Likelihood-CI.pdf.
Venzon, D.J., and S.H. Moolgavkar. (1988). A Method for Computing Profile-Likelihood-Based Confidence Intervals. Journal of the Royal Statistical Society, Series C (Applied Statistics) 37(1), pp. 87–94.
egammaCensored, GammaDist, egamma, estimateCensored.object.
# Chapter 15 of USEPA (2009) gives several examples of estimating the mean
# and standard deviation of a lognormal distribution on the log-scale using
# manganese concentrations (ppb) in groundwater at five background wells.
# In EnvStats these data are stored in the data frame
# EPA.09.Ex.15.1.manganese.df.
# Here we will estimate the mean and coefficient of variation
# ON THE ORIGINAL SCALE using the MLE and
# assuming a gamma distribution.
# First look at the data:
#-----------------------
EPA.09.Ex.15.1.manganese.df
# Sample Well Manganese.Orig.ppb Manganese.ppb Censored
#1 1 Well.1 <5 5.0 TRUE
#2 2 Well.1 12.1 12.1 FALSE
#23 3 Well.5 3.3 3.3 FALSE
#25 5 Well.5 <2 2.0 TRUE
longToWide(EPA.09.Ex.15.1.manganese.df,
"Manganese.Orig.ppb", "Sample", "Well",
paste.row.name = TRUE)
# Well.1 Well.2 Well.3 Well.4 Well.5
#Sample.1 <5 <5 <5 6.3 17.9
#Sample.2 12.1 7.7 5.3 11.9 22.7
#Sample.3 16.9 53.6 12.6 10 3.3
#Sample.4 21.6 9.5 106.3 <2 8.4
#Sample.5 <2 45.9 34.5 77.2 <2
# Now estimate the mean and coefficient of variation
# using the MLE, and compute a confidence interval
# for the mean using the profile-likelihood method.
#---------------------------------------------------
with(EPA.09.Ex.15.1.manganese.df,
egammaAltCensored(Manganese.ppb, Censored, ci = TRUE))
#Results of Distribution Parameter Estimation
#Based on Type I Censored Data
#Assumed Distribution: Gamma
#Censoring Side: left
#Censoring Level(s): 2 5
#Estimated Parameter(s): mean = 19.664797
# cv = 1.252936
#Estimation Method: MLE
#Data: Manganese.ppb
#Censoring Variable: Censored
#Sample Size: 25
#Percent Censored: 24%
#Confidence Interval for: mean
#Confidence Interval Method: Profile Likelihood
#Confidence Interval Type: two-sided
#Confidence Level: 95%
#Confidence Interval: LCL = 12.25151
# UCL = 34.35332
#----------
# Compare the confidence interval for the mean
# based on assuming a lognormal distribution versus
elnormAltCensored(Manganese.ppb, Censored,
ci = TRUE))$interval$limits
# LCL UCL
#12.37629 69.87694
egammaAltCensored(Manganese.ppb, Censored,
EnvStats documentation built on March 18, 2022, 5:39 p.m.
Related to egammaAltCensored in EnvStats...
EnvStats index
Package overview README.md
|
CommonCrawl
|
Home Journals RIA Deep Convolution Features in Non-linear Embedding Space for Fundus Image Classification
Source Normalized Impact per Paper (SNIP) 2019: 0.256 ℹSource Normalized Impact per Paper(SNIP):
Deep Convolution Features in Non-linear Embedding Space for Fundus Image Classification
Venkatesulu Dondeti* | Jyostna Devi Bodapati | Shaik Nagur Shareef | Veeranjaneyulu Naralasetti
Department of CSE, Vignan's Foundation for Science Technology and Research, Vadlamudi 522213, India
Department of IT, Vignan's Foundation for Science Technology and Research, Vadlamudi 522213, India
Corresponding Author Email:
[email protected]
https://doi.org/10.18280/ria.340308
| Citation
34.03_08.pdf
A machine learning model is introduced to recognize the severity level of the Diabetic Retinopathy (DR), a disease observed in the people suffering from diabetes for a long time and is one of the causes of vision loss and blindness. Major objective of this approach is to generate an effective feature representation of the fundus images so that the level of severity can be identified with less effort and using limited number of samples for training. Color fundus images of the retina are collected, preprocessed and deep features are extracted by feeding them to a deep Convolutional Network, Neural Architecture Search Network (NASNet) which searches for the best convolutional layer (or "cell") in NASNet search space. The representations of retinal images in deep space are given as input to the classification model to get the severity level of the disease. The proposed model is applied on the benchmark APTOS 2019 retinal fundus image dataset to evaluate the performance of the proposed model. Our experimental studies indicate that ⱱ-Support Vector Machine (ⱱ-SVM) when trained using the projected deep features leads to an improvement in accuracy compared to other machine learning models for fundus image classification. In addition, from the experimental studies we understand that deep features from NASNet give better representation compared to the handcrafted features and features obtained using other projections. We observe that deep features transformed using t-distributed stochastic neighbor embedding (t-SNE) gives more discriminative representations of retinal images and help to achieve an accuracy of 77.90%.
Diabetic Retinopathy (DR), Radial Basis Kernel (RBF), Neural Architecture Search Network (NASNet) features, deep features, ⱱ-Support Vector Machine (SVM), t-SNE
Diabetic Retinopathy (DR) is one of the prevalent diseases found in Asian countries in humans who maintain high blood glucose levels for more than 10 years. Diabetic Mellitus (DM) is a group of metabolic disorders mostly seen in the people who suffer from high blood sugar levels for a long time [1]. As per the statistics [2], DR is the leading cause of global blindness in the recent past. According to a published by WHO in 2013, around 382 million people are suffering from DR and by 2015 this number will increase to 592 million. So, it should be detected in the early stages to prevent blindness. Small lesions are observed in the eyes of affected people which leads to irreversible blindness. Depending on the type of lesion, the severity of the disease varies, including irregular retinal blood vessels, MicroAneurysms (MAs), cotton wool spots, Hard Exudates (HE), and haemorrhages. Figure 1 indicates the types of lesions that could be developed in the eyes of people affected by DR.
DR is not found in all the people with diabetes and is commonly observed in those who suffer from diabetes for more than ten years [3]. Depending on the extent of the disease, DR can be classified into one of the four types: mild (R1), moderate (R1), moderately severe (R2), severe (R2), and proliferative (R3). Micro aneurysms can be observed in the eyes during the early stages of DR. This early stage is known to be mild DR. As the disease progresses to a moderate level, it is possible to experience swelling in the blood vessels that causes blurred vision. Again, extreme stages are graded into Non-Proliferative (NPDR) and Proliferative (PDR). Abnormal growth of blood vessels may be observed during the NPDR stage. This stage is serious due to the severe blockage of blood vessels. In addition to the wide retinal rupture leading to complete vision loss, PDR is the advanced stage of DR retinal detachment that can be observed [4]. Figure 1 shows the retinal samples with different severity levels of DR.
Before the implementation of automation in DR diagnosis tools, the patient's retinal scan is taken during the treatment of the disease and is manually examined by experts to ascertain if the person is affected by DR. If the patient is affected by DR, further tests must be carried out to determine the extent of the disease. Before the advent of AI and ML algorithms, all this process used to be done manually and it is very helpful to ophthalmologists if the process is automated.
In recent years, several tools have been developed to automate DR detection based on algorithms for machine learning. The first level of DR detection automation focuses on detecting the hard exudates (HE) provided with retinal scan images of diabetic patients. Long [5] has developed a method for automating the HE spots by using SVM and dynamic thresholding methods. An approach that uses a combination of fuzzy C-means and SVM is used to detect HE from the given retinal photographs in question. In addition to detecting HE from the retinal image, the severity level of the disease is also incorporated to make the system more sophisticated [6], SVM based classifiers are adapted to find cotton wool spots on the retinal image.
Figure 1. Sample retinal images effected by diabetic retinopathy representing different types of lesions
Another class of methods focuses on the identification of microaneurysms (MAs) in the retinal images. The presence of MA indicates the severity of the DR disease, a lot of work has been done to identify MA from the given retinal images. Eftekheri [7] has implemented a deep learning-based approach to spot MA in the retinal scan images of diabetic patients.
Van Grinsven et al. [8] suggest another deep learning approach to hemorrhage detection. In their work, they have introduced a sampling method to speed up training of Convolution neural network (CNN) as an application with the aid of DR. Srivastava et al. [9] introduced a bounding box method to identify the region of interest in the retinal image. A deep neural-based method for finding MA is introduced in [10], a max-out activation method is introduced that enhances the efficiency of the model. According to the study [11], among the deep architectures, Neural Architecture Search network (NASNet) outperforms other deep convolution networks.
All current methods for DR use hand-crafted features that are domain dependent. The efficiency of these models depends largely on the knowledge of the domain. We establish a machine learning based approach to identify severity level of DR in diabetic patients by leveraging transfer learning approaches for feature extraction and existing machine learning algorithms for severity level identification. Prominent features of retinal images are obtained in the proposed approach by sending the fundus images via a deep CNN model. Then, to obtain a compact representation of the retinal images, these deep representations are transformed into a different dimensional space. The final representations are transferred to the classification algorithms to obtain the severity levels of DR. NASNet, a pre-trained CNN is used to extract features in the deep space and the directions of projection are obtained using t-SNE. We obtain the most discriminatory features as t-SNE projects the deep features in the most discriminatory directions. As t-SNE projects the deep features in the most discriminatory directions, the proposed produces the most discriminatory features. A machine learning model is trained on these projected features to get the severity level of DR. Our experimental studies on Asia Pacific Tele-Ophthalmology Society (APTOS) 2019 dataset shows that ⱱ-SVM trained using the t-SNE features leads to an improved accuracy of 77.90%.
Our model is different from the existing in the way how the features are provided. As we know, none of the existing models explore the projection of deep features in a lower-dimensional space using t-SNE to get an improved representation of the fundus images. The proposed model produces non-linear features suitable for DR classification.
The principal contributions of the work proposed are as follows:
Established a robust model for fundus image classification
Use of deep features to obtain essential image properties
Use t-SNE to transform the deep features in the reduced dimensions to obtain features in the reduced dimension without any loss of discriminatory information
This paper is organized as follows: Section 2 gives a brief review of various methods used for diabetic retinopathy with special focus on the models that use feature projection techniques; The proposed model is described in Section 3; The dataset used to evaluate the performance of the model along with the discussion on Experimental results is provided in Section4; We give a conclusion with few insights on the future directions at the end of this article.
2. Related Work
This section reviews the traditional models for the task of DR severity identification. An easy to remember the scientific approach for DR severity identification has been implemented [12]. Akram et al. [13] introduced a hybrid classifier by using both GMM and SVM as an ensemble model to boost the accuracy of the model. The same approach has been modified by augmenting the feature set with shape, intensity, and statistics of the affected region [14]. A random forest-based solution is proposed in the studies [15, 16]. Segmentation based approaches were proposed by Welikala [17]. A genetic algorithm-based feature extraction method is introduced by Roychowdhury et al. [18]. Numerous shallow Classifiers, such as the Gaussian Mixture Models (GMM), K-Nearest Neighborhood (KNN), Support Vector Machine (SVM), and AdaBoost are analyzed [19] to differentiate lesions from non-lesions. A hybrid feature extraction-based approach has been proposed in the study [20].
Recently the research focus has shifted to deep learning models for the identification of DR severity tasks. A large dataset consisting of 1,28,175 retinal images is used and trained using deep CNN [21]. The data augmentation approach is used to generate data on CNN architecture [22]. Fuzzy models are used as part of their framework [23], a hybrid model based on fuzzy logic, Hough transform, and other techniques for dimension reduction. A mixture of fuzzy C-means and deep CNN architectures are used in the study [24].
Feature reduction plays a significant role in reducing the feature dimensions of the input data while preserving details on discrimination [25]. Principal Component Analysis (PCA) is an unsupervised dimensional reduction technique, in which features are projected in high variance directions. There is a risk of losing class discrimination knowledge when we project the data onto these high variance directions [26]. Linear Discriminant Analysis (LDA) is a supervised dimensional reduction technique that requires class information of the samples and the data can be projected to a maximum of n-1 directions where n is the number of classes. In the case of LDA, the number of directions for projection may not be sufficient, especially when the number of classes is small, and the original feature size is large [27]. Visualization is helpful to understand the nature of data, but it is not possible to visualize data of higher dimensions especially it is not possible to visualize data with more than 3 dimensions.
Figure 2. Architectural details of the proposed system, representing different stages of the proposed method
In the recent past, a non-linear dimension reduction called t-SNE has been widely used to visualize high-dimensional data [28]. It is extensively used in various applications such as facial expression recognition, text classification and tumor identification in Magnetic Resonance (MR) images. In the proposed model we use t-SNE based transformations for reduction of deep features that are huge without loss of class discrimination information. Different from existing methods deep features from retinal images are extracted and then ⱱ-SVM is used for classification. Deep features are projected using t-SNE transformations to enhance the discrimination in the features.
3. Proposed Fundus Image Classification
Main objective of this work is to establish a robust model to find the severity of retinopathy given the retinal fundus images as input. As the pre-processing of the images increases the time to generate the result during testing, we avoid too much pre-processing of the retinal images. Rather than applying complex pre-processing steps, we focus on extracting the features that are most appropriate for classification. Figure 2 shows the detailed architecture of the proposed model.
Pre-processing fundus images: All the fundus images collected may not have the same shape but during the next stage, feature extraction module expects all the images in a fixed size. To satisfy this requirement all the fundus images are reshaped so that all the images are of the same size.
Deep Feature extraction: After pre-processing of the images, they are ready to extract features. NASNet, one of the deep convolution architectures, is used to extract deep features of size 4032. Since the number of features in the deep feature space is vast, there is a possibility that the machine learning models will over-fit when we train the models using deep features. t-SNE is used to transform deep features to reduce dimensional space to avoid overfitting of the models.
t-SNE converts the Euclidean distance between the data points in higher dimensional space to conditional probabilities in the lower dimensional space. By doing this t-SNE can retain both local and global structure of the data in the lower dimensional representation.
The conditional probability, P(xj|xi), represents the similarity between the two samples xi and xj in the high dimensional space and is computed as follows:
$P\left(x_{i} \mid x_{j}\right)=\frac{\exp \left(-\frac{-\left|x_{i}-x_{j}\right|^{2}}{2 \sigma^{2}}\right)}{\sum_{k \neq l} \exp \left(-\frac{-\left|x_{k}-x_{l}\right|^{2}}{2 \sigma^{2}}\right)}$ (1)
Let the yi and yj be the samples in the lower dimensional space that are corresponding to data points xi and xj in the higher dimensional space and Q(yj|yi), is then computed as follows:
$Q\left(y_{j} \mid y_{i}\right)=\frac{\left(1+\left|y_{i}-y_{j}\right|^{-1}\right)}{\sum_{k \neq l}\left(1+\left|y_{k}-y_{l}\right|^{-1}\right)}$ (2)
In Eq. (2), Q(yj|yi) represents Student t-distribution with 1-degree of freedom.
The measure of similarity between the joint probability distributions P and Q indicates how well the data points yi and yj model the similarity between the high-dimensional data points xi and xj in lower dimension space and is referred to as symmetric neighborhood embedding. Kullback-Leibler (KL) divergence is a good measure to indicate similarity between the distributions P and Q and the cost function of t-SNE can be formulated as:
$C=K L(P \| Q)=\sum_{i} \sum_{j} P_{j i} \log \frac{P_{j i}}{q_{j i}}$ (3)
This transformation produces mappings that are invariant to changes in the scale in the lower dimensions. Deep features of 4032 dimensions are projected to three dimensions using t-SNE transformation.
Model training and evaluation module: The features in the reduced space are fed to the classification models for the identification of severity levels in DR. Figure 3 shows the details of the feature extraction module and Figure 4 shows the details of the classification module.
The proposed model uses ⱱ-SVM to classify the retinal images. The advantage of using ⱱ-SVM over the C-SVM for classification is that finalizing the parameters in the case of the ⱱ-SVM is much simpler [29]. In the case of C-SVM, the value of C can take any value from 0 to infinity and can be a bit hard to estimate and use [30]. A modification to the C-SVM objective and the addition of the ⱱ parameter in the objective makes the parameter tuning simple [31]. The value of C can take the values within 0-1 as the search space is much smaller compared to the value of 0 to infinity in the case of C-SVM. The value of ⱱ indicates the lower and upper bounds on the number of data points that lie on the wrong side of the hyperplane.
Figure 3. Stages of non-linear feature projection module
Figure 4. Model training and performance evaluation module
4. Experimental Results
In this section, we address the effectiveness of the proposed model with the supporting experimental results.
Summary of the dataset: For experimental studies, we use APTOS 2019 blindness detection dataset available on the Kaggle website [32] to demonstrate the performance of the proposed model. The dataset is a large array of retinal fundus images under various imaging conditions. The images are manually graded into one of the severity levels. The images were collected over an extended period from several clinics using a range of cameras to add further variations.
The dataset comprises of a total of 3662 retinal images belonging to the affected and unaffected images. Out of the total images 1805 are not affected, there are 370, 999, 193, 295 images are available in mild, moderate, severe and proliferate severity levels among the affected images in each severity level. From each class 80% of the fundus images are used for training and the remaining 20% are used for testing. Table 1 summarizes the dataset used for the model evaluation.
Our initial experiments are carried out to check the effect of hand-crafted features such as Histogram Of oriented Gradients (HoG), Linear Binary Patterns (LBP), and Gist on the classification of DR severity. We use conventional machine learning models such as logistic regression, KNN, Naïve Bayes, C-SVM, and ⱱ-SVM to model the hand-crafted features extracted from retinal images. Different evaluation metrics such as accuracy, precision, recall, and F-scores of the models are reported to assess the performance of the models.
Table 2 shows the performance of various machine learning models trained using hand-crafted features. Table 2 shows that HoG features give a worse performance, while LBP and Gist models are comparable. Gist representations are better with an accuracy of 74.49%, and an F1 score of 70. As a result, we understand that handcrafted features require domain knowledge, and lack of such knowledge results in poor performance. SVM outperforms other models, regardless of the type of features used.
Table 1. Summary of the dataset
Dataset Summary
Unaffected (No DR)
Affected (DR)
Mild DR
Moderate DR
Severe DR
Proliferative DR
Table 2. Performance of various machine learning models trained using handcrafted features
F1 Score
Naive Bayes
C-SVM
ⱱ-SVM
Table 3. Performance of various machine learning models trained using deep features
Table 4. Performance of SVM model trained using features projected on to different spaces
Feature space
LDA space
PCA space
t-SNE space
In the next experiment, we compare the performance of deep feature representations extracted using transfer learning. NASNet is used as a pre-trained model to extract the deep features of every retinal image. For each image, we have 4032 features, which are fed to the classification models to get the level of severity. The logistic regularization with C value is set to 4 and L2 regularization is used. In KNN the value of K is set to 10. The decision criteria used in decision tree models is the Gini index. C-SVM is used, with C value as 50, and gamma value is set to 0.001 while in ⱱ-SVM, the value of ⱱ is set to 0.15.
From Table 3, we can see that logistic regression outperforms the rest of the models. The performance of SVM is comparable to logistic regression. Using deep features, the accuracy is increased from 74.49% to 77.22% which is significantly better.
In the next experiment, we will check the performance of deep features projected onto different spaces. For feature projections, we use PCA, LDA, and t-SNE. In all the cases, ⱱ-SVM with RBF kernel is used as a classification model and ⱱ-value is set to 0.15.
From Table 4, we can observe that deep features when projected onto other dimensional spaces can improve the performance of the models. After projection, the performance of the model either improved or did not change without deterioration in accuracy. Projections using t-SNE are superior representations for retinal images.
Figure 5 shows the performance of the DR identification model when the NASNet features are projected onto different feature spaces like LDA, PCA and t-SNE. From the experiments we carried out in this section we understand that the deep features extracted from NASNet offer discriminatory features compared to hand-crafted features. Such features become more discriminatory when the transformation of t-SNE is used to project them.
In support of our arguments, we present the visualizations of features in various projection spaces, such as PCA, LDA, and t-SNE.
Figure 6 shows that, in 2-D space, the features are discriminatory, especially the DR affected images are well separable from those that are not affected. These features are obtained by setting the number of components as 2, perplexity as 30.0, and using Euclidean as a distance metric.
Figure 5. Accuracy of the DR recognition model after projection onto different spaces
Figure 6. Deep features transformed to two-dimensional space using t-SNE
5. Conclusion and Future Directions
Major objective of this work is to establish an automated DR severity level prediction model based on the retinal images of diabetic patients. We focus on extracting appropriate features from the retinal images as the feature representations impact the performance of the machine learning models to a greater extent. In this work the deep features extracted from the NASNet are projected onto the t-SNE space to obtain lower dimensional representations. As these features undergone several non-linear operations, they can better learn the lesions present in the retinal images and hence helps to improve the performance of the models. Our experimental studies on the APTOS benchmark dataset show that the proposed features result in better accuracy, precision recall and F1-scores compared to the representations in the PCA and LDA space. The power of deep learning and compact representation together with the robustness of the SVM models, make the proposed approach more powerful with reduced misclassifications.
In future we would like to test the representation power of various deep CNN architectures like GoogLeNet, ResNet and VGG. In addition, we also have plans to test the model performance on large scale datasets.
[1] Sayres, R., Taly, A., Rahimy, E., Blumer, K., Coz, D., Hammel, N., Krause, J., Narayanaswamy, A., Rastegar Z., Wu, D., Xu, S., Barb, S., Joseph, A. Shumski M., Smith, J., Sood, A.B., Corrado, G.S., Peng, L., Webster, D.R. (2019). Using a deep learning algorithm and integrated gradients explanation to assist grading for diabetic retinopathy. Ophthalmology, 126(4): 552-564. https://doi.org/10.1016/j.ophtha.2018.11.016
[2] Flaxman, S.R. (2017). Global causes of blindness and distance vision impairment 1990-2020: A systematic review and meta-analysis. The Lancet Global Health, 5(12): e1221-e1234. https://doi.org/10.1016/S2214-109X(17)30393-5
[3] Gulshan, V., Peng, L., Coram, M. (2016). Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. Jama, 316(22): 2402-2410. https://doi.org/10.1001/jama.2016.17216
[4] Williams, R., Airey, M., Baxter, H., Forrester, J., Kennedy-Martin, T., Girach, A. (2004). Epidemiology of diabetic retinopathy and macular oedema: A systematic review. Eye, 18(10): 963-983. https://doi.org/10.1038/sj.eye.6701476
[5] Long, S.C., Huang, X.X., Chen, Z.Q., Pardhan, S., Zheng, D.C. (2019). Automatic detection of hard exudates in color retinal images using dynamic threshold and SVM classification: Algorithm development and evaluation. Pattern Recognition in Medical Dicision Support, 2019: 3926930. https://doi.org/10.1155/2019/3926930
[6] Haloi, M., Samarendra, D., Rohit S. (2015). A Gaussian scale space approach for exudates detection, classification and severity prediction. arXiv preprint arXiv:1505.00737
[7] Eftekhari, N. (2019). Microaneurysm detection in fundus images using a two-step convolutional neural network. Biomedical Engineering Online, 18(1): 67. https://doi.org/10.1186/s12938-019-0675-9
[8] Van, G., Mark, J.J.P. (2016). Fast convolutional neural network training using selective data sampling: Application to hemorrhage detection in color fundus images. IEEE Transactions on Medical Imaging, 35(5): 1273-1284. https://doi.org/10.1109/TMI.2016.2526689
[9] Srivastava, R. Duan, L., Wong, D.W.K., Liu, J., Wong, T.Y. (2017). Detecting retinal microaneurysms and hemorrhages with robustness to the presence of blood vessels. Computer Methods and Programs in Biomedicine, 138: 83-91. https://doi.org/10.1016/j.cmpb.2016.10.017
[10] Haloi, M. (2015). Improved microaneurysm detection using deep neural networks. arXiv preprint arXiv:1505.04424.
[11] Wu, L., Sauma, J., Hernandez-Bogantes, E., Masis, M.(2013). Classification of diabetic retinopathy and diabetic macular edema. World Journal of Diabetes, 4(6): 290. https://doi.org/10.4239/wjd.v4.i6.290
[12] Akram, M.U., Khan, S.A. (2013). Identification and classification of microaneurysms for early detection of diabetic retinopathy. Pattern Recognition, 46(1): 107-116. https://doi.org/10.1016/j.patcog.2012.07.002
[13] Akram, M.U., Khalid, S., Tariq, A., Khan, S.A., Azam, F. (2014). Detection and classification of retinal lesions for grading of diabetic retinopathy. Computers in Biology and Medicine, 45: 161-171. https://doi.org/10.1016/j.compbiomed.2013.11.014
[14] Casanova, R., Saldana, S., Chew, E.Y., Danis, R.P., Greven, C.M., Ambrosius, W.T. (2014). Application of random forests methods to diabetic retinopathy classification analyses. PLoS One, 9(6): e98587. https://doi.org/10.1371/journal.pone.0098587
[15] Verma, K., Deep, P., Ramakrishnan, A.G. (2011). Detection and classification of diabetic retinopathy using retinal images. 2011 Annual IEEE India Conference, Hyderabad, India, pp. 1-6. https://doi.org/10.1109/INDCON.2011.6139346
[16] Welikala, R.A. (2014). Automated detection of proliferative diabetic retinopathy using a modified line operator and dual classification. Computer Methods and Programs in Biomedicine, 114(3): 247-261. https://doi.org/10.1016/j.cmpb.2014.02.010
[17] Welikala, R.A (2015). Genetic algorithm-based feature selection combined with dual classification for the automated detection of proliferative diabetic retinopathy. Computerized Medical Imaging and Graphics, 43: 64-77. https://doi.org/10.1016/j.compmedimag.2015.03.003
[18] Roychowdhury, S., Koozekanani, D.D., Parhi, K.K. (2013). DREAM: Diabetic retinopathy analysis using machine learning. IEEE Journal of Biomedical and Health Informatics, 18(5): 1717-1728. https://doi.org/10.1109/JBHI.2013.2294635
[19] Mookiah, M.R.K. Acharya, U.R., Martis, R.J., Chua, C.K. (2013). Evolutionary algorithm-based classifier parameter tuning for automatic diabetic retinopathy grading: A hybrid feature extraction approach. Knowledge-Based Systems, 39: 9-22. https://doi.org/10.1016/j.knosys.2012.09.008
[20] Bodapati, J.D., Naralasetti, V., Shareef, S.N., Hakak, S. Bilal, M., Maddikunta, P.K.R., Jo, O. (2020). Blended multi-modal deep ConvNet features for diabetic retinopathy severity prediction. Electronics, 9(6): 914. https://doi.org/10.3390/electronics9060914
[21] Pratt, H. (2016). Convolutional neural networks for diabetic retinopathy. Procedia Computer Science, 90: 200-205. https://doi.org/10.1016/j.procs.2016.07.014
[22] Rahim, S.S., Palade, V., Shuttleworth, J., Jayne, C. (2016). Automatic screening and classification of diabetic retinopathy and maculopathy using fuzzy image processing. Brain Informatics, 3(4): 249-267. https://doi.org/10.1007/s40708-016-0045-3
[23] Dutta, S. (2018). Classification of diabetic retinopathy images by using deep learning models. International Journal of Grid and Distributed Computing, 11(1): 89-106. http://dx.doi.org/10.14257/ijgdc.2018.11.1.09
[24] Laurens van der, M., Hinton, G. (2008). Visualizing data using t-SNE. Journal of Machine Learning Research, 2579-2605.
[25] Bianco, S., Cadene, R., Celona, L., Napoletano, P. (2018). Benchmark analysis of representative deep neural network architectures. IEEE Access, 6: 64270-64277. https://doi.org/10.1109/ACCESS.2018.2877890
[26] Bodapati, J.D., Naralasetti, V. (2019). Feature extraction and classification using deep convolutional neural networks. Journal of Cyber Security and Mobility, 8(2): 261-276. https://doi.org/10.13052/jcsm2245-1439.825
[27] Bodapati, J.D., Kishore, K.V.K., Veeranjaneyulu, N. (2010). An intelligent authentication system using wavelet fusion of K-PCA, R-LDA. International Conference On Communication Control And Computing Technologies, Ramanathapuram, India, pp. 437-441. https://doi.org/10.1109/ICCCCT.2010.5670591
[28] Bodapati, J.D., Veeranjaneyulu, N. (2016). Performance of different models in non-linear subspace. 2016 International Conference on Signal and Information Processing (IConSIP), Vishnupuri, pp. 1-4. https://doi.org/10.1109/ICONSIP.2016.7857483
[29] Bodapati, J.D., Veeranjaneyulu, N. (2017). Abnormal network traffic detection using support vector data description. Proceedings of the 5th International Conference on Frontiers in Intelligent Computing: Theory and Applications, Singapore. https://doi.org/10.1007/978-981-10-3153-3_49
[30] Bodapati, J.D., Naralasetti, V. (2019). Facial emotion recognition using deep CNN based features. International Journal of Innovative Technology and Exploring Engineering, 8(7): 1928-1931.
[31] Kaggle, APTOS Challenge Dataset. https://www.kaggle.com/c/aptos2019-blindness-detection, accessed on Dec. 30, 2019.
[32] Deepika, Kancherla, Jyostna Devi Bodapati, and Ravu Krishna Srihitha. (2019). An efficient automatic brain tumor classification using LBP features and SVM-based classifier. Proceedings of International Conference on Computational Intelligence and Data Engineering, Singapore, pp. 163-170. https://doi.org/10.1007/978-981-13-6459-4_17
|
CommonCrawl
|
Journal of Cardiovascular Magnetic Resonance
Cardiovascular magnetic resonance assessment of acute cardiovascular effects of voluntary apnoea in elite divers
L. Eichhorn ORCID: orcid.org/0000-0002-9769-06391 na1,
J. Doerner2,3 na1,
J. A. Luetkens2,
J. M. Lunkenheimer2,
R. C. Dolscheid-Pommerich4,
F. Erdfelder1,
R. Fimmers5,
J. Nadal5,
B. Stoffel-Wagner4,
H. H. Schild2,
A. Hoeft1,
B. Zur4 na1 &
C. P. Naehle2,3 na1
Journal of Cardiovascular Magnetic Resonance volume 20, Article number: 40 (2018) Cite this article
Prolonged breath holding results in hypoxemia and hypercapnia. Compensatory mechanisms help maintain adequate oxygen supply to hypoxia sensitive organs, but burden the cardiovascular system.
The aim was to investigate human compensatory mechanisms and their effects on the cardiovascular system with regard to cardiac function and morphology, blood flow redistribution, serum biomarkers of the adrenergic system and myocardial injury markers following prolonged apnoea.
Seventeen elite apnoea divers performed maximal breath-hold during cardiovascular magnetic resonance imaging (CMR). Two breath-hold sessions were performed to assess (1) cardiac function, myocardial tissue properties and (2) blood flow. In between CMR sessions, a head MRI was performed for the assessment of signs of silent brain ischemia. Urine and blood samples were analysed prior to and up to 4 h after the first breath-hold.
Mean breath-hold time was 297 ± 52 s. Left ventricular (LV) end-systolic, end-diastolic, and stroke volume increased significantly (p < 0.05). Peripheral oxygen saturation, LV ejection fraction, LV fractional shortening, and heart rate decreased significantly (p < 0.05). Blood distribution was diverted to cerebral regions with no significant changes in the descending aorta. Catecholamine levels, high-sensitivity cardiac troponin, and NT-pro-BNP levels increased significantly, but did not reach pathological levels.
Compensatory effects of prolonged apnoea substantially burden the cardiovascular system. CMR tissue characterisation did not reveal acute myocardial injury, indicating that the resulting cardiovascular stress does not exceed compensatory physiological limits in healthy subjects. However, these compensatory mechanisms could overly tax those limits in subjects with pre-existing cardiac disease. For divers interested in competetive apnoea diving, a comprehensive medical exam with a special focus on the cardiovascular system may be warranted.
This prospective single-centre study was approved by the institutional ethics committee review board. It was retrospectively registered under ClinicalTrials.gov (Trial registration: NCT02280226. Registered 29 October 2014).
Hypoxia is associated with significant changes to the cardiovascular system. It is known from animal studies that hypoxia is associated with increased mean arterial blood pressure and altered myocardial morphometry due to increased left ventricular (LV) end-diastolic pressure with lengthening of end-diastolic and end-systolic myocardial fibres [1, 2]. In humans, prolonged breath-hold – also termed [voluntary] apnoea – can be used for studying the cardiovascular adaptations to acute dynamic hypoxemia and hypercapnia [3, 4]. Trained apnoea divers are able to achieve breath-hold durations of more than 6 minutes on a regular basis. Apnoea itself leads to hypoxia and hypercapnia, both in turn leading to an activation of the sympathetic nervous system and ultimately causing peripheral vasoconstriction [5]. At the end of apnoea, trained breath-hold divers can achieve hypoxic states with end tidal pO2 levels of < 30 mmHg O2 [6, 7]. Adequate oxygen supply of hypoxia sensitive organs (e. g. the brain) is assured by the so-called diving response, which initiates a preferential redistribution of blood flow to the brain and the heart [3, 8]. The diving response comprises bradycardia and peripheral vasoconstriction, with the latter causing severe hypertension. In some cases cardiac complications such as cardiac arrhythmias were observed [8, 9], possibly due to a transient, but marked LV dilation during prolonged apnoea [10]. Whether voluntary apnoea with its cardiovascular burden leads to measurable changes of cardiac biomarkers, especially those with high sensitivity, as a precursor of cardiomyocyte injury has not been evaluated yet.
Cardiovascular magnetic resonance (CMR) is a standard non-invasive method for functional analysis of the heart [11], which allows for a high-resolution, three-dimensional anatomical and functional visualization of the heart. Furthermore, CMR facilitates quantitative assessment of blood flow in the vascular system and can therefore determine fast and repetitive blood flow distribution under apnoea.
The aim of this study was to investigate cardiovascular effects during maximal apnoea in elite healthy subjects and to determine the accompanied blood flow redistribution. Cardiac biomarkers and serum markers of the adrenergic system were determined. Cerebral MRI was performed after maximal apnoea to detect potential brain injury.
Inclusion and exclusion criteria
Inclusion criteria were experience in apnoea diving with a minimum breath-hold time of 270 s, a minimum age of 18 years, an unremarkable history of cardiac and lung disease, and the absence of long-term medication. Exclusion criteria were any contraindication to CMR or any known heart or lung disease. Participants were required not to drink caffeine-containing drinks and were instructed not to eat at least 8 h before the examination.
All study subjects received an information sheet 14 days prior to the study. Informed consent was obtained from all participants prior to study inclusion. Participants were questioned about training protocols and diving experience.
The study protocol comprised two apnoea sessions of individual maximum breath-hold combined with CMR measurements (one "functional cardiac session" and one "flow session"). Participants were asked to perform their usual pre-apnoea routines (yoga and breathing exercises). Fifteen minutes before CMR the participants were required to stop with their individual exercises and to breathe normally. A maximum of three deep inspirations prior to the final breath-hold was allowed. Hyperventilation was not allowed. Apnoea was performed as long as the individual subjects were able to withstand the breathing reflex.
This "individual" approach close to personal best breath-hold amplifies the redistribution effects and supposedly exhibits the maximum effect on the cardiovascular system. Additionally, a brain MRI was performed at least 30 minutes after the first of the two breath-hold sessions to detect acute brain ischemia. Cardiac biomarkers and catecholamines were evaluated to detect cardiac damage (for time points see Fig. 1).
Illustration of the study protocol (CMR: cardiovascular magnetic resonance)
Magnetic resonance imaging technique
All CMR studies were performed during voluntary breath-hold in maximal inspiration and in supine position using a 1.5 Tesla whole body scanner (Ingenia, Philips Healthcare, Best, The Netherlands). The protocol for the "functional cardiac session" consisted of retrospectively gated balanced steady state free precision cine imaging with 30 cardiac phases per slice. To assess functional changes under apnoea, three short axis (apical, midventricular, basal) cines as well as a vertical long axis cine were acquired repeatedly over the course of apnoea. T2-mapping – indicative of myocardial oedema - using a gradient-spin-echo technique was performed in the same three slices in short axis orientation as the steady state free precision cine images prior and immediately after maximal apnoea [12].
MRI of the brain was performed using a 3 Tesla MRI scanner (Ingenia, Philips Healthcare) using a dedicated head coil. The protocol comprised a transverse T2-weighted turbo spin echo (TSE), a transverse fluid attenuated inversion recovery (FLAIR), a transverse T2*, a sagittal T1-weighted 3D gradient echo, and a transverse as well as a coronal diffusion weighted imaging (DWI) sequence.
The second CMR session was performed at least 4 h after the first "functional" cardiac session. This session was focused on flow measurements in the ascending and descending aorta, the pulmonary trunk, and both common carotid arteries using common 2D phase contrast imaging.
Two radiologists blinded to the study protocol independently evaluated all images. The first (apnoeaearly) and last completed imaging data set (apnoealate) of each maximum apnoea were compared. Global cardiac function at resting conditions (left ventricular end-diastolic volume (LVEDV), end-systolic volume (LVESV), stroke volume (LVSV), and ejection fraction (LVEF) were determined from the short axis (ViewForum, Philips Healthcare) and normalized to the body surface area (BSA) using Mosteller's formula [13]. LVEDV and LVESV were quantified manually by tracing the endocardial borders in all short axis slices. For functional parameters under apnoea LV volumes were determined using the modified Simpson rule. For the assessment of regional cardiac function, fractional shortening (FS) was assessed as described previously [14]. In short, the endocardial distance from the free LV lateral wall to the septal wall was measured in end-diastole (EDD) and end-systole (ESD) in apical, midventricular and basal slices in short axis orientation. FS was then calculated as followed: \( FS=\frac{EDD- ESD}{EDD}\times 100 \). For flow quantification, phase contrast imaging was analysed using dedicated software (ViewForum, Philips Healthcare). Borders of the ascending and the descending aorta, the pulmonary trunk and both common carotid arteries (CCA) were manually traced in the magnitude images and the region of interest was automatically copied to phase images with manual correction performed when deemed necessary. Maximum velocity (Vmax), mean velocity (Vmean), mean flow (Qmean) and absolute stroke volume (SV) was assessed, respectively. T2 relaxation times were extracted from the T2 maps that were generated by using dedicated software (Intellispace Portal 9.0, Philips Healthcare). A circular region of interest (ROI) was then manually placed in the septal and lateral LV wall and averaged.
Cerebral MRI was analysed by the same two radiologists for focal diffusion restrictions as a sign of acute cerebral ischemia, cerebral micro-bleedings, and incidental findings. Peripheral oxygen saturation (SpO2) and heart rate (HR) were measured continuously during both CMR sessions using a CMR-compatible device (Expression MR400, Invivo, Gainesville, Florida, USA).
Estimation of myocardial oxygen demand
Myocardial oxygen demand was estimated using the modified pressure work index as previously described (PWI mod = modified pressure work load index, Psystolic = systolic blood pressure, Pdiastolic = diastolic blood pressure, HR = heart rate, CO = cardiac output, BSA = body surface area) [15]:
$$ PWImod=0.02+\left({P}_{systolic}\times HR\times 8.37\times {10}^{-4}\right)+\left(0.8\times {P}_{systolic}+0.2\times {P}_{diasystolic}\right)\times \frac{CO}{BSA}\times 8\times {10}^{-5}\Big) $$
Urine was collected for baseline measurements 30 min before the first apnoea session started and 4 h thereafter, but before starting the second CMR session. Catecholamine levels, N-terminal pro-hormone of brain natriuretic peptide (NT pro-BNP), brain natriuretic peptide (BNP) and high sensitive troponin (hs-cT) were analysed from venous blood samples taken before, immediately after, 30 min, and 4 h after the first apnoea. All results (urine and blood samples) therefore reflect the effect of the first single breath-hold, but not those of repetitive apnoeal stages.
Laboratory analyses
NT-pro BNP measurements were performed immediately after blood collection under routine conditions with the LOCI™-based NT-proBNP assay for Dimension™ VISTA 1500 (Siemens Healthcare Diagnostics, Eschborn, Germany). For BNP, hs-cT, and catecholamine analysis, aliquots were stored at − 80 °C. BNP and hs-cT were measured using commercially available, specific immunoassays (BNP and STA High Sensitive Troponin-I assay for ARCHITECT™, both Abbott Diagnostics, Wiesbaden, Germany). Plasma catecholamine levels were analyzed using a catecholamine reagent kit (Chromsystems Instruments & Chemicals GmbH; ord. no. 5000; Graefelfing, Germany) with a HPL Chromatography (Waters Corporation, Milford, Massachusetts, USA). Urine catecholamines were analysed by Bio Rad HPLC Agilent 1100 Series (Agilent Technologies, Waldbronn, Germany).
Data are presented as mean +/- standard deviation (SD). Statistical analysis was performed using GraphPad Prism (version 7.02 for Windows, GraphPad Software, La Jolla, California, USA) and SAS version 9.4 (SAS Institut Inc., Cary, North Carolina, USA). Descriptive statistics are summarised as means and standard deviation (± SD). All parameters were compared using paired t-testing. Correlations were calculated using the Spearman's rank correlation analysis. Statistical significance was defined as p < 0.05.
Seventeen elite apnoea divers (15 men) age of 40 ± 11 years were investigated under maximum breath-hold (height 183 ± 8 cm, weight 82 ± 11 kg, BSA 2.0 ± 0.17 kg / m2 and body mass index (BMI) 24.4 ± 2.3 kg /m2). Five of 17 divers regularly participate in various national and international competitions. The training frequency was 2.2 ± 1.6 training sessions a week. Training content and focus varied inter-individually. The breath-hold experience of all athletes was 4.5 ± 2.6 years. The personal breath-hold records were 5:20 ± 0:49 min. No comorbidities were found in any diver. Five divers have had a hypoxic blackout in their history.
Maximal breath-hold time in the "functional cardiac session" was 413 s and maximal breath-hold time in the "flow session" was 483 s. Mean time of breath-hold in the "functional cardiac session" was 297 ± 52 s and 276 ± 80 s in the "flow session" (p = 0.14). SpO2 levels gradually decreased from 99 ± 1% to 74 ± 14% (p < 0.001) in the "functional cardiac session" and from 99 ± 1% to 77 ± 15% (p < 0.001) in the "flow session". No hypoxic loss of consciousness was observed. Physiological data of each participant are listed in Table 1.
Table 1 Demographical and physical characteristics of all apnoea divers
Cardiac functional analysis
In all participants, LVEDV, LVESV, and LVSV increased from onset to the end of apnoea (122.9 ± 24.2 ml vs. 176.9 ± 26.4 ml, p < 0.001; 47.3 ± 16.8 ml vs. 75.9 ± 16.3 ml, p < 0.001; 75.6 ± 16.9 ml vs. 101.0 ± 22.9 ml, p = 0.003) (Fig. 2a), while LVEF (Fig. 2b) decreased from 61.8 ± 9.4% to 56.8 ± 8.2% (p = 0.04). Despite a significant decrease of HR (75 ± 23 bpm vs. 61 ± 12 bpm; p = 0.028) (Fig. 2c), LV-CO remained unchanged (5.5 ± 1.6 l/min vs. 6.1 ± 1.7 l/min, p = 0.88) (Fig. 2d). A representative image series of gradual LV enlargement is shown in Fig. 3. FS decreased over the course of apnoea from 33.0 ± 6.0% to 23.8 ± 4.4% (p < 0.001), with the decrease in the apical slice (41.5 ± 7.4% vs. 27.9 ± 7.2%, p < 0.001) and the midventricular slice (30.7 ± 9.0% vs. 21.6 ± 4.4%, p < 0.001), whereas only a non-significant trend was observed in the basal slice (26.7 ± 7.1% vs. 22.6 ± 5.5%, p = 0.065) (see also Table 2).
Left ventricular changes during apnoea. a) LV volumes: left ventricular volumes. b) LVEF: left ventricular ejection fraction, c) HR: heart rate, d) LVCO: left ventricular cardiac output (*p = < 0.05; **p = < 0.01; ***p = < 0.001; ****p = < 0.0001). Values are expressed as mean ± standard deviation
Representative image showing a progressive LV dilation over the course of apnoea in diastolic heart phase (subject 12)
Table 2 Parameters of CMR functional cardiac session
Changes in HR (∆HR) over the course of apnoea (Fig. 4) showed a significant negative correlation with the changes of LVSV (∆LVSV) (Spearman's rank correlation analysis; correlation coefficient − 0.64, p = 0.008). Additionally, ∆HR over the course of apnoea had a significant negative correlation with the change of LVEDV (∆LVEDV) (Spearman's rank correlation analysis; − 0.59; p = 0.016). This was less prominent when ∆LVEDV was normalized to BSA (Spearman's rank correlation analysis; − 0.55; p = 0.028). In contrast, there was only a weak correlation between ∆HR and an increase in ∆LVESV (− 0,319; p = 0.23).
Correlation of a) ΔHR (Apnoeaearly – Apnoealate) with ΔLVSV (Apnoeaearly – Apnoealate, panel and b) ΔLVEDV (Apnoeaearly – Apnoealate, panel respectively, using Spearman's rank correlation (ΔHR with ΔLVSV: − 0.637, p = 0.008; ΔHR with ΔLVEDV: -0.592923; p = 0.0155). HR: heart rate, LVSV: left ventricular stroke volume; LVEDV: left ventricular end-diastolic volume
Quantitative flow analysis
While SV, Vmax, Vmean, and Qmean increased significantly in the ascending aorta and the pulmonary trunk during apnoea, no changes were observed in the descending aorta, indicating a preferential blood flow distribution to the heart and the brain. In addition, both CCAs showed a significant increase in SV (see Fig. 5), Vmean and Qmean over the course of apnoea. All flow measurements are summarised in Table 3. A relevant shunt was neither observed at rest nor under apnoea (Qp/Qs: 1.06 ± 0.25 vs. 1.06 ± 0.19; p = 0.97).
Stroke volumes of common carotid arteries during course ao apnoea. SV-CCA: stroke volume in common carotid arteries. Values are expressed as mean ± SD
Table 3 Parameters of CMR flow session
Calculation of myocardial oxygen demand and oxygen supply
We found a decrease in HR (76 ± 23 bpm vs. 61 ± 12 bpm) and an increase in LVSV (75.6 ± 16.9 ml vs. 95.1 ± 32.6 ml) in this study. Using a previously reported increase of systolic and diastolic blood pressure from 135 ± 13 mmHg to 185 ± 25 mmHg [16], the estimated myocardial oxygen demand using the modified pressure work index [15] increases from 8.51 ml/min/100 g to 9.48 ml/min/100 g (increase of 11%) during apnoea.
T2 mapping
Compared to baseline values, T2 relaxation times showed no significant change (51.7 ± 2.4 ms vs. 52.6 ± 2.5 ms, p > 0.05).
Laboratory analysis of catecholamine levels, NT pro-BNP, hs-cT, BNP
Serum catecholamine levels (see Fig. 6a, b) showed a fast, increase immediately after apnoea onset (epinephrine from 67.5 ± 23.9 pg/ml to 173.8 ± 113.2 pg/ml (p < 0.001); norepinephrine from 590 ± 197 pg/ml to 2063 ± 1703 pg/ml (p < 0.001)). Serum catecholamine levels returned to baseline values as early as 30 min post apnoea (epinephrine: 50.9 ± 26.6 pg/ml; norepinephrine: 474.5 ± 138.1 pg/ml). Catecholamine levels derived from urine samples still showed a slight, but significant increase 4 h after apnoea compared to baseline conditions (epinephrine from 6.1 ± 2.0 pg/ml to 11.3 ± 6.1 pg/ml, p = 0.003; norepinephrine from 25.0 ± 20.1 pg/ml to 42.3 ± 22.6 pg/ml, p = 0.011).
Serum parameters of a) epinephrine, b) norepinephrine, c) NT pro-BNP and d) high sensitive Troponin (hsTrop) under apnoea
NT pro-BNP increased slightly from baseline levels 45.9 ± 40.3 pg/ml to 49.3 ± 43.3 pg/ml immediately after apnoea (p = 0.011) and to 53.8 ± 49.4 pg/ml (p = 0.037) 4 h after breath-hold (see Fig. 6c). BNP could not be quantified in 7 out of 17 subjects due to values lower than the detection limit (< 10 pg/ml). Overall, there were no significant changes of BNP serum levels at any time point.
Hs-cT increased from baseline until 4 h after apnoea (2.2 ± 1.1 pg/ml vs. 3.1 ± 1.7 pg/ml, p = 0.026) (Fig. 6d). Compared to baseline levels, this resulted in a mean relative Hs-cT increase of 56%, but was still far from any pathological range.
Cerebral MRI
DWI revealed neither acute nor sub-acute signs of cerebral ischemia. In one participant, a clinically irrelevant singular micro-bleeding formation located in the brain stem was observed. In two participants, unilateral fluid collections of the mastoid were observed and reported. No further incidental findings were observed.
In this present study, a holistic approach with state-of-the-art cardiac function evaluation, tissue characterization and biomarker analysis was performed to evaluate myocardial function, thoracic and supra-aortic blood flow, and their changes during maximal individual apnoea. The major findings of our study are a stepwise (1) increase of LVEDV, LVESV, LVSV and an unchanged CO, (2) decrease of LVEF and FS at the end of apnoea, (3) increase of supra-aortic blood flow without concurring flow changes in the descending aorta, and (4) an elevated hs-cT and NT-pro-BNP levels.
In the present study we were able to demonstrate a significant LV dilatation along with an increased LVSV, which is in line with a previous study [17], where increased EDD and ESD, an increase in SV and CO and a reduction in contractile function after an apnoea time of 3.7 ± 0.3 min was reported. In contrast to our results, neither bradycardia nor increased calculated systemic vascular resistance were observed [17], although both effects are part of the accepted concept of the diving response [8]. In a more recent study by Batinic et al., cardiac parameters (i.e. HR, LV volumes, LVEF, LVCO) taken at two time points of apnoea (minute 1 and minute 3) were compared [18]. These investigators found a significant increase in LVEDV and CO (112 ± 15 ml to 125 ± 15 ml; 5.4 ± 1.9 l/min to 6.0 ± 1.2 l/min), which was similar to our results (123 ± 24 ml to 177 ± 26 ml; 5.5 ± 1.6 l/min to 6.1 ± 1.7 l/min). In contrast to the previous results from Pingitore et al. [17] and the results of the present study, no changes in SV were observed (69 ± 12 ml to 69 ± 8 ml), while HR increased from 80 ± 15 bmp to 87 ± 16 bmp during apnoea [18].
Since the mammalian diving response to maximal voluntary apnoea considerably varies depending on the examined individual and the study setup, the at first apparently contradictory results of the three studies might be explained by the breath-hold duration [5, 9, 19]. In contrast to previous studies focusing on physiological changes during apnoea, the breath-hold time in the present study was considerably longer (297 ± 99 s vs. 234 ± 66 s; 199 ± 11 s; 210 ± 70 s) [5, 9, 19]. However, even though individual responses may vary, it is known that physiological changes are most notable at the end of apnoea [9, 20, 21]. In this context it is important to mention that the previous studies [17, 18] used predefined time points for data collection, which will not necessarily coincide with the individual maximum breath-hold duration of each athlete. We have therefore decided to use a minimal breath-hold duration of 270 s to eliminate the possible shortcomings of a too short apnoea duration in the previous studies [5, 9, 19,20,21]. Therefore, one can speculate that the shorter breath-hold durations registered in both previous studies [10, 17] are not suitable to push all compensatory mechanisms to their limits, and that a predefined time point might lead to undersampling. This is further supported by the fact that SpO2 decreased more profoundly in our study compared to the study from Pingitore et al. (from 99 ± 1% to 74 ± 14% vs. 97 ± 0.2% to 84 ± 2%) [17].
We found a relative increase in LVSV of 30 ± 48% during apnoea, but a decrease in FS and LVEF. FS depends on inter-ventricular dimensions and is affected by ventricular filling. Ejection fraction, in contrast, is a relatively load independent surrogate parameter for cardiovascular performance. In general, the efficiency of myocardial performance is determined by preload, afterload and contractility [22, 23]. An increase in afterload will therefore result in decreased efficiency of myocardial performance. In case of prolonged breath-hold the peripheral chemoreflex regulation, the elevated sympathetic nerve activity and the increase in norepinephrine will lead to peripheral vasoconstriction and hypertension [5, 24] and subsequently to bradycardia via the baroreflex [25]. In accordance with this established physiological pathway, we observed a significant increase in norepinephrine levels to above the upper cut-off limit of > 420 pg/ml and a decrease in HR at the end of apnoea. Therefore, the HR decrease and the concommitant increase of both ventricles may be seen as an indirect visualization of the aforementioned baroreflex (Fig. 4).
NT-proBNP was elevated early after maximal apnoea. Although some authors describe BNP as an "emergency" cardiac hormone against ventricular overload [26], the observed elevations of pro-BNP were only minor and far from pathological levels. Nevertheless, in absence of other triggers even this small increase may be regarded as an indicator for LV wall stress. Although an increase in hs-cT was found in this study, the normal T2 relaxation times directly after apnoea may indicate that the increased hs-cT may be more attributable to the LV dilatation and not to acute and persistent myocardial damage. This may further be supported by the fact that elevated cardiac troponin (cT) levels are also commonly found in patients with dilated cardiomyopathy [27].
In addition, myocardial perfusion and oxygen consumption is dependant on various parameters. At the end of apnoea, HR decreases while SV and systolic and diastolic blood pressure increase. These physiological changes translate into an increase of estimated oxygen demand in our study from 8.5 ml/min/100 g to 9.5 ml/min/100 g (i.e. only by 11%). However, this increase in demand may be assumed to be outweighed by a theoretical increase of approximately 40% of coronary perfusion due to increase of the diastolic blood pressure. It is of note that these theoretical considerations are based on healthy subjects without any coronary morbidities.
Clinical context
The human diving response (i.e. bradycardia, peripheral vasoconstriction, increased blood pressure) helps to preserve O2 in case of apnoea [28]. These protective mechanisms against hypoxia are triggered by apnoea per se and are augmented by face immersion [29]. The constriction of intramuscular and dermal vessels results in an increased total peripheral resistance and thus in an increased blood pressure [9, 30]. Due to peripheral vasoconstriction and reduced blood flow, the remaining circulating blood flow is redistributed to more hypoxia sensitive organs such as the brain [19, 25]. In the present study, only minimal blood flow changes were seen in the descending aorta while blood flow in the ascending aorta and the carotid arteries massively increased, indicating that even the gastrointestinal tract is excluded from blood flow redistribution in the case of hypoxia. This perfusion preference of the cerebrum emphasises the efficiency of the body's compensatory mechanism to avoid hypoxic damage of the brain. Accordingly, cerebral MRI showed no case of acute ischemia-induced brain injury, indicating the effectiveness of the compensatory mechanisms, even in case of breath-holds longer than 8 min (Table 1, subject 12).
Prolonged apnoea is not exclusively seen in breath-hold divers, also patients with obstructive sleep apnoea (OSA) show compensatory mechanisms to avoid brain damage [31]. Patients with OSA show an increase in cerebral blood flow [32, 33], elevated sympathetic activity [34], elevated arterial blood pressure [35], and an increase in norepinephrine levels [31]. Interestingly, LV and right ventricular (RV) afterload are increased and cardiac arrhythmia is commonly seen [36]. OSA is independently associated with coronary artery disease, atherosclerosis, hypertension, stroke, endothelial function and myocardial infarction [37, 38]. A main problem in understanding the underlying pathophysiology stems from the lack of an adequate clinical model to simulate OSA [39]. So far, hypoxic gas mixtures have been used to mimic hypoxia in humans [40], but because of the resulting hyperventilation, these models are more representative for high altitude environments than for OSA. In addition, the transmissibility of animal models is also limited. Apnoea divers are mostly free of comorbidities, and our study shows that even a short episode of hypoxia affects the cardiovascular system. Therefore, voluntary extended breath-hold might be taken as a clinical relevant model to simulate short term changes due to hypoxia [41], although the exposure levels to hypoxemia differ significantly [41]. In this context it should be noted that this study was performed with trained athletes, and that a transfer of these findings to patients with cardiovascular diseases and obstructive sleep apnoea should be done with caution.
Patent foramen ovale has been demonstrated to have a higher prevalence in patients with obstructive sleep aponoea compared to healthy controls, and is suspected to inrease nocturnal oxygen desaturation in these patients [42] and to enhance other pathologic conditions associated with OSA [43]. In both scuba and apnoea divers knowledge about the implications of a patent foramen ovale regarding incidence and severity of decompression sickness is scarce [44], especially because it is unkown if recurrent decompression sickness is a result of a patent foramen ovale, the inabiliy to adopt a more conservative diving style, or both [45]. In the present study, no relevant changes of Qp/Qs (the stroke volume in the ascending aorta relative to the stroke volume in the pulmonary trunk) and thus no indication for a cardiac shunt, was found.
Cardiac dysrhythmia or irregular heartbeats (mainly premature ventricular excitations) were observed in 14 of 17 divers at the end of apnoea and during the early recovery phase (example shown in Additional file 1: Figure S1). It is tempting to speculate that the massive LV and RV dilatation triggers cardiac depolarization and repolarization. However, ECG quality was limited in this study and did not allow for a comprehensive analysis.
Measurment accuracy (CMR and SpO2) might be limited at the end of apnoea due to e.g. motion artefacts (CMR), peripheral vasoconstriction (SpO2), and other technical restrictions. Blood pressure data is not available as invasive blood pressure measurement was not performed due to ethical considerations and automatic non-invasive blood pressure measurement failed due to the high and dynamic changes in blood pressure during apnoea. Due to the chosen CMR imaging protocol with only limited coverage of the RV, neither volumetric nor functional RV data are available. Future studies should also focus on effects of hypxoxia on pulmonary vasoconstriction and their effects on the RV function.
Compensatory effects of prolonged apnoea, including a massive LV dilatation and an increase in norepinephrine levels, substantially burden the cardiovascular system. This hemodynamic cardiac stress results in increased hs-cT and NT-pro-BNP, and leads to a reduction of FS. CMR tissue characterisation did not reveal acute myocardial injury, indicating that the resulting cardiovascular stress does not exceed compensatory short-term physiological limits in healthy elite divers. However, these compensatory mechanisms could overly tax those limits in subjects with pre- existing cardiac disease. Also, repetitive apnoea over decades, rather than over years as observed in our study population, may reveal different findings and may have a different impact on the cardiovascular system. For divers interested in competitive apnoea diving, a comprehensive medical exam with a special focus on the cardiovascular system may be warranted.
BMI :
BNP:
Brain natriuretic peptide
BSA:
CCA:
Common carotid arteries
CMR:
Cardiovascular magnetic resonance
Cardiac output
Cardiac troponin
DWI:
Diffusion weighted imaging
End-diastolic dimension
End-diastolic volume
EF:
Ejection fraction
End-systolic dimension
ESV:
End-systolic volume
FLAIR:
Fluid attenuated inversion recovery
FS :
Fractional shortening
HR:
hs-cT:
High sensitive troponin
LV:
Left ventricle/left ventricular
MRI :
NT-pro-BNP :
N-terminal pro-hormone of brain natriuretic peptide
Obstructive sleep apnoea
pO2 :
Partial pressure of O2
PWI:
Pressure work index
Qmean:
Mean flow
ROI:
Region of interest
RV:
Right ventricle/right ventricular
SpO2 :
Peripheral oxygen saturation
Stroke volume
TSE:
Turbo spin echo
Vmax :
Maximum velocity
Vmean :
Mean velocity
Chen L, Sica AL, Greenberg H, Scharf SM. Role of hypoxemia and hypercapnia in acute cardiovascular response to periodic apneas in sedated pigs. Respir Physiol. 1998;111:257–69.
O'Donnell CP, King ED, Schwartz AR, Robotham JL, Smith PL. Relationship between blood pressure and airway obstruction during sleep in the dog. J Appl Physiol. 1994;77:1819–28.
Eichhorn L, Erdfelder F, Kessler F, Doerner J, Thudium MO, Meyer R, et al. Evaluation of near-infrared spectroscopy under apnea-dependent hypoxia in humans. J Clin Monit Comput. 2015;29:749–57.
Eichhorn L, Kessler F, Böhnert V, Erdfelder F, Reckendorf A, Meyer R, et al. A model to simulate clinically relevant hypoxia in humans. J Vis Exp JoVE. 2016:e54933.
Heusser K, Dzamonja G, Tank J, Palada I, Valic Z, Bakovic D, et al. Cardiovascular regulation during apnea in elite divers. Hypertension. 2009;53:719–24.
Overgaard K, Friis S, Pedersen RB, Lykkeboe G. Influence of lung volume, glossopharyngeal inhalation and P(ET) O2 and P(ET) CO2 on apnea performance in trained breath-hold divers. Eur J Appl Physiol. 2006;97:158–64.
Willie CK, Ainslie PN, Drvis I, MacLeod DB, Bain AR, Madden D, et al. Regulation of brain blood flow and oxygen delivery in elite breath-hold divers. J Cereb Blood Flow Metab Off J Int Soc Cereb Blood Flow Metab. 2015;35:66–73.
Lindholm P, Lundgren CE. The physiology and pathophysiology of human breath-hold diving. J Appl Physiol. 2009;106:284–92.
Perini R, Tironi A, Gheza A, Butti F, Moia C, Ferretti G. Heart rate and blood pressure time courses during prolonged dry apnoea in breath-hold divers. Eur J Appl Physiol. 2008;104:1–7.
Marabotti C, Piaggi P, Menicucci D, Passera M, Benassi A, Bedini R, et al. Cardiac function and oxygen saturation during maximal breath-holding in air and during whole-body surface immersion. Diving Hyperb Med. 2013;43:131–7.
Gopal AS, King DL, King DL, Keller AM, Rigling R. Left ventricular volume and endocardial surface area by three-dimensional echocardiography: comparison with two-dimensional echocardiography and nuclear magnetic resonance imaging in normal subjects. J Am Coll Cardiol. 1993;22:258–70.
Sprinkart AM, Luetkens JA, Träber F, Doerner J, Gieseke J, Schnackenburg B, et al. Gradient spin Echo (GraSE) imaging for fast myocardial T2 mapping. J Cardiovasc Magn Reson. 2015;17:12.
Mosteller RD. Simplified calculation of body-surface area. N Engl J Med. 1987;317:1098.
Lewis RP, Sandler H. Relationship between changes in left ventricular dimensions and the ejection fraction in man. Circulation. 1971;44:548–57.
Hoeft A, Sonntag H, Stephan H, Kettler D. The influence of anesthesia on myocardial oxygen utilization efficiency in patients undergoing coronary bypass surgery. Anesth Analg. 1994;78:857–66.
Eichhorn L, Erdfelder F, Kessler F, Dolscheid-Pommerich RC, Zur B, Hoffmann U, et al. Influence of apnea-induced hypoxia on catecholamine release and cardiovascular dynamics. Int J Sports Med. 2017;38:85–91.
Pingitore A, Gemignani A, Menicucci D, Di Bella G, De Marchi D, Passera M, et al. Cardiovascular response to acute hypoxemia induced by prolonged breath holding in air. Am J Physiol Heart Circ Physiol. 2008;294:H449–55.
Batinic T, Utz W, Breskovic T, Jordan J, Schulz-Menger J, Jankovic S, et al. Cardiac magnetic resonance imaging during pulmonary hyperinflation in apnea divers. Med Sci Sports Exerc. 2011;43:2095–101.
Cross TJ, Kavanagh JJ, Breskovic T, Johnson BD, Dujic Z. Dynamic cerebral autoregulation is acutely impaired during maximal apnoea in trained divers. PLoS One. 2014;9:e87598.
Laurino M, Menicucci D, Mastorci F, Allegrini P, Piarulli A, Scilingo EP, et al. Mind-body relationships in elite apnea divers during breath holding: a study of autonomic responses to acute hypoxemia. Front Neuroengineering. 2012;5:4.
Costalat G, Pichon A, Joulia F, Lemaître F. Modeling the diving bradycardia: toward an "oxygen-conserving breaking point"? Eur J Appl Physiol. 2015;115:1475–84.
Hoeft A, Korb H, Hellige G, Sonntag H, Kettler D. The energetics and economics of the cardiac pump function. Anaesthesist. 1991;40:465–78.
Schipke JD. Cardiac efficiency. Basic Res Cardiol. 1994;89:207–40.
Palada I, Obad A, Bakovic D, Valic Z, Ivancev V, Dujic Z. Cerebral and peripheral hemodynamics and oxygenation during maximal dry breath-holds. Respir Physiol Neurobiol. 2007;157:374–81.
Eichhorn L, Erdfelder F, Kessler F, Dolscheid-Pommerich RC, Zur B, Hoffmann U, et al. Influence of apnea-induced hypoxia on catecholamine release and cardiovascular dynamics. Int J Sports Med. 2016;
Nakagawa O, Ogawa Y, Itoh H, Suga S, Komatsu Y, Kishimoto I, et al. Rapid transcriptional activation and early mRNA turnover of brain natriuretic peptide in cardiocyte hypertrophy. Evidence for brain natriuretic peptide as an "emergency" cardiac hormone against ventricular overload. J Clin Invest. 1995;96:1280–7.
Sato Y, Yamada T, Taniguchi R, Nagai K, Makiyama T, Okada H, et al. Persistently increased serum concentrations of cardiac troponin T in patients with idiopathic dilated cardiomyopathy are predictive of adverse outcomes. Circulation. 2001;103:369–74.
Andersson JPA, Linér MH, Rünow E, Schagatay EKA. Diving response and arterial oxygen saturation during apnea and exercise in breath-hold divers. J Appl Physiol. 2002;93:882–6.
Andersson JPA, Evaggelidis L. Arterial oxygen saturation and diving response during dynamic apneas in breath-hold divers. Scand J Med Sci Sports. 2009;19:87–91.
Foster GE, Sheel AW. The human diving response, its function, and its control. Scand J Med Sci Sports. 2005;15:3–12.
Bisogni V, Pengo MF, Maiolino G, Rossi GP. The sympathetic nervous system and catecholamines metabolism in obstructive sleep apnoea. J Thorac Dis. 2016;8:243–54.
Busch DR, Lynch JM, Winters ME, McCarthy AL, Newland JJ, Ko T, et al. Cerebral blood flow response to hypercapnia in children with obstructive sleep apnea syndrome. Sleep. 2016;39:209–16.
Alex R, Bhave G, Al-Abed MA, Bashaboyina A, Iyer S, Watenpaugh DE, et al. An investigation of simultaneous variations in cerebral blood flow velocity and arterial blood pressure during sleep apnea. Conf Proc Annu Int Conf IEEE Eng Med Biol Soc IEEE Eng Med Biol Soc Annu Conf. 2012;2012:5634–7.
Gilmartin GS, Tamisier R, Curley M, Weiss JW. Ventilatory, hemodynamic, sympathetic nervous system, and vascular reactivity changes after recurrent nocturnal sustained hypoxia in humans. Am J Physiol Heart Circ Physiol. 2008;295:H778–85.
Mohsenin V. Obstructive sleep apnea and hypertension: a critical review. Curr Hypertens Rep. 2014;16:482.
Kasai T, Bradley TD. Obstructive sleep apnea and heart failure: pathophysiologic and therapeutic implications. J Am Coll Cardiol. 2011;57:119–27.
Geovanini GR, Gowdak LHW, Pereira AC, Danzi-Soares NJ, LOC D, Poppi NT, et al. OSA and depression are common and independently associated with refractory angina in patients with coronary artery disease. Chest. 2014;146:73–80.
Nieto FJ, Young TB, Lind BK, Shahar E, Samet JM, Redline S, et al. Association of sleep-disordered breathing, sleep apnea, and hypertension in a large community-based study. Sleep heart health study. JAMA. 2000;283:1829–36.
Drager LF, Polotsky VY, O'Donnell CP, Cravo SL, Lorenzi-Filho G, Machado BH. Translational approaches to understanding metabolic dysfunction and cardiovascular consequences of obstructive sleep apnea. Am J Physiol Heart Circ Physiol. 2015;309:H1101–11.
Kolb JC, Ainslie PN, Ide K, Poulin MJ. Protocol to measure acute cerebrovascular and ventilatory responses to isocapnic hypoxia in humans. Respir Physiol Neurobiol. 2004;141:191–9.
Ivancev V, Bakovic D, Obad A, Breskovic T, Palada I, Joyner MJ, et al. Effects of indomethacin on cerebrovascular response to hypercapnea and hypocapnea in breath-hold diving and obstructive sleep apnea. Respir Physiol Neurobiol. 2009;166:152–8.
Shaikh ZF, Jaye J, Ward N, Malhotra A, de Villa M, Polkey MI, et al. Patent foramen ovale in severe obstructive sleep apnea: clinical features and effects of closure. Chest. 2013;143:56–63.
Rimoldi SF, Ott S, Rexhaj E, de Marchi SF, Allemann Y, Gugger M, et al. Patent foramen Ovale closure in obstructive sleep apnea improves blood pressure and cardiovascular FunctionNovelty and significance. Hypertension. 2015;66:1050–7.
Smart D, Mitchell S, Wilmshurst P, Turner M, Banham N. Joint position statement on persistent foramen ovale (PFO) and diving. South Pacific underwater medicine society (SPUMS) and the United Kingdom sports diving medical committee (UKSDMC). Diving Hyperb Med. 2015;45:129–31.
Lafère P, Balestra C, Caers D, Germonpré P. Patent Foramen Ovale (PFO), personality traits, and iterative decompression sickness. Retrospective analysis of 209 cases. Front Psychol. 2017;8:1328.
The authors thank all of the volunteers who participated in the study. We furthermore thank A. Carstensen and M. Schmidt for their technical support.
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Personal data or any data, which allow conclusions on clearly identified or identifiable individuals, are not available.
L. Eichhorn, J. Doerner, B. Zur and C. P. Naehle contributed equally to this work.
Department of Anaesthesiology and Intensive Care Medicine, University Hospital of Bonn, Bonn, Germany
L. Eichhorn
, F. Erdfelder
& A. Hoeft
Department of Radiology, University Hospital of Bonn, Bonn, Germany
J. Doerner
, J. A. Luetkens
, J. M. Lunkenheimer
, H. H. Schild
& C. P. Naehle
Department of Radiology, University Hospital of Cologne, Cologne, Germany
Institute for Medical Biometry, Informatics and Epidemiology (IMBIE), Bonn, Germany
R. C. Dolscheid-Pommerich
, B. Stoffel-Wagner
& B. Zur
Medical Biometry, Information Technology and Epidemiology, University of Bonn, Bonn, Germany
R. Fimmers
& J. Nadal
Search for L. Eichhorn in:
Search for J. Doerner in:
Search for J. A. Luetkens in:
Search for J. M. Lunkenheimer in:
Search for R. C. Dolscheid-Pommerich in:
Search for F. Erdfelder in:
Search for R. Fimmers in:
Search for J. Nadal in:
Search for B. Stoffel-Wagner in:
Search for H. H. Schild in:
Search for A. Hoeft in:
Search for B. Zur in:
Search for C. P. Naehle in:
LE, JD, CPN and BZ designed and planned the study and drafted the manuscript. JAL, JML, RCDP analysed datasets and gave technical support. JAL and RCDP drafted paragraphs of the manuscript. FE, RF, JN performed statistical analysis. BSW, HHS, AH helped to design the study and revised the manuscript. All authors read and approved the final manuscript.
Correspondence to L. Eichhorn.
This prospective single-centre study was registered under ClinicalTrials.gov (identifier: NCT02280226) and additionally approved by the institutional ethics committee review board of Bonn; Germany (373/13). Participants received an information sheet 14 days prior to the study. Informed consent was individually obtained from all participants included in the study. This study was not funded and all contributors participated on a voluntary basis.
Figure S1. Screenshots of monitored arrhythmia in different subjects (a-d) and in early recovery phase (e). (JPG 161 kb)
Eichhorn, L., Doerner, J., Luetkens, J.A. et al. Cardiovascular magnetic resonance assessment of acute cardiovascular effects of voluntary apnoea in elite divers. J Cardiovasc Magn Reson 20, 40 (2018) doi:10.1186/s12968-018-0455-x
Apnoea
hs-cT
|
CommonCrawl
|
Electro-magneto-static study of the nonlinear Schrödinger equation coupled with Bopp-Podolsky electrodynamics in the Proca setting
Scattering of radial data in the focusing NLS and generalized Hartree equations
November 2019, 39(11): 6669-6682. doi: 10.3934/dcds.2019290
Global large smooth solutions for 3-D Hall-magnetohydrodynamics
Huali Zhang ,
Changsha University of Science and Technology, School of Mathematics and Statistics, Changsha 410114, China
* Corresponding author: Huali Zhang
Received March 2019 Published August 2019
Fund Project: The first author is supported by Education Department of Hunan Province, general Program(grant No.17C0039), and Hunan Provincial Key Laboratory of Intelligent Processing of Big Data on Transportation, Changsha University of Science and Technology, Changsha; 410114, China
In this paper, the global smooth solution of Cauchy's problem of incompressible, resistive, viscous Hall-magnetohydrodynamics (Hall-MHD) is studied. By exploring the nonlinear structure of Hall-MHD equations, a class of large initial data is constructed, which can be arbitrarily large in $ H^3(\mathbb{R}^3) $. Our result may also be considered as the extension of work of Lei-Lin-Zhou [15] from the second-order semilinear equations to the second-order quasilinear equations, because the Hall term elevates the Hall-MHD system to the quasilinear level.
Keywords: Incompressible Hall-MHD equations, large data, global smooth solution.
Mathematics Subject Classification: Primary: 35A01, 35A02, 35A09; Secondary: 35E15, 35Q35.
Citation: Huali Zhang. Global large smooth solutions for 3-D Hall-magnetohydrodynamics. Discrete & Continuous Dynamical Systems - A, 2019, 39 (11) : 6669-6682. doi: 10.3934/dcds.2019290
M. Acheritogaray, P. Degond, A. Frouvelle and J.-G. Liu, Kinetic formulation and global existence for the Hall-magneto-hydrodynamics system, Kinet. Relat. Models, 4 (2011), 901-918. doi: 10.3934/krm.2011.4.901. Google Scholar
S. A. Balbus and C. Terquem, Linear analysis of the Hall effect in protostellar disks, The Astrophysical Journal, 552 (2001), 235-247. doi: 10.1086/320452. Google Scholar
D. Chae, P. Degond and J. G. Liu, Well-posedness for Hallmagnetohydrodynamics, Ann. I. H. Poincaré, 31 (2014), 555-565. doi: 10.1016/j.anihpc.2013.04.006. Google Scholar
D. Chae and J. Lee, On the blow-up criterion and small data global existence for the Hall- magneto-hydrodynamics, J. Differential Equations, 256 (2014), 3835-3858. doi: 10.1016/j.jde.2014.03.003. Google Scholar
D. Chae and M. Schonbek, On the temporal decay for the Hall-magnetohydrodynamic equations, J. Differential Equations, 255 (2013), 3971-3982. doi: 10.1016/j.jde.2013.07.059. Google Scholar
D. Chae, R. Wan and J. Wu, Local well-posedness for the Hall-MHD equations with fractional magnetic diffusion, J. Math. Fluid Mech. 17 (2015), 627–638. doi: 10.1007/s00021-015-0222-9. Google Scholar
D. Chae and S. Weng, Singularity formation for the incompressible Hall-MHD equations without resistivity, Ann. Inst. Henri Poincaré Anal. Non Linéaire, 33 (2016), 1009-1022. doi: 10.1016/j.anihpc.2015.03.002. Google Scholar
D. Chae and J. Wolf, On partial regularity for the 3D non-stationary Hall magnetohydrodynamics equations on the plane, SIAM J. Math. Anal., 48 (2016), 443-469. doi: 10.1137/15M1012037. Google Scholar
J. Y. Chemin and I. Gallagher, Well-posedness and stability results for the Navier-Stokes equa tions in R3, Ann. Inst. H. H. Poincaré Anal. Non Lineaire, 26 (2009), 599-624. doi: 10.1016/j.anihpc.2007.05.008. Google Scholar
P. Constantin and A. Majda, The Beltrami spectrum for incompressible fluid flows, Commun. Math. Phys., 115 (1988), 435-456. doi: 10.1007/BF01218019. Google Scholar
M. M. Dai, Local well-posedness for the Hall-MHD system in optimal Sobolev spaces, preprint, arXiv: 1803.09556. Google Scholar
[12] P. A. Davidson, An Introduction to Magnetohydrodynamics, Cambridge University Press, Cambridge, 2001. doi: 10.1017/CBO9780511626333. Google Scholar
T. G. Forbes, Magnetic reconnection in solar flares, Geophysical and Astrophysical Fluid Dynamics, 62 (1991), 15-36. doi: 10.1080/03091929108229123. Google Scholar
H. Homann and R. Grauer, Bifurcation analysis of magnetic reconnection in Hall-MHD systems, Physica D., 208 (2005), 59-72. doi: 10.1016/j.physd.2005.06.003. Google Scholar
Z. Lei, F. H. Lin and Y. Zhou, Structure of helicity and global solutions of incompressible Navier-Stokes equation, Arch. Ration. Mech. Anal., 218 (2015), 1417-1430. doi: 10.1007/s00205-015-0884-8. Google Scholar
M. J. Lighthill, Studies on magnetohydrodynamic waves and other anisotropic wave motions. Philos,, Trans. R. Soc. Lond., Ser., 252 (1960), 397-430. doi: 10.1098/rsta.1960.0010. Google Scholar
F. H. Lin and P. Zhang, Global small solutions to an MHD-type system: The three-dimensional case, Comm. Pure Appl. Math., 67 (2014), 531-580. doi: 10.1002/cpa.21506. Google Scholar
Y. R. Lin, H. L. Zhang and Y. Zhou, Global smooth solutions of MHD equations with large data, J. Differential Equations, 261 (2016), 102-112. doi: 10.1016/j.jde.2016.03.002. Google Scholar
P. D. Mininni, D. O. Gómez and S. M. Mahajan, Dynamo action in magnetohydrodynamics and Hall magnetohydrodynamics, The Astrophysics Journal, 587 (2003), 472-481. doi: 10.1086/368181. Google Scholar
X. X. Ren, J. H. Wu, Z. Y. Xiang and Z. F. Zhang, Global existence and decay of smooth solution for the 2-D MHD equations without magnetic diffusion, J. Funct. Anal., 267 (2014), 503-541. doi: 10.1016/j.jfa.2014.04.020. Google Scholar
M. Sermange and R. Temam, Some mathematical questions related to the MHD equations, Comm. Pure Appl. Math., 36 (1983), 635-664. doi: 10.1002/cpa.3160360506. Google Scholar
D. A. Shalybkov and V. A. Urpin, The Hall effect and the decay of magnetic fields, Astron. Astrophys., 321 (1997), 685-690. Google Scholar
[23] E. M. Stein, Singular Integrals and Differentialbility Properties of Functions, Princeton University Press, Princeton, 1970. Google Scholar
J. B. Taylor, Relaxation of toroidal plasma and generation of reverse magnetic fields, Phy. Rev. Letter, 33 (1974), 1138-1141. Google Scholar
M. Wardle, Star formation and the Hall effect, Magnetic Fields and Star Formation, (2004), 231–237. doi: 10.1007/978-94-017-0491-5_24. Google Scholar
K. Yamazaki and M. T. Moha, Well-posedness of Hall-magnetohydrodynamics system forced by Lévy noise, Stoch. PDE: Anal. Comp., (2018), 1–48. Google Scholar
Y. Zhou and Y. Zhu, A class of large solutions to the 3D incompressible MHD and Euler equations with damping, Acta Math. Sinica English Series, 34 (2018), 63-78. doi: 10.1007/s10114-016-6271-z. Google Scholar
Fei Chen, Yongsheng Li, Huan Xu. Global solution to the 3D nonhomogeneous incompressible MHD equations with some large initial data. Discrete & Continuous Dynamical Systems - A, 2016, 36 (6) : 2945-2967. doi: 10.3934/dcds.2016.36.2945
Ning Duan, Yasuhide Fukumoto, Xiaopeng Zhao. Asymptotic behavior of solutions to incompressible electron inertial Hall-MHD system in $ \mathbb{R}^3 $. Communications on Pure & Applied Analysis, 2019, 18 (6) : 3035-3057. doi: 10.3934/cpaa.2019136
Jishan Fan, Fucai Li, Gen Nakamura. Low Mach number limit of the full compressible Hall-MHD system. Communications on Pure & Applied Analysis, 2017, 16 (5) : 1731-1740. doi: 10.3934/cpaa.2017084
Zhong Tan, Huaqiao Wang, Yucong Wang. Time-splitting methods to solve the Hall-MHD systems with Lévy noises. Kinetic & Related Models, 2019, 12 (1) : 243-267. doi: 10.3934/krm.2019011
Qi S. Zhang. An example of large global smooth solution of 3 dimensional Navier-Stokes equations without pressure. Discrete & Continuous Dynamical Systems - A, 2013, 33 (11&12) : 5521-5523. doi: 10.3934/dcds.2013.33.5521
Fei Chen, Boling Guo, Xiaoping Zhai. Global solution to the 3-D inhomogeneous incompressible MHD system with discontinuous density. Kinetic & Related Models, 2019, 12 (1) : 37-58. doi: 10.3934/krm.2019002
Ming He, Jianwen Zhang. Global cylindrical solution to the compressible MHD equations in an exterior domain. Communications on Pure & Applied Analysis, 2009, 8 (6) : 1841-1865. doi: 10.3934/cpaa.2009.8.1841
Huajun Gong, Jinkai Li. Global existence of strong solutions to incompressible MHD. Communications on Pure & Applied Analysis, 2014, 13 (4) : 1553-1561. doi: 10.3934/cpaa.2014.13.1553
Xiaoping Zhai, Yongsheng Li, Wei Yan. Global well-posedness for the 3-D incompressible MHD equations in the critical Besov spaces. Communications on Pure & Applied Analysis, 2015, 14 (5) : 1865-1884. doi: 10.3934/cpaa.2015.14.1865
Hua Qiu. Regularity criteria of smooth solution to the incompressible viscoelastic flow. Communications on Pure & Applied Analysis, 2013, 12 (6) : 2873-2888. doi: 10.3934/cpaa.2013.12.2873
Hua Qiu, Shaomei Fang. A BKM's criterion of smooth solution to the incompressible viscoelastic flow. Communications on Pure & Applied Analysis, 2014, 13 (2) : 823-833. doi: 10.3934/cpaa.2014.13.823
Baoquan Yuan. Note on the blowup criterion of smooth solution to the incompressible viscoelastic flow. Discrete & Continuous Dynamical Systems - A, 2013, 33 (5) : 2211-2219. doi: 10.3934/dcds.2013.33.2211
Lihuai Du, Ting Zhang. Local and global strong solution to the stochastic 3-D incompressible anisotropic Navier-Stokes equations. Discrete & Continuous Dynamical Systems - A, 2018, 38 (9) : 4745-4765. doi: 10.3934/dcds.2018209
Jishan Fan, Shuxiang Huang, Fucai Li. Global strong solutions to the planar compressible magnetohydrodynamic equations with large initial data and vacuum. Kinetic & Related Models, 2017, 10 (4) : 1035-1053. doi: 10.3934/krm.2017041
Xiaoping Zhai, Zhaoyang Yin. Global solutions to the Chemotaxis-Navier-Stokes equations with some large initial data. Discrete & Continuous Dynamical Systems - A, 2017, 37 (5) : 2829-2859. doi: 10.3934/dcds.2017122
Yaobin Ou, Pan Shi. Global classical solutions to the free boundary problem of planar full magnetohydrodynamic equations with large initial data. Discrete & Continuous Dynamical Systems - B, 2017, 22 (2) : 537-567. doi: 10.3934/dcdsb.2017026
Bingkang Huang, Lan Zhang. A global existence of classical solutions to the two-dimensional Vlasov-Fokker-Planck and magnetohydrodynamics equations with large initial data. Kinetic & Related Models, 2019, 12 (2) : 357-396. doi: 10.3934/krm.2019016
Huali Zhang
|
CommonCrawl
|
Ionospheric disturbance caused by artificial plasma clouds under different release conditions
Xiaoli Zhu ORCID: orcid.org/0000-0001-9619-97011,
Yaogai Hu1,
Zhengyu Zhao1,
Binbin Ni1 &
Yuannong Zhang1
Earth, Planets and Space volume 72, Article number: 183 (2020) Cite this article
The generation and evolution of artificial plasma clouds is a complicated process that is strongly dependent on the background environment and release conditions. In this paper, based on a three-dimensional two-species fluid model, the evolution characteristics of artificial plasma clouds under various release conditions were analyzed numerically. In particular, the effect of ionospheric density gradient and ambient horizontal wind field was taken into account in our simulation. The results show that an asymmetric plasma cloud structure occurs in the vertical direction when a nonuniform ionosphere is assumed. The density, volume, and expansion velocity of the artificial plasma cloud vary with the release altitude, mass, and initial ionization rate. The initial release velocity can change the cloud's movement and overall distribution. With an initial velocity perpendicular to the magnetic field, an O+ density cavity and two bumps exist. When there is an initial velocity parallel to the magnetic field, the generated plasma cloud is bulb-shaped, and only one O+ density cavity and one density bump are created. Compared to the cesium case, barium clouds expand more rapidly. Moreover, Cs+ clouds have a higher density than Ba+ clouds, and the snowplow effect of Cs+ is also stronger.
As early as the 1960s, the disturbance of electron density in the ionosphere during the rocket launching process was first discovered. Due to its advantages of low ionization potential, easy gasification, and easy observation, barium (Ba) has become the most commonly used substance in space experiments (e.g., Foppl et al. 1967; Rosenberg 1971; Haerendel et al. 1967; Valenzuela et al. 1986; Huba et al. 1992). In recent years, lanthanide metals such as samarium have also been used in active space experiments because their ionization is independent of sunlight (Zhao et al. 2016; Caton et al. 2017). In summary, artificially created plasma clouds by the release of chemical substances in the ionosphere have important applications in studying related space physics problems, measuring wind fields and electromagnetic fields in the upper atmosphere, artificially modifying the ionosphere, and affecting the short-wave communication and satellite communication. (Pavlov et al. 1993; Oraevsky et al. 2002; Xie et al. 2015).
The main processes of the generation and evolution of artificial plasma clouds in the ionosphere include the expansion of neutral clouds, the photoionization process and the movement of charged particles bound by magnetic fields. The evolution characteristics of artificial plasma clouds have been studied for many years both experimentally and theoretically (e.g., Lloyd and Haerendel 1973; Morse and Destler 1973; Mitchell et al. 1985; Bernhardt et al. 1987; Zakharov 2002; Xie et al. 2014). In 1967, Haerendel et al. conducted a preliminary qualitative discussion on artificial plasma clouds through a simplified low-density perturbation model. Subsequently, some one-dimensional, idealized cloud models were used to study the diffusion and classical effect of artificial plasma clouds (e.g., Scholer 1970; Samir et al. 1983; Schunk and Szuszczewicz 1988, 1991). Mitchell et al. (1985) presented a two-dimensional, electrostatic model to simulate a plasma cloud injected transverse to the ambient geomagnetic field with high velocities. To comprehensively describe the expansion and three-dimensional motion of artificial plasma clouds, more detailed three-dimensional models have been established (e.g., Rozhansky et al. 1990; Drake et al. 1988; Zalesak et al. 1988, 1990; Gatsonis and Hastings 1991; Ma and Schunk 1991, 1993, 1994; Delamere et al. 2001; Xie et al. 2014), and the expansion characteristics of plasma clouds under the influence of background neutral wind, electromagnetic fields, collisions between particles and inertia have been studied.
Previous studies have indicated that artificial plasma cloud evolution is complicated and strongly depends on the background environment and release conditions. In 1990, a two-dimensional fluid model was used to study the motion of artificial plasma clouds under various geophysical and release conditions, such as the solar cycle, geomagnetic activity, release altitudes and injection velocities. (Ma and Schunk 1990). After that, Ma and Schunk (1991, 1993) explored the effects of variable neutral wind, cloud sizes, electron temperature, and release velocities on short-term (~ 10 s) plasma cloud expansion based on a three-dimensional model. Although significant progress has been made in understanding the effects of release conditions on the evolution of plasma clouds, most of those models assumed a uniform background ionosphere and concentrated on relatively short time scales. In this paper, the background density gradient, temperature gradient, and ambient horizontal wind field were taken into account in our simulation, and the long-term distribution of cloud expansion can be predicted through this model.
The photoionization of barium in the ionosphere relies on sunlight. As the element with the lowest ionization potential in the alkali metal group, cesium (Cs) can also be ionized through thermal ionization in addition to photoionization, breaking the restriction of light on experimental conditions. Therefore, cesium was used as the release substance in early active space experiments (Pressman et al. 1960; Holmgren et al. 1981; Eliason et al. 1988). Compared with studies of barium, simulation and experimental studies of cesium release are rarely reported and not sufficient. In general, it is of great significance for the understanding and exploration of the Earth's space environment to transform and utilize the space environment with the release of chemical substances. Carrying out the comparative simulation of the evolution characteristics of released chemicals under different release conditions, and exploring the influence of different released substances on ionospheric disturbance are significant in guiding the selection of the release substances and conditions in active space experiments.
In this paper, based on a three-dimensional two-species fluid model, the evolution characteristics of artificial plasma clouds under various release conditions were studied through quantitative numerical simulations. The effects of the release altitude (220 km and 300 km), species (Ba and Cs), mass (1 kg, 10 kg and 100 kg), initial ionization probability (0%, 20% and 80%) and release velocity (parallel and perpendicular to the magnetic field) were systematically analyzed.
Diffusion of charged particles
Due to the different spatial distributions between barium ions (Ba+) and background ions, the different ions are treated separately. In addition, oxygen ions (O+) are the dominant ion component of the ionospheric F region, so we considered a two-species model of Ba+ and O+. Ignoring the density change in background oxygen ions caused by chemical reactions and assuming that the generation of ion clouds is only from photoionization, the density of charged particles is controlled by the continuity equation:
$$\frac{\partial {n}_{p}}{\partial t}=-\nabla \cdot \left({n}_{p}{{\varvec{u}}}_{p}\right)+{P}_{p}-{L}_{p},$$
where the subscript p represents Ba+ or O+, \({n}_{p}\) and \({{\varvec{u}}}_{p}\) represent the number density and drift velocity, respectively. \(\nabla \bullet \left({n}_{p}{{\varvec{u}}}_{p}\right)\) is the convection term. \({P}_{p}\) and \({L}_{p}\) represent the generation term and loss term, respectively. For O+, the density is assumed to be constant, that is, the generation term and loss term are set to be zero. For Ba+, \({P}_{{Ba}^{+}}=\sigma {n}_{s}\), where \(\sigma =0.0357 {s}^{-1}\) is the photoionization generation rate of the ion cloud (Mitchell et al. 1985), and \({n}_{s}\) is the number density of the released neutral atoms. It is assumed that the photoionization generation rate is the net generation rate of Ba+, that is, the charge exchange reaction (\(Ba+{O}^{+}\to {Ba}^{+}+O\)) and recombination reaction (\({Ba}^{+}+{e}^{-}\to Ba+hv\)) are ignored, so \({L}_{{Ba}^{+}}=0\). According to Eq. (1), as long as the drift velocities of Ba+ and O+ and the number density of neutral barium clouds are calculated, the number density of plasma clouds varies with time and space can be obtained.
The velocity of charged particles is calculated by the momentum equation:
$${n}_{p}{m}_{p}\frac{d{{\varvec{u}}}_{p}}{dt}+\nabla {p}_{p}-{n}_{p}{q}_{p}\left({\varvec{E}}+{{\varvec{u}}}_{p}\times {\varvec{B}}\right)=\frac{\delta {{\varvec{M}}}_{p}}{\delta t},$$
where \(\frac{d}{dt}=\frac{\partial }{\partial t}+({{\varvec{u}}}_{p}\bullet \nabla )\); \({m}_{p},{ q}_{p}\) and \({p}_{p}\) represent the mass, charge and pressure of charged particles, respectively, E and B are the electric field and magnetic field, respectively; and \(\frac{\delta {M}_{p}}{\delta t}\) is the change in momentum. In an ideal gas, the pressure gradient term is \(\nabla {p}_{p}=\nabla k{T}_{p}{n}_{p}\), in which k is the Boltzmann constant and \({T}_{p}\) is the ion temperature. To solve Eq. (2), we replaced \(\frac{d{{\varvec{u}}}_{p}}{dt}\) with its zero-order item \(\frac{d{{\varvec{u}}}_{p0}}{dt}\) (Ma and Schunk 1991), in which \({{\varvec{u}}}_{p0}\) is the dominant portion of \({{\varvec{u}}}_{p}\). Therefore, the first term on the left side of Eq. (2) can be neglected. Then, the momentum equation becomes:
$$\frac{k{T}_{p}}{{n}_{p}{m}_{p}}\nabla {n}_{p}-\frac{{q}_{p}}{{m}_{p}}\left({\varvec{E}}+{{\varvec{u}}}_{p}\times {\varvec{B}}\right)=\frac{1}{{n}_{p}{m}_{p}}\frac{\delta {{\varvec{M}}}_{p}}{\delta t}.$$
The Cartesian coordinate system is established with the x axis in the eastern direction, y axis in the northern direction, and z axis in the vertical direction, and it is assumed that the direction of the magnetic field \(\overrightarrow{s}=\overrightarrow{z}/sinI\approx \overrightarrow{z}\) at high latitudes. Therefore, Eq. (3) can be written as the component form below:
$$\left\{\begin{array}{c}\frac{k{T}_{p}}{{m}_{p}{n}_{p}}\frac{\partial {n}_{p}}{\partial x}-\frac{{q}_{p}{E}_{{\varvec{x}}}}{{m}_{p}}-{\Omega }_{p}{u}_{py}=\frac{1}{{m}_{p}{n}_{p}}\frac{\delta {M}_{px}}{\delta t}\\ \frac{k{T}_{p}}{{m}_{p}{n}_{p}}\frac{\partial {n}_{p}}{\partial y}-\frac{{q}_{p}{E}_{y}}{{m}_{p}}+{\Omega }_{p}{u}_{px}=\frac{1}{{m}_{p}{n}_{p}}\frac{\delta {M}_{py}}{\delta t}\\ \frac{k{T}_{p}}{{m}_{p}{n}_{p}}\frac{\partial {n}_{p}}{\partial z}-\frac{{q}_{p}{E}_{z}}{{m}_{p}}=\frac{1}{{m}_{p}{n}_{p}}\frac{\delta {M}_{pz}}{\delta t}\end{array}\right.,$$
where \({\Omega }_{p}={q}_{p}B/{m}_{p}\) is the cyclotron frequency of Ba+ or O+, \({u}_{px}\), \({u}_{py}\) and \({u}_{pz}\) are the components of \({{\varvec{u}}}_{p}\) in three directions, and \({u}_{pz}\) is implicit in the last formula of Eq. (4).
The momentum changes of O+ or Ba+ are as follows:
$$\frac{\delta {M}_{{(O}^{+})}}{\delta t}={m}_{{(O}^{+})}{n}_{{(O}^{+})}{\sum }_{\alpha }{\nu }_{{(O}^{+})\alpha }\left({{\varvec{u}}}_{\alpha }-{{\varvec{u}}}_{{(O}^{+})}\right),$$
$$\frac{\delta {M}_{{(Ba}^{+})}}{\delta t}={m}_{{(Ba}^{+})}{n}_{{(Ba}^{+})}{\sum }_{\alpha }{\nu }_{{(Ba}^{+})\alpha }\left({{\varvec{u}}}_{\alpha }-{{\varvec{u}}}_{{(Ba}^{+})}\right)+{m}_{{(Ba}^{+})}{n}_{s}\sigma \left({{\varvec{u}}}_{s}-{{\varvec{u}}}_{{(Ba}^{+})}\right),$$
where \({\nu }_{{O}^{+}\alpha }\) and \({\nu }_{{Ba}^{+}\alpha }\) are the collision frequencies between O+/Ba+ and other particles \(\alpha\) (\(\alpha\) represents charged particles and neutral particles), and \({{\varvec{u}}}_{s}\) and \({n}_{s}\) are the velocity and number density of the neutral atoms, respectively. In addition to the change in momentum due to collision, momentum change due to the source term, \({ m}_{{(Ba}^{+})}{n}_{s}\sigma ({{\varvec{u}}}_{s}-{{\varvec{u}}}_{{(Ba}^{+})})\), should also be included for barium ions. Due to the binding effect of the geomagnetic field, the difference between charged particles in velocity perpendicular to B is usually small, so the collision terms associated with it can be ignored (Schunk and Szuszczewicz 1988), and the electron inertia is also ignored.
The electric field is calculated by \({\varvec{E}}={{\varvec{E}}}_{0}-\nabla \varphi\), where \({{\varvec{E}}}_{0}\) is the constant electric field of the background, \(\varphi\) is the disturbance of the electrostatic potential, and \(\nabla \varphi\) can be approximately calculated by the following formula (Ma and Schunk 1993):
$$\nabla \varphi =(k{T}_{e}/e)\nabla ln{(n}_{e}),$$
where \({T}_{e}\) is the electron temperature, \(e\) is the Napierian base, and the electron density \({n}_{e}\) is calculated by the quasi-neutral condition, \({n}_{e}={n}_{({Ba}^{+})}+{n}_{{(O}^{+})}\).
By substituting Eqs. (5, 6, 7) into Eq. (4), the drift velocities of charged particles in the three directions can be obtained.
Diffusion of neutral particles
After being released in the ionosphere, the photoionization and recombination reaction of neutral barium as well as the oxidation reactions will occur. The main reaction process is described as follows:
$$Ba+hv\stackrel{\sigma }{\to }{Ba}^{+}+{e}^{-}$$
$$2Ba+{O}_{2}\stackrel{{k}_{c}}{\to }2BaO$$
$${k}_{c}=0.9\times {10}^{-10} {cm}^{3}{s}^{-1},$$
where \({k}_{c}\) is the reaction coefficient.
Taking the photoionization and oxidation losses into account, the general equation satisfied by the neutral particles can be described as follows:
$$\frac{\partial {n}_{s}}{\partial t}+\nabla \cdot \left({{\varvec{u}}}_{s}{n}_{s}\right)=\nabla \cdot \left(D\nabla {n}_{s}\right)-{L}_{s},$$
where \({L}_{s}=\sigma {n}_{s}+{k}_{c}{n}_{{O}_{2}}{n}_{s}\) is the loss term, in which \(\sigma {n}_{s}\) represents photoionization loss and \({k}_{c}{n}_{{O}_{2}}{n}_{s}\) represents oxidation loss. \(D\) is the diffusion coefficient, which can be calculated by the following formula (Bleecker et al. 2004):
$$D=\frac{3}{16}\frac{{(2\pi kT/{\mu }_{sO})}^{1/2}}{{n}_{O}\pi {({r}_{s}+{r}_{O})}^{2}},$$
where \({n}_{O}\) is the number density of atomic oxygen, and \(T\) is the neutral temperature. \({\mu }_{sO}=\frac{{m}_{O}{m}_{s}}{{m}_{s}+{m}_{O}}\) is the reduced mass, and \({m}_{s}\), \({m}_{O}\), \({r}_{s}\) and \({r}_{O}\) are the mass and atomic radius of released neutral and oxygen atoms, respectively. The approximate solution of Eq. (8) has been derived (Hu et al. 2012):
$${n}_{s}\left(r,t\right)=\frac{{N}_{0}}{{\pi }^{3/2}(4Dt+{r}_{0}^{2})(4Dt+{\varepsilon }^{2}{r}_{0}^{2}{)}^{1/2}}\times \mathit{exp}\left(-\frac{(x-{x}_{0}-\int {u}_{x}dt{)}^{2}+(y-{y}_{0}-\int {u}_{y}dt{)}^{2}}{4Dt+{r}_{0}^{2}}-\frac{(z-{z}_{0}-\int {u}_{z}dt{)}^{2}}{4Dt+{\varepsilon }^{2}{r}_{0}^{2}}-\sigma t-{k}_{c}{n}_{{O}_{2}}t\right),$$
where \(({x}_{0},{y}_{0},{z}_{0})\) is the initial release center of the cloud, \({u}_{x}\),\({u}_{y}\) and \({u}_{z}\) are the velocity components of \({{\varvec{u}}}_{s}\) in three directions, \({r}_{0}\) is the initial characteristic radius, \({N}_{0}\) is the total number of released particles, and \(\varepsilon\) represents the shape factor.
In the ionospheric F region, the neutral barium cloud decelerates under the impact of collisions with the background particles, and its velocity changes with time can be approximately expressed as:
$${{\varvec{u}}}_{s}\left(t\right)=\left({{\varvec{v}}}_{0}-{{\varvec{u}}}_{n}\right){e}^{-{\nu }_{s}t},$$
where \({{\varvec{v}}}_{0}\) is the initial release velocity of the neutral cloud, \({{\varvec{u}}}_{n}\) is the background neutral wind, and \({\nu }_{s}\) is the damping coefficient of the decelerating motion of the cloud due to collisions, which can be obtained according to elastic collision theory (Zhang et al. 2019):
$${\nu }_{s}={n}_{O}{\left({r}_{s}+{r}_{O}\right)}^{2}\pi \sqrt{\frac{8kT}{\pi {\mu }_{sO}}}\frac{{m}_{O}}{{m}_{s}+{m}_{O}}.$$
By time integrating Eq. (11), the change in the position of the cloud center can be obtained:
$${r}_{c}\left(t\right)={{\varvec{u}}}_{n}t+{\int }_{0}^{t}\left({{\varvec{v}}}_{0}-{{\varvec{u}}}_{n}\right){e}^{-{\nu }_{s}t}dt.$$
The spatial distribution of particles in the released region can then be obtained by solving Eqs. (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13) numerically.
In this section, the evolution characteristics of artificial plasma clouds under various release conditions were investigated, including different release altitudes (220 km and 300 km), different release masses (1 kg, 10 kg and 100 kg), different initial ionization rates (0%, 20% and 80%), and different release velocities. The ionospheric disturbances of barium and cesium were also compared.
The ambient atmospheric density, ionospheric particle density, temperature, magnetic field intensity and other initial conditions can be obtained by the atmospheric model MSIS-E-90, International Reference Ionosphere model (IRI-2016), and International Geomagnetic Reference Field model (IGRF-13). An equivalent extrapolation boundary condition is used at all boundaries. The design flow of the numerical algorithm is shown in Fig. 1, and the simulation process can be summarized as follows:
The release parameters and background ionospheric parameters are set;
The distribution of neutrals is calculated according to the neutral diffusing model ("Diffusion of neutral particles" section);
The drift velocity is calculated according to the momentum equation (Eq. (2) in "Diffusion of charged particles" section);
The particle number density distribution can be obtained by solving the continuity equation (Eq. (1) in "Diffusion of charged particles" section);
Steps 2–4 are repeated until the time limit is reached.
Flowchart of numerical simulation
Effects of release altitude
The ambient temperature, magnetic field, and number density of neutral particles and charged particles in the ionosphere vary with altitude, so the collision frequency and diffusion coefficient vary with altitude. At low altitudes, the diffusion coefficient of barium is small due to the high concentration of atomic oxygen, molecular oxygen and other particles, and the chemical consumption of barium is high, resulting in a relatively small plasma cloud. However, if the chemical is released at a very high altitude, rapid diffusion leads to a sharp reduction in the concentration of the released substance, and the background ionospheric particles cannot be significantly affected, which is not conducive to observation. For this reason, release altitudes of 220 km and 300 km are selected in our simulation, where the diffusion velocity is moderate and the disturbing effect on the background ionosphere is obvious, which is more convenient for experimental observation.
The distribution of the density and temperature of the ambient particles, and the diffusion coefficient and damping coefficient of barium atoms, and the meridional and zonal winds obtained in Horizontal Wind Model 07 are shown in Fig. 2.
The density of background neutral particles (left), the diffusion coefficient and damping coefficient of barium (middle), and the ambient wind (right)
Figures 3a, 4a show the release results at 220 km and 300 km, respectively. The profile of barium ions (Figs. 3a, 4a), background oxygen ions (Figs. 3b, 4b) and electron number density (Figs. 3c, 4c) are shown in the subgraphs. Due to the binding effect of the geomagnetic field, the expansion of the plasma cloud across B is constrained. In the direction along the magnetic field, the movement of the barium ion cloud is not restricted, so the plasma cloud will be gradually tied to the magnetic field, and stretched into an elliptical structure along the direction of the magnetic field. The momentum of Ba+ and O+ are coupled together because of collision, and barium ions transfer the kinetic energy to oxygen ions, which pushes the oxygen ions to move along B, forming an oxygen ion density hole in the release center. On the other hand, the outflowing O+ slows down under the effects of the background thermal and pressure gradients, which creates two density bumps on both sides of the Ba+ cloud along B. This phenomenon is called the snowplow effect (Ma and Schunk 1991), which has been detected by a high-resolution incoherent scatter radar in the Spacelab 2 upper atmospheric modification experiment (Bernhardt et al. 1988).
The plasma cloud distribution in the x–z plane (at t = 60 s) after 10 kg Ba release at 220 km, with no initial ionization (a, b, c), and the distribution of electron density and velocity over height at different times (d, e). The maximum densities at t = 5 s, 30 s, 90 s and 150 s are 5.82 × 107 cm−3, 4.23 × 107 cm−3, 2.45 × 107 cm−3 and 1.87 × 107 cm−3, respectively (d)
Figures 3d, e, 4d, e show the number density and velocity of electrons as a function of altitude and time, respectively. Due to the high collision frequency at low altitudes, the expansion of neutral barium clouds is greatly restrained, and the plasma clouds expand more slowly, so the plasma clouds are mainly concentrated in a relatively small area. Furthermore, Figs. 3d, 4d show that the peak number density of electrons at the release center decreases more slowly at low altitudes. At a high altitude, with a lower collision frequency and higher diffusion coefficient, the expansion of neutral clouds is faster, and the plasma clouds stretch faster along the magnetic field, so the plasma clouds have a larger radius than those at low altitudes. As can be seen from Figs. 3e, 4e, when barium is released at 220 km and 300 km, the maximum diffusion velocities of electrons that can be achieved at t = 30 s are 466 m/s and 965 m/s, respectively.
When considering a nonuniform ionosphere, the diffusion coefficient and damping coefficient change with height, and the background density gradient also affects the ion motion. The motion of artificial ion clouds along the magnetic field and its snowplow effect on ambient oxygen ions are similar to previous simulation results in which the background ionosphere is assumed to be uniform (Gatsonis and Hastings 1991; Ma and Schunk 1991, 1993). The difference is that the asymmetric structure of the artificial ion cloud appears in the vertical direction, which can be ascribed to the asymmetry of the collision frequency and diffusion coefficient, leading to a plasma cloud with a longer top and shorter bottom. In addition, the snowplow effect of O+ is also asymmetric on both sides of the expansion cloud due to the influence of the background density gradient. Based on the results without considering the ambient wind field (not shown), the effect of the ambient wind field is not significant because the speed of the ambient neutral wind is approximately tens of meters per second, which is much smaller than the expansion velocity of the plasma cloud (on the order of kilometers per second). Qualitatively, the morphology and evolution characteristics of the artificial plasma cloud are in agreement with the observations in previous space experiments (Haerendel et al. 1967; Foppl et al. 1967; Bernhardt et al. 1987; Huba et al. 1992).
Effects of initial cloud density
In this section, we analyze the effects of different initial cloud densities on the evolution of artificial plasma clouds. Changes in release masses and initial ionization rates can both change the initial densities of the cloud.
For a fixed initial radius, different release masses mean different initial cloud densities. Qualitatively speaking, the results of ionospheric disturbances with different release amounts are similar. It can be seen from Fig. 5 that with a larger release mass, the number density of the plasma clouds will be higher, and the enhanced pressure gradient enhances the expansion kinetic energy of neutral clouds, leading to a stronger disturbance of background oxygen ions and electrons. Additionally, a large release mass causes a larger ionospheric disturbance area and a longer duration of disturbance.
The density of artificial plasma clouds at the release center varies with time under different release masses. The maximum densities of Ba+ clouds are 5.93 × 106 cm−3, 6.06 × 106 cm−3 and 6.84 × 106 cm−3 when 1 kg, 10 kg, and 100 kg of barium are released, respectively
In an actual release experiment, it often takes some time for the cloud to reach the initial state of our simulation (which is generally called the average collision time), so that a few barium atoms may have been ionized before the beginning of the simulation, leading to a difference between the initial number of barium atoms and the total number of barium atoms released, which is called the initial ionization rate. In addition, due to the different release techniques (thermal release, explosion release, etc.) used in the active release experiment, the ionization rates of neutral clouds at the beginning of release also vary greatly.
Figure 6 shows the simulation results with initial ionization rates of 0%, 20% and 80%. The plasma cloud consists of two parts: one part is the high-density part due to initial ionization, and this part will become longer and narrower over time, because it has little movement across the magnetic field except for the initial inertial motion (e.g., the thermal expansion after release from the canister and the velocity generated by the suborbital motion of the sounding rocket), which is soon be captured by magnetic field. Another part of the plasma cloud comes from the continual photoionization of expanding barium neutrals. At the initial stage after release, a steeper density gradient results in a faster expansion of the plasma cloud along B. As seen from Fig. 6, with the increase in the initial ionization rate, the distribution of barium ion clouds become increasingly concentrated, and the numerical dissipation brought by the numerical simulation process decreases, so the highest density of Ba+ increases, but the plasma density decreases faster. Figure 6d, e shows that the maximum Ba+ cloud densities are 7.77 × 108 cm−3, 2.02 × 108 cm−3 and 6.06 × 107 cm−3 with initial ionization rates of 0%, 20% and 80%, respectively, and the diameters of the plasma clouds with initial ionization rates of 0%, 20% and 80% are 43 km, 50 km and 57 km at t = 60 s, respectively. At the same time, the steeper density gradient makes the barium cloud stretch faster along the magnetic field, so the vertical diameter of the plasma cloud also increases rapidly, and the sheet-like structure of the plasma cloud along the magnetic field is more obvious. As time passes, the density difference of Ba+ at the release center caused by different initial ionization rates decreases.
The density distribution of Ba+ clouds in the x–z plane (at t = 60 s) after 10 kg Ba release at 220 km, with initial ionization rates (σ) of 0%, 20% and 80% (a, b, c), the maximum density of the Ba+ cloud changes with time (d), and the vertical diameter of the generated plasma cloud changes with time (e). Note that the location where the barium ion number density is 1/e of the peak density is defined as the plasma cloud boundary, and the vertical diameter of the ionosphere hole is determined accordingly
Effects of release velocity
We also considered the evolution characteristics of released clouds with different initial release velocities. The cloud evolution results with initial velocity perpendicular to B and initial velocity parallel to B were simulated, with a velocity of 2 km/s. It should be noted that, for the release with initial velocity perpendicular to magnetic field, we take the release point located at [− 20, 0, 0] km; for the release with initial velocity parallel to magnetic field, we take the release point located at [0, 0, − 20] km.
Figure 7 shows the evolution result of a cloud released with an initial velocity perpendicular to B. The expansion of the plasma cloud along B and its snowplow effect on O+ are very similar to those of a stationary release. However, since the volume of the Ba+ cloud is larger than that in the former case, the number density of the Ba+ cloud is lower. Due to the movement of neutral barium clouds, an ionic tail will be generated behind them from photoionization in the early stage, but the Ba+ cloud still becomes a sheet-like structure eventually because the motion perpendicular to B is constrained. The Ba+ cloud decelerated rapidly in response to the magnetic field, while the neutral barium cloud was not affected by the magnetic field; thus, the barium ion cloud was slowly separated from barium neutral clouds (not shown). In addition, there are still two O+ density enhancement regions and one O+ density depletion region in the background ionosphere, and it can be seen that a small number of oxygen ions are pushed to the front of the cloud due to momentum transfer, forming O+ density enhancement regions at both sides in front of the cloud.
The plasma cloud distribution in the x–z plane at t = 20 s (a, b, c) and 60 s (d, e, f) after 10 kg Ba release at 220 km, and the injection velocity us is 2 km/s perpendicular to B
As shown in Fig. 8, when the neutral barium cloud is released with a velocity along B, at the early stage, the snowplow effect of the ion cloud creates an O+ density hole on the back side of the expanding Ba+ cloud and an O+ density bump at the front. Compared with the cases without injection velocity and with a release velocity perpendicular to B, this case features a much greater O+ density enhancement in front of the plasma cloud, and no density enhancement appears behind the plasma cloud. While the initial velocity of the barium neutrals is parallel to the background magnetic field and although the movement of the barium ion cloud is not affected by the j × B force, with the high concentration of background particles, the motion of the ion cloud still slows down due to collisions. At t = 20 s, the plasma cloud has moved approximately 12 km. Moreover, because of the existence of Coulomb collision between charged particles, the damping coefficient of the ion cloud is higher than that of the neutral cloud, which eventually leads to the separation of the ion cloud and neutral cloud.
The plasma cloud distribution in the x–z plane at t = 20 s (a, b, c) and 60 s (d, e, f) after 10 kg Ba release at 220 km, and the injection velocity us is 2 km/s along B
Cesium release
Although the thermal ionization of cesium is prone to occur due to its low ionization potential, thermal ionization is insignificant compared to photoionization under conditions of strong sunlight, so the thermal ionization process of cesium is not discussed here. The comparison of the diffusion coefficient and damping coefficient of barium and cesium is shown in Fig. 9. The diffusion coefficient of Ba is slightly larger than that of Cs, but its damping coefficient is smaller than that of Cs.
Comparison of the diffusion coefficient (left) and damping coefficient (right) of Cs and Ba
Figure 10 shows the simulation results of Cs with the same number of molecules released. Except for the type of material released, other parameters are the same as in Fig. 3. Qualitatively speaking, the expansion characteristics of Cs+ and Ba+ and the disturbance effect on background O+ are similar. Since the diffusion coefficient of cesium is smaller, the barium cloud expands more rapidly and covers a wider area than the cesium cloud, but the ionization yield of cesium is higher than that of barium under the same release mass due to the higher photoionization rate of cesium. The cesium ion cloud is denser than the barium ion cloud, so the cesium ion cloud has a larger expansion speed, as seen from Figs. 3e, 10e. The maximum vertical velocities of electrons at t = 5 s are 289 m/s and 613 m/s, respectively. At t = 60 s, the peak number density of cesium ion clouds reaches 4.48 × 107 cm−3, which is nearly one and a half times that of barium ion clouds (3.03 × 107 cm−3) under the same conditions. In addition, the collision frequency of Cs+-O+ is greater than that of Ba+-O+, and the snowplow effect of Cs+ is stronger than that of Ba+, resulting in larger oxygen ion density holes and bumps.
The plasma cloud distribution in the x–z plane (at t = 60 s) after 10 kg Cs release at 220 km, with no initial ionization (a, b, c), and the distribution of electron density and velocity over height at different times (d, e). The maximum densities at t = 5 s, 30 s, 90 s and 150 s are 9.27 × 107 cm−3, 6.44 × 107 cm−3, 3.59 × 107 cm−3 and 2.72 × 107 cm−3, respectively (d)
In this study, the evolution characteristics of artificial plasma clouds under different release conditions were systematically studied based on a three-dimensional two-species model. The influences of different release altitudes, masses, initial ionization rates, release velocities and release substances on the evolution of the artificial plasma cloud were discussed. Based on the simulation results, the conclusions can be summarized below:
When a nonuniform ionosphere with altitude is assumed, the motion of artificial ion clouds along the magnetic field and their snowplow effect on background oxygen ions are similar to the results obtained when considering a uniform ionosphere. The difference is that an asymmetric structure of the artificial ion cloud appears in the vertical direction, which can be ascribed to the asymmetry in the collision frequency. In addition, the snowplow effect of oxygen ions is asymmetric on both sides of the expansion cloud due to the influence of the background density gradient.
At high altitudes, due to the low collision frequency between particles, the neutral clouds expand rapidly, and the artificial ion cloud has a large volume and is stretched rapidly along the magnetic field. The disturbance to the background oxygen ions is strong, but the disturbance recovers rapidly.
Essentially, different release masses and initial ionization rates can both change the initial density of the plasma cloud, and have a similar consequence on cloud evolution in the early stage. The difference is that the disturbance in the ionosphere will last longer with a larger release mass, while with higher initial ionization rates, the density gradient of the plasma cloud will be steeper, so that the plasma stretches faster along B, leading to a plasma cloud with a more obviously elongated structure.
The initial release velocity can change the motion and overall distribution of the cloud. For a cloud released with an initial velocity perpendicular to B, the cloud becomes stretched in the same direction as the initial velocity without moving, and a small amount of oxygen ions is pushed away in front of the cloud due to momentum transfer. When the initial release velocity is parallel to B, the O+ density enhancement in front of the cloud is much greater than that behind the cloud, and the plasma cloud moves in same direction. Barium releases with initial velocities both perpendicular and parallel to B have a strong disturbance on background particles, eventually resulting in the separation of the ion cloud and the neutral cloud.
Qualitatively, the evolution characteristics of Cs+ and Ba+ and their effects on background O+ are similar. Due to the large diffusion coefficient of barium, the expansion of barium clouds is more rapid, and the coverage area of Ba+ clouds is wider, but the density of Cs+ clouds is higher than that of Ba+ clouds when the same mass is released because of the large photoionization rate of cesium. Moreover, the snowplow effect of Cs+ is stronger than that of Ba+, and the oxygen ion density disturbance caused by cesium is stronger.
It is worth noting that the formation of striations, as observed in many experiments, does not appear in our simulation (Zalesak et al. 1988; Goldman et al. 1976; Kelley and Livingston 2003). The main reason is that we assumed that the density distribution of the neutral cloud is Gaussian and the overall density distribution is smooth, so that the E × B gradient instability did not occur, which is generally thought to be the main cause of striations. In addition, the lack of a varying background E field may be another reason for the absence of striations.
Simulation data can be provided upon request.
IRI:
International Reference Ionosphere
IGRF:
International Geomagnetic Reference Field
Bernhardt PA, Roussel-Dupre RA, Pongratz MB, Haerendel G, Valenzuela A, Gurnett DA et al (1987) Observations and theory of the AMPTE magnetotail barium releases. J Geophys Res 92(A6):5777. https://doi.org/10.1029/ja092ia06p05777
Bernhardt PA, Swartz WE, Kelly MC, Sulzer MP, Noble ST (1988) Spacelab 2 upper atmospheric modification experiment over Arecibo, 2, plasma dynamics. Astrophys Lett Comm 27(3):183
Bleecker KD, Bogaerts A, Gijbels R, Goedheer W (2004) Numerical investigation of particle formation mechanisms in silane discharges. Phys Rev E 69(5):056409. https://doi.org/10.1103/PhysRevE.69.056409
Caton RG, Pedersen TR, Groves KM, Hines J, Cannon PS, Jackson-Booth N et al (2017) Artificial ionospheric modification: the metal oxide space cloud experiment. Radio Sci 52(5):539–558. https://doi.org/10.1002/2016RS005988
Delamere PA, Swift DW, Stenbaek-Nielsen HC (2001) An explanation of the ion cloud morphology in the crres plasma injection experiments. J Geophys Res 106(A10):21289. https://doi.org/10.1029/2000ja000353
Drake JF, Mulbrandon M, Huba JD (1988) Three-dimensional equilibrium and stability of ionospheric plasma clouds. Phys Fluids 31(11):3412. https://doi.org/10.1063/1.866906
Eliason L, Lundin R, Holmgren G (1988) Energetic electron enhancements due to the TOR chemical releases. Adv Space Res 8(1):93. https://doi.org/10.1016/0273-1177(88)90347-X
Foppl H, Haerendel G, Haser L, Loidl J, Lütjens P, Lüst R et al (1967) Artificial strontium and barium clouds in the upper atmosphere. Planet Space Sci 15(2):357–372. https://doi.org/10.1016/0032-0633(67)90200-0
Gatsonis NA, Hastings DE (1991) A three-dimensional model and initial time numerical simulation for an artificial plasma cloud in the ionosphere. J Geophys Res Space Phys. https://doi.org/10.1029/90JA02249
Goldman SR, Baker L, Ossakow SL, Scannapieco AJ (1976) Striation formation associated with barium clouds in an inhomogeneous ionosphere. J Geophys Res 81(28):5097–5113. https://doi.org/10.1029/JA081i028p05097
Haerendel G, Lüst R, Rieger E (1967) Motion of artificial ion clouds in the up-per atmosphere. Planet Space Sci 15(1):1–18. https://doi.org/10.1016/0032-0633(67)90062-1
Holmgren G, Kintner PM, Kelley MC (1981) Artificial particle and wave stimulation in the trigger experiment. Adv Space Res 1(2):311. https://doi.org/10.1016/0273-1177(81)90305-7
Huba JD, Bernhardt PA, Lyon JG (1992) Preliminary study of the CRRES magnetospheric barium releases. J Geophys Res: Space Phys. https://doi.org/10.1029/91JA02144
Hu YG, Zhao ZY, Zhang YN (2012) Numerical simulation on the early dynamics of barium clouds released in the ionosphere. Acta Phys Sin 61(8):536–552. https://doi.org/10.1007/s11783-011-0280-z
Kelley MC, Livingston R (2003) Barium cloud striations revisited. J Geophys Res 108(A1):1044. https://doi.org/10.1029/2002JA009412
Lloyd KH, Haerendel G (1973) Numerical modeling of the drift and deformation of ionospheric plasma clouds and of their interaction with other layers of the ionosphere. J Geophys Res 78(31):7389–7415. https://doi.org/10.1029/ja078i031p07389
Ma TZ, Schunk RW (1991) Plasma cloud expansion in the ionosphere: three-dimensional simulation. J Geophys Res Space Phys. https://doi.org/10.1029/90JA02618
Ma TZ, Schunk RW (1993) Ionization and expansion of barium clouds in the ionosphere. J Geophys Res Space Phys. https://doi.org/10.1029/92JA01552
Ma TZ, Schunk RW (1994) Dynamics of three-dimensional plasma clouds with coupling to the background ionosphere. J Geophys Res 99(A4):6331. https://doi.org/10.1029/93ja02645
Ma TZ, Schunk RW (1990) A two-dimensional model of plasma expansion in the ionosphere. Planet Space Sci 38(6):723. https://doi.org/10.1016/0032-0633(90)90032-l
Mitchell HG, Fedder JA, Huba JD, Zalesak ST (1985) Transverse motion of high-speed barium clouds in the ionosphere. Space Phys J Geophys Res. https://doi.org/10.1029/93ja02645
Morse DL, Destler WW (1973) Laboratory simulation of artificial plasma clouds in the ionosphere. J Geophys Res 78(31):7417–7430. https://doi.org/10.1029/ja078i031p07417
Oraevsky VN, Ruzhin YY, Badin VI, Deminov MG (2002) Alfven wave generation by means of high orbital injection of barium cloud in magnetosphere. Adv Space Res 29(9):1327–1334. https://doi.org/10.1016/S0273-1177(02)00187-4
Pavlov VA, Pinegin AN, Smirnovskii IR (1993) Plasma-perturbation evolution in the F-region and estimation of ionospheric parameters from sounding data. Radiophys Quantum Electron 36(3–4):131–138. https://doi.org/10.1007/BF01037199
Pressman J, Marrmo FF, Aschenbrand LM (1960) Artificial electron clouds—VI: Low altitude study, release of cesium at 69, 82 and 91 km. Planet Space Sci 2(4):228
Rosenberg NW (1971) Observations of striation formation in a barium ion cloud. J Geophys Res 76(28):6856–6864. https://doi.org/10.1029/ja076i028p06856
Rozhansky VA, Veselova IY, Voskoboynikov SP (1990) Three-dimensional computer simulation of plasma cloud evolution in the ionosphere. Planet Space Sci 38(11):1375–1386. https://doi.org/10.1016/0032-0633(90)90113-5
Samir U, Wright KH, Stone NH (1983) The expansion of a plasma into a vacuum: Basic phenomena and processes and applications to space plasma physics. Rev Geophys 21(7):1631–1646. https://doi.org/10.1029/RG021i007p01631
Scholer M (1970) On the motion of artificial ion clouds in the magnetosphere. Planetary Space Sci 18(7):977–1004. https://doi.org/10.1016/0032-0633(70)90101-7
Schunk RW, Szuszczewicz EP (1988) Early-time plasma expansion characteristics of ionized clouds in the ionosphere. J Geophys Res Space Phys 93(A11):12901–12915. https://doi.org/10.1029/JA093iA11p12901
Schunk RW, Szuszczewicz EP (1991) Plasma expansion characteristics of ionized clouds in the ionosphere: macroscopic formulation. J Geophys Res Space Phys 96(A2):1337–1349. https://doi.org/10.1029/90JA02345
Valenzuela A, Haerendel G, Foppl H, Melzner F, Neuss H, Rieger E et al (1986) The AMPTE artificial comet experiments. Nature 320(6064):700–703. https://doi.org/10.1038/320700a0
Xie LH, Li L, Wang JD, Zhang YT (2014) Three-dimensional, two-species magneto-hydrodynamic studies of the early time behaviors of the combined release and radiation effects satellite g2 barium release. Phys Plasmas 21(4):042903. https://doi.org/10.1063/1.4871729
Xie LH, Li L, Wang JD, Tao R (2015) Determining wind field and electric field by a barium release experiment in the ionosphere. Sci China: Earth Sci 58(7):1210–1215. https://doi.org/10.1007/s11430-014-5051-9
Zakharov YP (2002) Laboratory simulation of artificial plasma releases in space. Adv Space Res 29(9):1335–1344. https://doi.org/10.1016/s0273-1177(02)00184-9
Zalesak ST, Drake JF, Huba JD (1988) Dynamics of three-dimensional ionospheric plasma clouds. Radio Sci 23(4):591–598. https://doi.org/10.1029/rs023i004p00591
Zalesak ST, Drake JF, Huba JD (1990) Three-dimensional simulation study of ionospheric plasma clouds. Geophys Res Lett 17(10):1597–1600. https://doi.org/10.1029/gl017i010p01597
Zhao HS, Feng J, Xu ZW, Wu J, Wu ZS, Xu B et al (2016) A temporal three-dimensional simulation of samarium release in the ionosphere. J Geophys Res: Space Phys 121(10):10508–10519. https://doi.org/10.1002/2016JA022425
Zhang X, Sun A, Tian L, Zhang GJ (2019) Three-dimensional fluid simulations of the Cs plasma release in the ionosphere. AIP Adv 9(1):015117. https://doi.org/10.1063/1.5079433
We are thankful to the reviewers that helped us to improve the quality of the paper, and a special thanks to all the persons that helped us to complete this work.
This research received no external funding.
Department of Electronic Information, Wuhan University, Wuchang District, Wuhan, 430000, China
Xiaoli Zhu, Yaogai Hu, Zhengyu Zhao, Binbin Ni & Yuannong Zhang
Xiaoli Zhu
Yaogai Hu
Zhengyu Zhao
Binbin Ni
Yuannong Zhang
XZ worked on theory development, simulations and manuscript writing. YH and BN helped with the analyses and in preparing the manuscript. ZZ and YZ worked on theory development, discussion and supervision of the study. All authors read and approved the final manuscript.
Correspondence to Yaogai Hu.
Zhu, X., Hu, Y., Zhao, Z. et al. Ionospheric disturbance caused by artificial plasma clouds under different release conditions. Earth Planets Space 72, 183 (2020). https://doi.org/10.1186/s40623-020-01317-9
Artificial plasma cloud
Chemical release
Electron density disturbance
Three-dimensional fluid model
3. Space science
|
CommonCrawl
|
M-Phi
A blog dedicated to mathematical philosophy.
How Might PA be Inconsistent?
The recent discussion of Edward Nelson's claim to have a found a proof that Peano arithmetic, $PA$, is inconsistent has been very interesting in many ways. The proof has turned out to contain a major flaw and Professor Nelson has very graciously withdrawn the claim. I was not able to follow the alleged proof because I don't know enough about Chaitin's Theorem, or about the properties of the system $Q_0^{\ast}$ being examined. But the episode set me thinking about what might lie behind an inconsistency in $PA$, despite the fact that we have many standard mathematical proofs that $PA$ is consistent (indeed, true).
For readers who are a bit rusty on the properties of first-order arithmetic, here is some background and attempted explanation. $PA$ is a basic mathematical theory which functions, for mathematical logicians, roughly as E. Coli functions for microbiologists. $PA$ is a theory expressed in a first-order formalized language $L_A$ (with identity) with
four primitive non-logical symbols: $0$, $S$, $+$, $\times$
Each of these is, in effect, functional. Along with the variables, $x_1, x_2, \dots$, one can define the terms of the language by saying that $t$ is a term if $t$ is $0$, or a variable, or is obtained by applying $S$, $+$ or $\times$ to previous terms. (This is a recursive definition.) Because $L_A$ has only one primitive predicate symbol, namely $=$, the atomic formulas of $L_A$ are equations, of the form
$t = u$,
where $t$ and $u$ are terms. For reasons of metalogical simplicity, it is usual to assume that the only logical primitives, beyond $=$, are $\neg$, $\rightarrow$ and $\forall$. Other truth-functional connectives can be defined (e.g., $\phi \vee \theta$ is short for $\neg \phi \rightarrow \theta$) and $\exists x \phi$ is short for $\neg \forall x \neg \phi$.
The $L_A$-formulas are defined, again recursively, by saying that $\phi$ is a formula of $L_A$ just if either $\phi$ is an atomic formula (i.e., an equation) or is obtained from previous formulas by applying connectives or a quantifier (i.e., $\forall x$).
The axioms of $PA$ are then specified as the following six individual axioms, and one axiom scheme.
Individual Axioms of $PA$:
$PA1$. $S(x) \neq 0$.
$PA2$. $S(x) = S(y) \rightarrow x = y$.
$PA3$. $x + 0 = x$.
$PA4$. $x + S(y) = S(x + y)$.
$PA5$. $x \times 0 = 0$.
$PA6$. $x \times S(y) = x \times y + x$
Induction Scheme: $IND_{\phi}$
$[\phi(0) \wedge \forall x(\phi(x) \rightarrow \phi(S(x)))] \rightarrow \forall x \phi(x)$
where $\phi(x)$ is a formula of $L_A$. The formula $\phi(x)$ may contain other free variables (called "parameters"). $\phi(0)$ means the result of substituting the term $0$ for all free occurrences of $x$ in $\phi$.
The axioms of $PA$ are then the above six and all instances of the Induction Scheme. In addition, there are underlying purely logical axioms, for example,
$\phi \rightarrow (\theta \rightarrow \phi)$
$\forall x \phi \rightarrow \phi(x/t)$
and, usually, one or two inference rules (e.g., Modus Ponens: from $\phi$ and $\phi \rightarrow \theta$, infer $\theta$). A derivation of $\phi$ in $PA$ is a finite sequence $(\theta_1, \dots, \theta_n)$ such that $\theta_n$ is $\phi$, and each, for $i \in \{1, n\}$, $\theta_i$ is either a logical axiom, or an axiom of $PA$, or is obtained by Modus Ponens on some previous $\theta_k$ and $\theta_p$. If there is a derivation of $\phi$ in $PA$, then $\phi$ is a theorem of $PA$, and we write:
$PA \vdash \phi$
Our specification of the axioms $PA$ is parasitic on our prior understanding of the set $N$ of natural numbers and their properties with respect to successor, addition and multiplication. So, one can define an $L$-interpretation $\mathbb{N}$ by specifying that $dom(\mathbb{N}) = N$, and that $0$ refers to $0$, $S$ refers to successor and that $+$ and $\times$ refer to plus and times.
This gives the interpreted language $(L_A, \mathbb{N})$. It is precisely this interpreted language that we have in mind when writing the axioms formally as opposed to informally. The constraint is that we formalize the informally expressed truths into truths of this language, and keep the interpretation fixed. One can verify that each axiom of $PA$ is true in $\mathbb{N}$. (The verification is in a sense circular. For example, the meta-theoretic assumption required to verify that $\forall x(S(x) \neq 0)$ is true is precisely the fact that $0$ is not a successor of any number. But this is no different from showing that the truth of "snow is white" follows from the assumption that snow is white. To show each axiom true, we assume that very axiom.) Since the inference rules are sound, it follows that each theorem of $PA$ is true. Consequently, since $0 = 1$ is not true, it follows that $0=1$ is not a theorem of $PA$. So, $PA$ is consistent. This reasoning can itself be formalized by adding a new truth predicate symbol to the language $L_A$ and setting out the required properties of truth. This leads to the area of axiomatic truth theories.
How might $PA$ be inconsistent? Here are four guesses:
1. Perhaps there is something wrong with the successor axioms (i.e., $PA1$ and $PA2$).
2. Perhaps there is something wrong with the defining clauses for $+$ and $\times$ (i.e., $PA3-PA6$: these are examples of definition by primitive recursion).
3. Perhaps there is something wrong with the induction scheme.
4. Some combination of the above.
The successor axioms force any model of them to be infinite. If $X$ is a set containing say $a$, and $f : X \rightarrow X$ is injective and such that $f(x) \neq a$ (for all $x$), then $X$ must be infinite. From the point of view of the theory itself, the infinity is only "potential" (the axioms $PA1$ and $PA2$ themselves does not assert the existence of an infinite object: rather, the semantic meta-theory asserts the existence of an infinite object---i.e., a model---satisfying them). So, presumably, for an inconsistency to arise with the successor axioms, there must be some problem with potential infinity. However, I really cannot see a genuine argument here, other than a dogmatic rejection of even potential infinity. (The objects in question are mathematical abstract objects, not concrete tokens.)
Perhaps defining arithmetic operations by primitive recursion is problematic, and potentially inconsistent. (There is a standard mathematical proof that it isn't: Dedekind's recursion theorem.) Perhaps because primitive recursions exhibit a kind of "circularity"? (Roughly, $f(n+1)$ is defined in terms of $f(n)$.) But, again, I cannot see a genuine problem, as the circularity is only apparent: a primitive recursive definition of a function $f$ allows one to compute $f(n)$, for any argument $n$, requiring only the values for smaller numbers.
What has been most often suggested is that the induction scheme might lead to an inconsistency somehow. Aside from general ultra-finitist concerns (which I think are based merely on type/token confusion), it's unclear what exactly might happen to generate an inconsistency. What seems a likely proof idea is that one could find a formula $\phi(x)$ and a term $t$ (no matter how specified) such that one can show:
Inductive Inconsistency of $PA$ with respect to $\phi(x)$ and $t$:
(i) $PA \vdash \phi(0)$
(ii) $PA \vdash \forall x(\phi(x) \rightarrow \phi(Sx)))$
(iii) $PA \vdash \neg \phi(t)$
To explain why we have not "observed" an inconsistency, it might be the case that the shortest derivation $d$ of $0=1$ remains incredibly huge. The reason is that the derivation would have to generate a canonical term $SSS....S0$ (say representing the number $n$), and prove $t = SSS...S0$, and apply Modus Ponens $n$ times to get $\phi(t)$, contradicting (iii). But the size of $n$ might be astronomical.
Perhaps an example is a theory $T$ with axioms:
1. $x + 0 = x$.
2. $x + S(y) = S(x + y)$.
3. $x \times 0 = 0$.
4. $x \times S(y) = x \times y + x$
5. $2^{0} = S0$.
6. $2^{S(x)} = 2 \times 2^{x}$
7. $F(0)$
8. $F(x) \rightarrow F(S(x))$
9. $\neg F(2^{2^{2^{2^{2}}}})$
(where $2$ is defined to mean $SS0$)
I think a similar example was first given by Rohit Parikh in the 1970s, though I haven't seen the original. So, this example might be quite different from Parikh's. If I've worked this out right, then $T$ is inconsistent, but the shortest derivation of an inconsistency has length at least $2^{65,536}$ symbols. (On the other hand, I may be wrong in specifying this: there may be a clever trick which allows one to get round the specification of a canonical numeral $SS...S0$ with $2^{65,536}$ occurrences of $S$.)
George Boolos, in a 1987 paper, "A Curious Inference", gave an example of a theory similar to the one above which is inconsistent, where the number of symbols in the shortest such derivation is an exponential stack of $65,536$ 2s. (Actually, Boolos gave a valid inference whose shortest derivation is of this length, but one can easily convert it to an example of such a theory by negating the conclusion formula.)
Published by Jeffrey Ketland at 6:48 pm
John Baez 3 October 2011 at 12:12
Nice post! There are a lot of technical aspects to Nelson's proposed proof, but I think all philosophers of mathematics would enjoy Chaitin's Theorem:
http://en.wikipedia.org/wiki/Kolmogorov_complexity#Chaitin.27s_incompleteness_theorem
I consider this result to be vastly more mind-blowing than Goedel's incompleteness theorems, even though it's closely related. Simply put, it goes like this.
Let the "Kolmogorov complexity" of a (finite) string of bits be the length of the shortest computer program that prints out that string. Fix a computer language ahead of time, so this is well-defined. Also fix a finitely axiomatizable system of mathematics, which we'll assume is consistent: for example, Peano Arithmetic, or Zermelo-Fraenkel set theory. Then:
There exists a constant L, such that for no string of bits has Kolmogorov complexity that provably exceeeds L.
This is true even though all but finitely many strings have Kolmogorov complexity exceeding L!
In short, while most things are complicated, there's a limit on how complicated we can prove things are.
The proof is nicely sketched in the Wikipedia article.
Jeffrey Ketland 3 October 2011 at 14:30
Hi John, many thanks.
Yes I know Chaitin's theorem! It used to be a recurrent theme on sci.logic. Panu Raatikainen has written a couple of things of the topic, maybe 10 years ago or so, concerning confusions that arise concerning its consequences.
In the first para, I think I mis-stated what I meant - what I meant to say is that I don't know the details of how Chaitin meshes with Nelson's proof strategy, in particular with his system $Q_0^{\ast}$ and its properties (e.g., he says a result from Shoenfield is formalizable in this theory - it proves its own quasitautological consistency - but I suppose the details are in his 1986 book Predicative Arithmetic).
Bhupinder Singh Anand 7 August 2012 at 04:47
Dear Jeff,
"It is precisely this interpreted language that we have in mind when writing the axioms formally as opposed to informally. The constraint is that we formalize the informally expressed truths into truths of this language, and keep the interpretation fixed. One can verify that each axiom of PA is true in N. (The verification is in a sense circular...)"
I would argue that it is this subjectively-imposed constraint on a 'fixed' interpretation that has perhaps prevented a satisfactory perspective concerning a proof of consistency for PA!
After all, if the intention is to formalise a subjectively conceived informal concept, then the aim of any sound interpretation of the formalisation cannot be to arrive back at the subjectively conceived informal concept.
Rather, the aim would reasonably be to arrive at what can be agreed upon as an objectively verifiable common core of such subjectively conceived concepts.
In other words, the aim would not be to try and justify a formal theory by the subjective interpretation that gave it birth, but to seek an objective interpretation that justifies the theory.
I have argued that this is possible in the paper 'Evidence-Based Interpretations of PA' that I presented at AISB/IACAP 2012, Birmingham last month.
http://alixcomsi.com/34_Evidence_Based_Interpretations_Of_PA_Final_AISB_IACAP_2012.pdf
Bhup
Jeffrey Ketland 7 August 2012 at 15:48
Many thanks, Bhup.
"I would argue that it is this subjectively-imposed constraint on a 'fixed' interpretation that has perhaps prevented a satisfactory perspective concerning a proof of consistency for PA!"
But there is a satisfactory - by the standards of ordinary mathematics - proof of the consistency of PA. This consists in the observation that its axioms are true and that modus ponens preserves truth.
If one insists on changing the standards of mathematics to Cartesian standards, then of course one might become sceptical. Similarly, if I change my current epistemic standards to Cartesian standards, I may disbelieve that I have hands. But such modification of standards is itself unscientific & irrational.
If you begin with scepticism you will never escape, more or less by definition. This means that, in order to do science and mathematics, one must adopt ordinary scientific standards, not infallibilist Cartesian standards. Ordinary scientific standards do not demand infallibilism or certainty. There is *always* a possibility of error.
So, adopting ordinary, fallible, scientific standards, I see the language of arithmetic as objectively interpreted. Its variables range over $\mathcal{N}$ and the primitive symbols denote the number zero, and the successor, addition and multiplication operations. Otherwise, I can't make sense of the claim that it has anything to do with number theory. I therefore don't see anything subjective about the natural numbers.
I think your claim must be that the language of arithmetic is *uninterpreted*. I disagree with that. Rather, it's is an interpreted language: $(\mathcal{L}, \mathbb{N})$.
Cheers, Jeff
Sceptic! Difficult to see myself in that light, though I do prefer 'convincing' to 'satisfactory' in mathematical argumentation (which is why I have difficulty conceiving of any interpretation that admits completed infinities without qualification).
I assumed that the 'fixed' interpretation you referred to in your post was the Standard interpretation of PA over the structure of the natural numbers (I presume that it is this structure that you refer to by $(\mathcal{L}, \mathbb{N})$).
If so, then---even if the above is the intended interpretation (under Tarski's classical definitions) that may initially motivate a human intelligence in the formalization that is defined as the first order Peano Arithmetic PA---I have essentially argued in my recent AISB/IACAP 2012 paper 'Evidence-Based Interpretations of PA' that (and suggested why) this interpretation (of PA) cannot be seen as sound; in the sense that the Axioms of PA are not seen to interpret as objectively true under the interpretation, and that the rules of inference are not seen as preserving such truth objectively under the interpretation.
However, I have further argued in the paper that there is an objectively definable, algorithmic (hence finitary), interpretation of PA (which could be seen as reflecting the way a machine intelligence would interpret PA over the numerals) which is sound; in the sense that the Axioms of PA can be shown to interpret as objectively true under the interpretation, and that the rules of inference can be shown to preserve such truth objectively under the interpretation.
In a follow-up paper, 'Some Consequences of Evidence-Based Interpretations of PA' (link below) that I am currently finalising, I argue further that the Standard interpretation of PA can be shown to be unsound (a conclusion that may perhaps lie implicitly at the heart of the argument that led Ed Nelson to conclude that PA is inconsistent), and suggest why this interpretation cannot validate the PA Induction Axiom schema.
http://alixcomsi.com/39_Consequences_Evidence_Based_Interpretations_Of_PA.pdf
This is the sense in which I remarked '… that it is this subjectively-imposed constraint on a 'fixed' interpretation that has perhaps prevented a satisfactory perspective concerning a proof of consistency for PA'.
Regards and thanks for your prompt response,
Hello again, Bhup,
What would you say the cardinality of $\mathcal{L}$ is?
Let me respond obliquely to your question.
The philosophical position underlying the argumentation of my posts is that we may need to explicitly recognize the limitations on the ability of highly expressive mathematical languages such as ZF to communicate effectively (unless we can offer a finitary interpretation of ZF); and the limitations on the ability of effectively communicating mathematical languages such as PA (which can be shown to have a finitary interpretation) to adequately express abstract concepts---such as those involving Cantor's first limit ordinal $\omega$.
For instance, in an unpublished critical examination of the proof of Goodstein's Theorem (link below), I argue that we cannot add a symbol corresponding formally to the concept of an `infinite' mathematical entity---such as is referred to symbolically in the literature by `$\aleph$' or `$\omega$'---to the first-order Peano Arithmetic PA without inviting inconsistency; and that no model of PA can admit a constant term `greater than' any natural number (which I would term as a completed infinity).
http://alixcomsi.com/10_Goodstein_case_against_1000.pdf
Hi Bhup,
But I'm just asking for the cardinality of something you believe in.
You believe that there is some "formal" thing, which you call "$PA$", in some language $\mathcal{L}$. So, what is the cardinality of $\mathcal{L}$?
Bhupinder Singh Anand 10 August 2012 at 07:03
You've lost me there!
Cardinal numbers are defined specifically as equivalence classes in a formal set theory such as ZF, which contains a sub-set of finite ordinals that, meta-mathematically, can be effectively put into a 1-1 correspondence with the natural numbers, and whose cardinal number is defined in ZF as $\aleph_{0}$.
PA is a specific, recursively defined, first order Peano Arithmetic, whose domain contains the numerals, which meta-mathematically can also be effectively put into a 1-1 correspondence with the natural numbers.
$\mathcal{L}$ seems to be the informal language of Peano Arithmetic in which the Standard interpretation of PA is defined, and whose domain is that of the of the natural numbers.
So what exactly do you mean by the cardinal number of $\mathca{L}$, or the cardinal number of something that I believe in?
And how exactly would a 'belief' be relevant, or even useful, here?
After all, I may choose to believe that Pegasus exists in a world of my conception in the same way as I choose to believe that 1+1=2 in the same world.
Prima facie, that should have less significance than my being able to convincingly demonstrate to others---with whom I share a common lingua franca---that their belief that Pegasus exists, or that 1+1=2 in their conception, would not 'conflict' with my beliefs or conceptions.
Jeffrey Ketland 10 August 2012 at 17:35
So you think that $\mathcal{L}_{PA}$ is an infinite set with cardinality $\aleph_0$?
Don't quite see how one could view $\mathcal{L}_{PA}$ as a well-defined ZF formula that is also a set in ZF; but OK I pass ... what's the catch?
Well, $\mathcal{L}$ is the set of all expressions/strings. This is infinite, right?
And there are a couple of distinguished subsets, e.g., $Tm(\mathcal{L})$ and $Form(\mathcal{L})$ (both infinite too); and a couple of operations, namely concatenation and substitution.
However I would describe them as denumerable (non ZF) sets in order to make clear that they are not formally defined ZF sets.
This is not mere pedantry.
In my proposed ICLA 2013 submission, I show that Goedel's famous 'undecidable' PA formula $[R(x)]$ is not algorithmically computable as always true, even though it is algorithmically verifiable as always true.
Now, $[R(x)]$ is defined as the arithmetical representation in PA of a primitive recursive number theoretic relation $Q(x)$ that, by definition, is algorithmically computable as always true.
This means that, for any natural number $n$ and numeral $[n]$:
If $Q(n)$ is true then $[R(n)]$ is PA-provable;
If $Q(n)$ is false then $[\neg R(n)]$ is PA-provable.
Thus $[R(x)]$ and $Q(x)$ are instantiationally 'equivalent' arithmetical relations, but the latter is algorithmically computable whilst the former is not!
This distinction is not possible in ZF, where we would represent the ranges of $[R(x)]$ and $Q(x)$ as completed infinities (i.e. as sets of ordinals in ZF) that define the same ZF relation by the ZF Axiom of Extension.
Hello Bhup,
So I think you agree that $\mathcal{L}$ is an infinite set.
I'm not sure what your formula $[R(x)]$ is meant to be and how it is related to the fixed point $G$ of $Prov_{PA}(x)$.
Yes, we agree that $\mathcal{L}$ is an infinite set.
The formula $[R(x)]$ is the one with Goedel-number $r$ defined by Goedel (in his seminal 1931 paper on formally undecidable arithmetical propositions), for which he first proved that the (fixed point) formula $(\forall x)R(x)$ with Goedel number $17 Gen r$ is not provable in the second-order Peano Arithmetic P (also specifically defined by him in the 1931 paper) if P is consistent; and then proved that the formula $\neg(\forall x)R(x)$ with Goedel number $Neg(17 Gen r)$ is also not provable in P if P is further assumed to be $\omega$-consistent.
Goedel constructed $[R(x)]$ such that, if $[R(x)]$ interprets as the arithmetical relation $R*(x)$ then, for any natural number $n$ and numeral $[n]$:
If $R*(n)$ is a true arithmetical sentence then $[R(n)]$ is not PA-provable.
So, $[R(x)]$ is, in modern notation, $\neg Proof_{PA}(x, \ulcorner G \urcorner)$, where $G$ is such that
$PA \vdash G \leftrightarrow \forall x \neg Proof_{PA}(x, \ulcorner G \urcorner)$?
Something odd here!
I would have thought that, in modern notation, $[R(x)]$, would be $\neg Proof_{PA}(x, \ulcorner R(x) \urcorner)$!
We would then have that $[G]$ is $[\forall x R(x)]$, and so $PA \vdash [G \leftrightarrow \forall x \neg Proof_{PA}(x, \ulcorner R(x) \urcorner)]$ would follow trivially.
Perhaps I need to go back to first principles and retrace Goedel's original argument.
Notation: Although I forgot to do so consistently in my previous post, I try to use square brackets to enclose expressions that denote formulas (uninterpreted strings) of a formal language $L$, so as to distinguish them from expressions that denote interpreted relations and/or functions that are not formulas of $L$.
"so $PA \vdash [G \leftrightarrow \forall x \neg Proof_{PA}(x,\ulcorner R(x)\urcorner)]$ would follow trivially."
This is not right. Rather, what I think you have in mind is that $R(x)$ is the formula $\neg Proof_{PA}(x, \ulcorner G \urcorner)$, and $G$ is the formula $\forall x R(x)$.
Then we have:
$PA \vdash [G \leftrightarrow \forall x \neg Proof_{PA}(x,\ulcorner G \urcorner)]$
It's unclear even what your version means, but if it means what I think it means, then its right-to-left direction, i.e., $\forall x \neg Proof_{PA}(x,\ulcorner R(\dot{x}) \urcorner) \rightarrow \forall x R(x)$ is not provable in $PA$. This is not what a fixed point means.
$G$ is a fixed point of the undedicable *provability* predicate $Prov_{PA}(x)$ (which is a $\Sigma_1)$ formula), not the proof predicate $Proof_{PA}(x, y)$, which is decidable.
"... so as to distinguish them from expressions that denote interpreted relations and/or functions that are not formulas of L."
I don't quite get this ... these are *expressions* of English? Why are there any expressions of any language except $L$ involved at all? Why not just write $R(x)$ to mean some formula of the object language $L$, with $x$ free?
You're right, the fixed point $G$ (with which I am not familiar except through its definition only) for the proof predicate $Prov_{PA}(x)$ cannot be $[\forall x R(x)]$.
I was wrongly conjecturing the relation of $G$ to Goedel's original argument in his 1931 paper.
This argument was based on the primitive recursive relations $prf_{PA}(x, y)$ ($x$ is the GN of a PA-proof of the PA-formula whose GN is $y$) and $q_{PA}(x, y)$ ($x$ is the GN of a PA-proof of the PA-formula---whose GN is $y$---when we replace the variable '$y$' in this formula with its GN, i.e. with the value $[y]$).
Returning to your original query, if $[Q(x, y)]$ expresses $\neg q_{PA}(x, y)$ in PA, and $p$ is the GN of $[\forall x Q(x, y)]$, then $[R(x)]$ is the PA-formula $[Q(x, p)]$ (to which Goedel refers by its GN '$r$'), and Goedel's original undecidable proposition in PA would be the formula $[\forall x R(x)]$ (whose GN Goedel denotes by '$17Genr$').
The reason I use square brackets is to be able to distinguish clearly between the natural number $y$ and the numeral $[y]$ in an argument such as the one above.
Ah, now I see what you mean when you write, "on the primitive recursive relations $prf_{PA}(x,y)$ ...". Normally, one should simply write "on the primitive recursive relations $prf_{PA}$ ...". Here $prf_{PA}$ is a relation, i.e., a subset of $\mathbb{N}^2$. Strictly speaking, "$prf_{PA}(x,y)$" is a sentence of the meta-language, containing variables "$x$" and "$y$". Similarly, it is better to say "the function $f$ ..." It would be a bit misleading to say "the function $f(x)$ ..." Normally, $f(x)$ is the value of the function $f$ on argument $x$ (and "$f(x)$" is a singular term denoting this value). It's important to distinguish a function $f$ from its value $f(x)$; or, analogously, a relation $R$ and the entity $Rxy$ (which is, technically, a truth value, given $x$ and $y$) or the meta-language sentence "$Rxy$".
"$q_{PA}(x,y)$ ($x$ is the GN of a PA-proof of the PA-formula---whose GN is $y$---when we replace the variable '$y$' in this formula with its GN, i.e. with the value $[y]$)."
I don't quite get this? What formula does "... in this formula ..." refer to? I think you intend to refer to some sort of diagonalization?
The usual definition is this. If $\phi(x)$ is a formula with $x$ free, then the diagonalization of $\phi(x)$ is $\phi(\ulcorner \phi \urcorner)$.
So, your relation $q_{PA}$ is the diagonal relation?
1. The modern notation that you prefer seems suited to argumentation in the language of a set theory such as ZF, which defines functions and relations as sets. By the axiom of extensionality, two functions or relations are identical if they define the same set. My AICB/IACAP 2012 paper---and the paper that I have just submitted to ICLA 2013---aim to highlight a curious limitation of such a language.
2. I find that the classical notation followed in Mendelson's `Introduction to Mathematical Logic' and Kleene's `Introduction to Metamathematics' is not similarly limited, since they treat function/relation symbols in a formal language (such as `$R$' in PA or `$prf_{PA}$' in Primitive Recursive Arithmetic) as part of the alphabet only for constructing the formulas that denote functions and relations in the language (such as `$R(x)$' in PA or `$prf_{PA}(x, y)$' in PRA).
3. The distinction is convenient when I argue that there are number theoretical relations / functions that are not computationally identical, even though their corresponding relations / functions over the ZF ordinals may define the same set.
4. More precisely---and expressing it for the moment in the notation that I have been using---if $[(Ax)R(x)]$ is Goedel's undecidable PA formula, then there is a primitive recursive number theoretic relation $q_{PA}(x)$ in PRA (clarified further in 7 below) such that, for any natural number $n$ and numeral $[n]$, we have the metamathematical equivalence:
The PRA expression denoted by $\neg q_{PA}(n)$ evaluates as true in $N$ iff the PA formula denoted by $[R(n)]$ interprets as true in $N$ under any sound interpretation of PA.
5. However, I show that whereas there is an algorithm that will give evidence to show that any member of the denumerable sequence of PRA expressions denoted by $\{\neg q_{PA}(1), \neg q_{PA}(2), \ldots \}$ evaluates as true in $N$, there is no algorithm that will give evidence to show that any member of the denumerable sequence of PA formulas denoted by $\{[R(1)], [R(2)], \ldots \}$ interprets as true in $N$ under a sound interpretation of PA.
6. In the terminology of my paper, whilst the PRA relation $\neg q_{PA}(x)$ is algorithmically computable as always true in $N$, the (metamathematically) instantiationally equivalent PA relation $[R(x)]$ is algorithmically verifiable, but not algorithmically computable, as always true in $N$ under a sound interpretation of PA.
7. As to your final query, I think Wikipedia refers to the argument involved in this case (i.e. Goedel's argument) as `indirect self-reference'. Perhaps I should have expressed the metamathematical interpretation of the primitive recursive relation $q_{PA}(x,y)$ unequivocally by writing:
`$q_{PA}(x,y)$ ($x$ is the GN of a PA-proof of the PA-formula $[\phi]$---whose GN is $y$---when we replace the variable `$y$' in the formula $[\phi]$ (whose GN is $y$) with the numeral $[y]$ that denotes the GN $y$ in PA.'
I am not sure if there is any `diagonalisation' involved in the above in the sense of your remarks, since $[\phi]$ is not necessarily a formula in a single variable. The $[\phi]$ considered in Goedel's argument is actually a formula $[\phi (x, y)]$ with two variables.
Thus, in his 1931 paper (as translated in `The Undecidable' edited by Martin Davis) Goedel's original definition of the primitive recursive relation `$\neg q_{PA}(x,y)$' is expressed as:
$\neg xB_{\kappa}[Sb(y \scriptsize \begin{array}{c} 19 \\ Z(y) \end{array})]$
fixaolissues 20 January 2021 at 10:25
Hi, lovely post i would like to share this because its very helpful for me keep it up & please don't stop posting thanks for sharing such kind of nice information with us.
Anyone can have a CenturyLink Email login problems and account and then can have access to its multiple services. But, what to do in the case where you are facing trouble in logging in to your CenturyLink Email Account or you don't know how to resolve those problems? Don't Worry…!!!Get CenturyLink customer support via the toll-free number at any time from any corner of the world.
Bhupinder Singh Anand 20 January 2021 at 20:05
Thanks. The above arguments are now developed, and expressed formally, in my forthcoming book [An20] (link below), where I seek to highlight the necessity of distinguishing between what is believed to be true, what can be evidenced as true, and what ought not to be believed as true.
Bhupinder Singh Anand
[email protected]
[An20] Bhupinder Singh Anand: The Significance of Evidence-based Reasoning in Mathematics, Mathematics Education, Philosophy, and the Natural Sciences.
https://www.dropbox.com/s/gd6ffwf9wssak86/16_Anand_Dogmas_Submission_Update_3.pdf?dl=0
(Current update of book; 7.4Mb, 702p as of now; under final revision/editing/indexing; scheduled for release mid-2021)
fixaolissues 8 March 2021 at 09:56
Your post is written in a perfect way. I really like your knowledge and your thoughts. I feel happy reading your post. Thank you for sharing that wonderful information.
Xfinity is the world-wide most popular & more secure server for email services. It becomes Xfinity Error Code 554 when the Xfinity server does not accept your message and bounce back to the sender of the email. The main reason for this error is the spam filtration of the Xfinity server or the AI of your domain is not trusted by the server for security reasons. You can dial our toll-free number that especially mentioned for customer support regarding your issues. We always provide best support related to your problems.
Water FIlters UAE 3 October 2021 at 12:59
A very awesome blog post. We are really grateful for your blog post. You will find a lot of approaches after visiting your post.
Hi! We are water filter supplier Great points made up above!
And Home Water Filter Reverse Osmosis thanks…
I think this is one of the most important information for me. And i am glad reading your article. But should remark on few general things…
Yakshita 17 October 2021 at 19:53
Thank you for posting such a great article. Keep it up mate.
EEDS Login
E Shram Card Online Registration 2021
Kajal Agerwaal 2 December 2021 at 11:17
It's really nice and meaningful. It's really cool blog. Thanks for sharing and please update some more information here. Students need to gain relevant ideas about the Top Courses after 12th. The changing time has provided various career segments through which they can see their future. However, a substantial decision can be taken by evaluating one's own likings towards the career field. Students can select the courses after completing their 12th as per the subject they had taken in the 12th itself from a career point of view.
ellendecruz 23 December 2021 at 17:07
I saw your information. It's really very helpful information. Thanks for sharing with us.
Coin Master Free Spins is one of the thrilling and entertaining games. You can play this game easily and get coin master free spins and links. If you don't know how to get a 1k Free Spin Coin Master easily? For more information, you can visit our website.
Tree Proof Generator
The (in)consistency of PA and consensus in mathematics
Credences in vague propositions: supervaluationist semantics and Dempster-Shafer belief functions
The Quine-Putnam Indispensability Argument
Logical Consequence 2
A List of Achievements of Analytic Metaphysics
Top Questions - MathOverflow
Are there any central simple algebras admitting a standard basis?
Rapid Variable B Subdwarf Stars
The n-Category Café
Optimal Transport and Enriched Categories IV: Examples of Kan-type Centres
The inverse theorem for the U^3 Gowers uniformity norm on arbitrary finite abelian groups: Fourier-analytic and ergodic approaches
Sprachlogik
The Threefold Root of the How-Question About Mathematical Knowledge
Leicester mathematics under threat again
Honest Toil
This blog is retired, new blogging at wescholars.org
Choice & Inference
CFP: Scientific Explanations, Competing and Conjunctive
Entia et Nomina
Assistant professorship in formal philosophy (UGdansk)
Value and News in Evidential Decision Theory
Philosophy in Progress
The Access Problem for Holes
Logic and Rational Interaction
2nd World Congress on Logic and Religion
It's Only A Theory
CFP: HOPOS 2016 (Minneapolis)
Richard Zach blogs
Open Logic Project
logic.rforge.com
Welcome to logic.rforge!
The Hidden Abacus
Aerenchyma
Campbell Brown
Catrin Campbell-Moore
Chris Menzel
David Corfield
F.A. Muler
Florian Steinberger
Hannes Leitgeb
Jason Konek
Jeffrey Ketland
Lara Buchak
Mike Titelbaum
Nicholas Shackel
NorbertG
Rafal Urbaniak
Richard Pettigrew
RoyTCook
Sam Roberts
Sara L. Uckelman
Vincenzo Crupi
Wes Holliday
mszn
|
CommonCrawl
|
Practice Exam 1 Solutions
Stony Brook Physics phy131studiof15:lectures
~~SLIDESHOW~~ ====== Practice Exam 1 Solutions ====== ===== Question 1 Solution ===== {{phy141f13mid2fig1.png}} As halloween entertainment a 4kg pumpkin is shot. The pumpkin breaks in to 3 pieces of equal mass. The bullet is lodged in one of the pieces and this piece continues in the original direction of the bullet. Another piece flies straight up and the third piece moves angle $\theta$ below the horizontal. The velocity of the piece that flies straight up is 1ms$^{-1}$ and the velocity of the third piece moving at an angle $\theta$ below the horizontal is 3ms$^{-1}$. A. (5 points) The bullet weighs 15g and has an initial velocity of 400 ms$^{-1}$. What was the initial momentum and kinetic energy of the bullet? $\vec{p_{i}}=0.015\times400=6\mathrm{\,kg\,m\,s^{-1}}$ right $KE=\frac{1}{2}mv^{2}=\frac{1}{2}\times0.015\times400^{2}=1200\mathrm{\,J}$ B. (5 points) How high does the piece of pumpkin that flew straight up go above it's original position before it turns around and starts coming back down? $v^{2}=v_{0}^{2}-2\times9.8\times(x-x_{0})$ $2\times9.8\times h=1$ $h=\frac{1}{19.6}=0.051\mathrm{\,m}=5.1\mathrm{\,cm}$ C. (5 points) What is the value of the angle $\theta$? From conservation of momentum in the y direction $\frac{4}{3}\times3\times \sin\theta=\frac{4}{3}\times 1$ $\sin\theta=\frac{1}{3}$ $\theta=19.47^{o}$ D. (5 points) What is the velocity of the piece of the pumpkin with the bullet in it after the collision? From conservation of momentum in the x direction $(\frac{4}{3}+0.015)v_{f}-\frac{4}{3}\times3\times\cos19.47^{o}=6$ $v_{f}=\frac{6+\frac{4}{3}\times3\times\cos19.47^{o}}{\frac{4}{3}+0.015}=7.25\mathrm{\,m\,s^{-1}}$ E. (5 points) How much kinetic energy was lost in this collision? $KE_{final}=\frac{1}{2}\times\frac{4}{3}\times1^{2}+\frac{1}{2}\times\frac{4}{3}\times3^{2}+\frac{1}{2}\times(\frac{4}{3}+0.015)\times7.25^{2}=42\mathrm{\,J}$ $KE_{lost}=1158\mathrm{\,J}$ ===== Question 2 Solution ===== {{phy141f13mid2fig2.png}} In an overly elaborate halloween trick a large fake bat (mass 2kg) is to be raised using a rope which runs over a pulley and is wound around a wheel which can then be turned with a handle. The wheel has mass 4kg and has a radius of 20cm, while the pulley has a mass of 0.5kg and a radius of 10cm. The handle has a length of 30cm and is attached to the center of the wheel. In this problem all turning objects can be considered to have frictionless bearings and the rope has no mass. Both the pulley and the wheel can be considered to be solid disks. You may neglect the moment of inertia of the handle. A. (5 points) Initially the system is held at rest by a force exerted on the handle. What is the magnitude of this force? Torques on big wheel must balance, so $F\times0.3=2\times9.8\times0.2$ $F=\frac{2\times9.8\times0.2}{0.3}=13.1\mathrm{\,N}$ In order to have the bat rise up with an acceleration of 5ms$^{-2}$, a perpendicular force is exerted on the handle. B. (5 points) What is the tension in the rope attached to the bat while it is being raised up with an acceleration of 5ms$^{-2}$? Consider Newton's second law applied to the bat $T_{1}-m_{B}g=m_{B}5\mathrm{\,ms^{-2}}$ $T_{1}=2\times(9.8+5)=29.6\mathrm{\,N}$ C. (5 points) What is the tension in the rope on the other side of the pulley while the bat is being raised up with an acceleration of 5ms$^{-2}$? Consider the sum of the torques around the pulley $T_{2}r_{p}-T_{1}r_{p}=\frac{1}{2}m_{p}r_{p}^{2}\alpha$ $T_{2}-T_{1}=\frac{1}{2}m_{p}a$ $T_{2}=\frac{1}{2}m_{p}a+T_{1}$ $T_{2}=30.85\mathrm{\,N}$ D. (5 points) In order to have the bat rise up with an acceleration of 5ms$^{-2}$, how much force must be applied to the handle (you may assume that the force is always directed perpendicular to the handle)? Consider the sum of the torques around the wheel $F\times0.3-30.85\times0.2=\frac{1}{2}\times4\times0.2^2\times\frac{5}{0.2}$ $F=27.23\mathrm{N}$ E. (5 points) What is the speed of the bat when it has been raised by 1.5m? $v^{2}=2a(x-x_{0})$ $v^{2}=2\times5\times1.5$ $v=3.87\mathrm{\,m\,s^{-1}}$ F. (10 points) How much work does the person turning the wheel have to do to raise the bat by 1.5m? (Don't forget to include kinetic energy, the bat is moving with the velocity you found in part C when it has been raised by 1.5m) $W=\Delta KE+\delta PE=\frac{1}{2}\times2\times3.87^{2}+\frac{1}{2}\times0.5\times0.5\times0.1^{2}\times(\frac{3.87}{0.1})^2+\frac{1}{2}\times0.5\times4\times0.2^{2}\times(\frac{3.87}{0.2})^2+2\times9.81\times1.5=61.25\mathrm{\,J}$ ===== Question 3 Solution ===== {{phy141f13mid2fig3.png}} A skeleton is suspended by 3 ropes as shown. You can consider each arm to be a rigid uniform rod of mass 0.8kg and length 70cm. The rest of the skeleton weighs 4kg and it's center of mass is 0.6m directly below the rope which is attached to skull. A. (10 points) What is the tension in one of the ropes attached to the end of an arm. (Hint: Consider the forces and torques on the arm exerted by the rope and the shoulder. From balance of torques on arm calculated around shoulder $T\sin40^{o}=0.8\sin40^{o}\times g\times 0.35$ $T\frac{0.35\times0.6\times0.8\times g}{0.7}=0.4g=3.92\mathrm{\,N}$ B. (10 points) What are the horizontal and vertical components of the force exerted by the shoulder on the arm? Specify the directions of these forces (i.e.. up, down, left, right) . Horizontal: O N Vertical, shoulder force needs to balance 0.8g down and 0.4 g up, so the force should be 0.4g, or 3.92N up. C. (10 points) What is the tension in the rope attached to the skull of the skeleton? Tension should be equal and opposite to weight of main part of body and reaction force of the arm on the shoulder joint $T=4\times9.8+2\times3.92=47.04\mathrm{\,N}$ D. (10 points) If one of the ropes attached to an arm is cut and the arm swings down, how fast is the tip of the hand moving when the arm reaches the vertical position. You may consider the shoulder to remain in in it's original position. (Hint: Use the principle of conservation of mechanical energy) Need the change in the height of the center of mass of the arm $\Delta h=0.35\times\cos40^{0}-0.35=-0.0818\mathrm{\,m}$ $\Delta KE=-\Delta PE$ $\frac{1}{2}I\omega^{2}=-mg\Delta h$ $\frac{1}{2}\frac{1}{3}ml^{2}(\frac{v}{l})^2=-mg\Delta h$ $v=\sqrt{-6g\Delta h}$ $v=2.19\mathrm{\,m\,s^{-1}}$
phy131studiof15/lectures/m2p1sol.txt · Last modified: 2015/10/27 14:25 by mdawber
|
CommonCrawl
|
Anti-proliferative therapy for HIV cure: a compound interest approach
A majority of HIV persistence during antiretroviral therapy is due to infected cell proliferation
Daniel B. Reeves, Elizabeth R. Duke, … Joshua T. Schiffer
Determinants of HIV-1 reservoir size and long-term dynamics during suppressive ART
Nadine Bachmann, Chantal von Siebenthal, … the Swiss HIV Cohort Study
Recommendations for measuring HIV reservoir size in cure-directed clinical trials
Mohamed Abdel-Mohsen, Douglas Richman, … The BEAT-HIV Delaney Collaboratory to Cure HIV-1 infection
Functional cure of HIV: the scale of the challenge
Miles P. Davenport, David S. Khoury, … Stephen J. Kent
A quantitative approach for measuring the reservoir of latent HIV-1 proviruses
Katherine M. Bruner, Zheng Wang, … Robert F. Siliciano
The within-host fitness of HIV-1 increases with age in ART-naïve HIV-1 subtype C infected children
Pradeep Nagaraja, Bindu P. Gopalan, … Anita Shet
Longitudinal HIV sequencing reveals reservoir expression leading to decay which is obscured by clonal expansion
Marilia Rita Pinzone, D. Jake VanBelzen, … Una O'Doherty
Expression of CD20 after viral reactivation renders HIV-reservoir cells susceptible to Rituximab
Carla Serra-Peinado, Judith Grau-Expósito, … Maria J. Buzon
HIV-1 diversity considerations in the application of the Intact Proviral DNA Assay (IPDA)
Natalie N. Kinloch, Yanqin Ren, … R. Brad Jones
Daniel B. Reeves1 na1,
Elizabeth R. Duke1,2 na1,
Sean M. Hughes ORCID: orcid.org/0000-0002-9409-94053,
Martin Prlic1,4,
Florian Hladik1,3 na2 &
Joshua T. Schiffer1,2,5 na2
HIV infections
In the era of antiretroviral therapy (ART), HIV-1 infection is no longer tantamount to early death. Yet the benefits of treatment are available only to those who can access, afford, and tolerate taking daily pills. True cure is challenged by HIV latency, the ability of chromosomally integrated virus to persist within memory CD4+ T cells in a non-replicative state and activate when ART is discontinued. Using a mathematical model of HIV dynamics, we demonstrate that treatment strategies offering modest but continual enhancement of reservoir clearance rates result in faster cure than abrupt, one-time reductions in reservoir size. We frame this concept in terms of compounding interest: small changes in interest rate drastically improve returns over time. On ART, latent cell proliferation rates are orders of magnitude larger than activation and new infection rates. Contingent on subtypes of cells that may make up the reservoir and their respective proliferation rates, our model predicts that coupling clinically available, anti-proliferative therapies with ART could result in functional cure within 2–10 years rather than several decades on ART alone.
The most significant accomplishment in HIV medicine is the suppression of viral replication and prevention of AIDS with antiretroviral therapy (ART). However, HIV cure remains elusive due to viral latency, the ability of integrated virus to persist for decades within CD4+ T cells in a latent state. When ART is discontinued, latent cells soon activate, and virus rebounds1, 2. HIV cure strategies aim to eradicate the latent reservoir of infected cells3 but have been unsuccessful except in one notable example4. In addition, substantial technological and financial hurdles preclude the widespread use of many developing cure strategies. The anti-proliferative therapies we propose here are used widely, permitting broad and immediate availability following a proof of efficacy study.
Several recent studies link cellular proliferation (both antigen-driven expansion and homeostatic proliferation) with persistence of the HIV reservoir on long-term ART (>1 year)5,6,7,8,9,10,11,12,13. Using a mathematical model, we demonstrate that continuous, modest reductions in latent cell proliferation rates would deplete the latent reservoir more rapidly than comparable increases in HIV activation as occurs with latency reversing agents. Further, we find that more rapid reservoir elimination on anti-proliferative therapy occurs with lower pre-treatment reservoir size and higher proportions of rapidly proliferating effector and central memory CD4+ T cells in the reservoir.
Based on analogies to finance, we call this strategy "compound interest cure". We demonstrate the promise of the compound interest approach by identifying reservoir reduction commensurate with predictions from our model in HIV-infected patients treated with mycophenolate mofetil (MMF) in past studies. We confirm the anti-proliferative effect of MMF on naïve and memory CD4+ T cell subsets via in vitro experiments.
ART decouples latent pool dynamics from ongoing infection
Our model is visualized in Fig. 1 and detailed in the Methods. If ART is perfectly effective, all susceptible cells are protected from new infection, even when cells activate from latency. Thus, the dynamics of the latent cells can be considered separately, decoupled from the dynamics of the other cell types, and the only mechanisms changing the latent cell pool size are cell proliferation, death, and activation (bottom panel, Fig. 1).
Schematics of models for HIV dynamics on and off ART. The top panel shows all possible transitions in the model (equation (1)). The bottom shaded panel shows the available transitions for the decoupled dynamic equations when ART suppresses the virus. Model parameters are given in Table 1. HIV virus V infects susceptible cells S at rate β reduced by ART of efficacy ε to β ε . The probability of latency given infection is τ. The rate of activation from latently infected cells (L) to actively infected cells (A) is ξ. Cellular proliferation and death are determined by rates α and δ for each compartment. The mechanisms of action of anti-proliferative and latency reversal therapies are to decrease α L and increase ξ, respectively.
However, perfectly effective ART is not strictly necessary to consider the latent pool separately. As previously described14, 15, we define ART "critical efficacy" ε c as the ART efficacy above which there is no set-point viral load, i.e. virus decreases rapidly with time (see Methods). Above the critical efficacy, viral production from activation could cause some new cell infection, but because the probability of latency (τ) is so low, new infection does not affect reservoir size or dynamics meaningfully. Using parameters from Table 1, we find ε c ~ 85%. Because true ART efficacy is generally greater than this efficacy16, we predict little de novo infection in ART-suppressed patients, consistent with the lack of viral evolution following years of ART without re-seeding of the latent reservoir8, 10, 11, 13, 17.
Table 1 Parameters used in the HIV latency model.
Sustained mild effects on clearance rate deplete the reservoir more rapidly than large, one-time reservoir reductions
The HIV cure strategy most extensively tested in humans is "shock-and-kill" therapy: latency reversing agents activate HIV in latent cells to replicate and express HIV proteins, allowing immune clearance while ART prevents further infection3. Other strategies in development include therapeutic vaccines18, viral delivery of DNA cleavage enzymes19, and transplantation of modified HIV-resistant cells20 informed by the "Berlin patient"4. Some of these therapies manifest as one-time reductions in the number of latent cells. We simulate such instantaneous decreases using equation (4) and cure thresholds described in Methods. Briefly, using ART interruption data, Hill et al. and Pinkevych et al. estimated the number of latently infected cells that would result in ART-free suppression of viremia for one year (Hill 1-yr and Pinkevych 1-yr) versus 30 years, Hill cure (Hc), in 50% of HIV-infected patients21, 22. With the reservoir clearance rate θ L constant and a 100-fold reduction in reservoir size L 0, the Pinkevych 1-yr threshold is immediately satisfied, but the Hill 1-yr and Pinkevych cure still require 15 years of ART. Hill cure requires a 1,000-fold reduction and more than 10 subsequent years of ART (Fig. 2a).
Simulated comparisons of latent reservoir eradication strategies on standard antiretroviral (ART) treatment. Treatment thresholds (discussed in Methods) are shown as solid black lines both in the plots and color bar, which is consistent between panels. (a) One-time therapeutic reductions of the latent pool (L 0). (b) Continuous therapeutic increases in the clearance rate (θ L ). Relatively small decreases in the clearance rate θ L produce markedly faster times to cure than much larger decreases in the initial reservoir size. (c–e) Latency reversal agent (LRA) and anti-proliferative (AP) therapies are given continuously for durations of weeks with potencies given in fold increase in activation rate (ε LRA) and fold decrease in proliferation rate (ε AP), respectively. The color bar is consistent between panels, and thresholds of cure are shown as solid black lines both on plots and on the color bar. (c) Latency reversing agent therapy (LRA) administered alone requires years and potencies above 100 to achieve the cure thresholds. (d) Anti-proliferative therapies (AP) administered alone lead to cure thresholds in 1–2 years provided potency is greater than 2–3. (e) LRA and AP therapies are administered concurrently, and the reduction in the latent pool is measured at 70 weeks. Because the proliferation rate is naturally greater than the activation rate, increasing the AP potency has a much stronger effect than increasing the LRA potency.
Continuous-time interventions are more promising. Relatively small changes in θ L in equation (4) lead to significant changes in the time to cure (Fig. 2b). On ART alone, estimated cure occurs at roughly 70 years1. However, just a 3-fold increase in clearance rate achieves Hill cure in fewer than 20 years. A 10-fold sustained increase requires only five years for Hill cure.
Further, when continuous-time therapies are given, outcomes improve more by extending duration than by equivalent increases in potency (Fig. 2c,d demonstrate this given the substantial asymmetry of the contours over their y = x axes). Analogous to the so-called "miracle of compound interest," increasing the clearance rate for an extended duration produces profound latency reduction.
Smaller reductions in proliferation rate achieve more rapid reservoir depletion than comparable relative increases in activation rate
Latency reversing therapy can be modeled with equation (3) if treatment is assumed to be a continuous-time multiplication of activation. Simulations at various potencies and therapy durations indicate both Hill and Pinkevych cure thresholds require more than a 100-fold multiplication of ξ sustained for two or three years, respectively (Fig. 2c).
The latent cell proliferation rate is considerably larger than the activation rate (\({\alpha }_{L}\gg \xi \), Table 1). Thus, anti-proliferative therapies would clear the reservoir faster than equivalently potent latency reversing strategies. When the reservoir of CD4+ T cells harboring replication-competent HIV is assumed to consist only of central memory cells (Tcm), a 10-fold reduction in α cm leads to Pinkevych 1-yr, Hill 1-yr, Pinkevych cure, and Hill cure in 0.8, 1.6, 1.6, and 1.8 years, respectively (Fig. 2d).
The improvement in cure time (when compared to an equivalent 10-fold increase in net reservoir clearance rate θ L ) is possible because decreasing the proliferation rate means the net clearance rate approaches the latent cell death rate δ L . In fact, potency is relatively unimportant beyond reducing the proliferation rate by a factor of ten because the underlying death rate δ L is the bound on clearance rate. The relative impact of anti-proliferative therapy is greater than that of latency reversing therapy when the two therapies are given concurrently for 70 weeks (Fig. 2e).
Heterogeneity in reservoir cell types may necessitate prolonged anti-proliferative therapy
Recent studies indicate that the reservoir is heterogeneous, consisting of CD4+ central memory (Tcm), naïve (Tn), effector memory (Tem), and stem cell-like memory (Tscm) T cells. Further, reservoir cell composition differs dramatically among patients6, 12, 23. This heterogeneity suggests the potential for variable responses to anti-proliferative agents. Proliferation rates of Tcm (once per 66 days) exceed Tn (once every 500 days) but lag behind Tem proliferation rates (once every 21 days, Table 1). In our model Tscm are assumed to proliferate at the same frequency as Tn based on similar properties. We simulate possible reservoir profiles with different percentages of Tn, Tcm, and Tem in Fig. 3a–c. At least 7 years of treatment is needed for Pinkevych functional cure (Hill 1-yr) if slowly proliferating cells (Tn and/or Tscm) comprise more than 20% of the reservoir. In contrast, an increased proportion of Tem has no clinically meaningful impact on time to cure. Slowly proliferating cells are predicted to comprise the entirety of the reservoir within two years of 10-fold anti-proliferative treatment regardless of initial percentage of Tn or Tscm (Fig. 3d,e).
Simulated comparisons of anti-proliferative therapies on standard antiretroviral therapy (ART) assuming variable reservoir composition. Proliferation and death rates in Table 1. The potency of the therapy is ε AP = 10 (i.e., each cell type i has proliferation rate equal to α i /10 with \(i\in [{\rm{em}},{\rm{cm}},{\rm{n}}]\)). Plausible initial compositions of the reservoir (L i (0)) are taken from experimental measurements6, 12, 23. It is assumed that the HIV activation rate ξ is equivalent across all reservoir subsets. (a–c) Plots of times to therapeutic landmarks on long-term ART and anti-proliferative therapy with heterogeneous reservoir compositions consisting of effector memory (Tem), central memory (Tcm), and naïve plus stem cell-like memory (Tn + Tscm) CD4+ T cells. Tem and Tn + Tscm percentages are shown with the remaining cells representing Tcm. Times to one-year remission and functional cure are extremely sensitive to percentage of Tn + Tscm but not percentage of Tem. (d,e) Continuous 10-fold therapeutic decreases in all proliferation rates (α i ) result in Hill 1-yr in (d) 3.5 years assuming Tn + Tscm = 1% and (e) 6 years assuming Tn + Tscm = 10%. The reservoir is predicted to become Tn + Tscm dominant within 2 years under both assumptions, providing an indicator to gauge the success of anti-proliferative therapy in potential experiments.
The uncertainty in the reservoir composition tempers the results in Fig. 2. On the other hand, our model assumes that the HIV activation rate ξ is equivalent across all CD4+ T cell reservoir subsets. It is biologically plausible, though unproven, that latent cell proliferation and activation are linked processes and that therefore HIV rarely or never activates from resting Tn or Tscm. Under this assumption, functional cure might occur once Tem and Tcm reservoirs have been reduced to the Hill cure level, i.e. approximately 1.5 years in Fig. 3d,e.
Initial reservoir size, anti-proliferative potency, and reservoir cell subtypes predict time to cure
Using literature-derived ranges for the parameters of interest, we completed a global sensitivity analysis to examine which factors might impact time to cure in a heterogeneous patient pool developed by Latin Hypercube sampling of a broad parameter space24 (Fig. 4). We correlate variables with time to cure on ART/anti-proliferative combination therapy. Varying the probability of latency given infection (τ) does not change time to cure. Similarly, varying the basic reproductive number on ART (\({R}_{0}^{ART}\)), a measure of ART efficacy, defined as the number of new infected cells generated by one infected cell during ART, does not change time to cure. On the other hand, as the pre-treatment size of the latent pool L 0 increases, the necessary time to cure also increases. Increasing anti-proliferative therapy potency ε AP decreases cure time. Increasing percentages of naïve T cells L n (0)/L 0 in the latent reservoir delay the time to cure while a faster latent decay rate θ L hastens cure. Finally, we simulated the possibility of a diminishing impact of anti-proliferative therapy over time in Fig. 5. The simulation shows that when potency decreases by less than 5% per month, cure thresholds are still achieved within 10 years of ART and anti-proliferative treatment. The fastest waning of potency (20% per month) results in return to the natural clearance rate within the first 2 years of therapy prompting longer times to cure.
Global sensitivity analysis. We use the ranges of parameters from Supplementary Table S4. (a) 1,000 simulations drawn from Latin Hypercube sample parameter sets where \({R}_{0}^{ART} < 1\) are shown to demonstrate the variability of latent pool dynamics with respect to all combinations of parameter ranges. (b) The time until each cure threshold, Pinkevych 1-yr (P1) and Hill cure (Hc), are calculated as the time when the latent reservoir contains fewer than 20,000 and 200 cells respectively. In some cases cures are achieved within months. In others, cure requires many years. (c) Pearson correlation coefficients indicate the correlations between each variable and time to cure. L 0 is the initial number of latent cells. L n (0)/L 0 is the initial fraction of naïve cells in the latent pool. τ is the probability of latency given infection. \({R}_{0}^{ART}\) is the basic reproductive number on ART. ε ART is the percent decrease in viral infectivity in the presence of ART. θ L is the decay rate of latent cells. ε AP is the fold reduction in proliferation rate.
Waning anti-proliferative potency over-time modulates cure. Latent reservoir dynamics on combined ART and anti-proliferative therapy simulated for waning potency of anti-proliferative therapy over time. The latent reservoir size is shown with horizontal black lines corresponding to the cure threshholds used throughout the paper. Cure thresholds are achieved within 10 years if potency decreases by less than 5% per month considering 1% naïve T cells (L n (0)/L 0 = 0.01) and initial anti-proliferative potency ε AP = 5.
Model output is congruent with available clinical data
Chapuis et al. treated eight ART-suppressed, HIV-infected patients with 24 weeks of mycophenolate mofetil, a licensed anti-proliferative agent. As a marker of anti-proliferative effect, the percentages of Ki67+ CD4+ T cells were measured before and after MMF treatment (2 × 500 mg daily) and were found to have decreased on average 2.2-fold. Incorporating that reduction in latent cell proliferation rate ε AP = 2.2 over 24 weeks of treatment, we estimate a 10- to 40-fold reduction in the latent reservoir (see Fig. 2d). Chapuis et al. found a 10- to 100-fold reduction in infectious units per million (IUPM) by quantitative viral outgrowth assay in five patients, comparable to our estimate25. These reductions far exceed natural reservoir clearance rates and are consistent with a therapeutic effect26.
García et al. assessed the effect of MMF (2 × 250 mg daily) on HIV in the context of ART treatment interruption27. Seventeen HIV-infected patients received ART for a year and then were randomized into a control group that remained on ART only and an experimental group that also received MMF for 17 weeks. ART was interrupted in both groups and viral rebound assessed. MMF inhibited CD4+ T cell proliferation (as measured by an in vitro assay) in six of nine MMF recipients (responders). The time to rebound was 1–4 weeks in the control group and 6–12 weeks in the MMF-responder group. Using results from Pinkevych et al., a median time to rebound of seven weeks (see Fig. 5b of ref. 22) corresponds to a 7-fold decrease in the latent reservoir. Using results from Hill et al., the same median time to detection of seven weeks (see Fig. 4 of ref. 21) corresponds to a 50-fold reduction in the latent reservoir. These calculations are congruent with our model's estimate that 17 weeks of MMF treatment at potency ε AP = 2.2 leads to a 10-fold reduction in the reservoir.
MMF decreases proliferation in CEM cells, CD4+ T cells from HIV positive and negative donors, and all CD4+ T cell subsets
To explain the heterogeneous impact of MMF treatment (three of six in Chapuis et al. did not demonstrate a meaningful reservoir clearance; three of nine patients in García et al. had a weak anti-proliferative response to MMF and no delay in HIV rebound upon ART cessation), we conducted an in vitro study of MMF pharmacodynamics. We titrated the capacity of mycophenolic acid to inhibit spontaneous proliferation of cells from a human T lymphoblastoid cell line (CEM cells)28 and identified a steep Hill slope of −3.7 (Fig. 6a). A Hill slope with absolute value greater than one indicates cooperative binding at the site of drug action and implies a sharp transition from negligible to complete therapeutic effect at a specific drug concentration. These results explain how patients with inadequate MMF dosage could have a limited anti-proliferative effect.
MMF pharmacodynamics. Pure mycophenolic acid (MPA) was added to CEM cells at varying concentrations and proliferation of CEM cells was measured to determine a dose-response curve and Hill slope. CD4+ T cells from stored peripheral blood mononuclear cell samples from 10 participants (4 HIV-infected, 6 HIV-uninfected) were stimulated to proliferate. CD4+ T cells from 3 HIV-negative subjects were sorted into effector memory (EM), central memory (CM), and naïve subsets. Pure MPA was added to these cells at varying concentrations in order to determine IC50s for MPA. (a) Dose-response curve with percentages of CEM cells proliferating at varying doses of MPA. The Hill slope is −3.7. (b) 4 samples from HIV-positive participants and 6 samples from HIV-negative participants had similar IC50s. (c) IC50s were similar among CD4+ effector memory (EM), central memory (CM), and naïve T cell subsets.
We tested the capacity of mycophenolic acid (MPA), the active metabolite of MMF, to inhibit CD4+ T cell proliferation in CD4+ T cells from four HIV-positive and six HIV-negative participants and found similar IC50s (Fig. 6b). Further, CD4+ T cells from three HIV-negative participants were sorted into central-memory, effector-memory, and naïve subsets. Similar proliferation inhibition was observed in all three cell subsets (Fig. 6c). These results suggest a potential for MMF to deplete the HIV reservoir.
We developed a mathematical model of HIV dynamics to study various cure strategies21, 29. We demonstrate that minor reductions in CD4+ T cell proliferation rates would exhibit powerful reductions in the latent reservoir when therapy duration is extended over time. We call this proposed strategy "compound interest cure" due to the correspondence with financial modeling.
Our results are relevant because the HIV cure strategy most rigorously being tested in humans—latency reversal therapy ("shock-and-kill")—may not capitalize on the advantages of a compound interest approach. Promising latency reversing agents are typically dosed over short time-frames due to concerns about toxicity. T cell activation does not always lead to induction of HIV replication providing another potential limitation of latency reversing therapy30. Furthermore, even if these agents exert a large relative impact on the activation rate of memory CD4+ T cells, we predict the reduction in the reservoir may be insignificant given that the natural activation rate is orders of magnitude lower than proliferation and death rates. Latency reversal agents are also being considered in conjunction with other interventions such as engineered antibodies and/or T cells. These combined approaches carry additional unknown toxicities and rely on the effectiveness of latency reversal agents. Most challenging of all, these experimental therapies could be prohibitively expensive to implement globally.
The theoretical potential of the anti-proliferative approach is worthy of a clinical trial given the existence of licensed medications that limit T cell proliferation, including MMF. In line with our prediction that duration is more important than potency, these drugs are dosed over months to years for rheumatologic diseases and preventing rejection after solid organ transplant. The most frequent side effects reported are gastrointestinal symptoms and increased risk of infection though the latter risk is obscured by concurrent use of high-dose glucocorticoids31. MMF has been given to several hundred HIV-infected patients suppressed on ART25, 27, 32,33,34,35,36,37,38,39,40 (reviewed in Supplementary information). In this population, neither opportunistic infections nor adverse events were increased, and CD4+ T cell counts did not decrease significantly during therapy. We hypothesize that whereas MMF decreases proliferation of existing CD4+ T cells, it does not suppress thymic replenishment of these cells. Finally, MMF did not counteract the effects of ART25, 27, and we do not expect viral drug resistance or ongoing viral evolution to occur on anti-proliferative therapy. Despite these reassuring findings, future studies of HIV-infected patients on anti-proliferative agents will require extremely close monitoring for drug toxicity and immunosuppression. In addition, mycophenolic acid has a large Hill coefficient, suggesting a narrow therapeutic range. We suspect that the participants who did not respond to MMF in the clinical studies described above25, 27 required higher drug concentrations.
Our model suggests that slowly proliferating cells in the reservoir could present a barrier to rapid eradication of latently HIV-infected cells. Therefore, anti-proliferative strategies may face a challenge akin to the cancer stem cell paradox, whereby only the rapidly proliferating tumor cells are quickly expunged with chemotherapy. For example, tyrosine kinase inhibitors suppress proliferation of cancer cells in chronic myelogenous leukemia (CML). While many patients achieve "undetectable minimal residual disease," some patients relapse to pre-therapy levels of disease following therapy cessation—perhaps due to slowly proliferating residual cancer stem cells41. Additional limitations could include insufficient anti-proliferative drug delivery to anatomic sanctuaries, certain cellular subsets that are unaffected by treatment, and cytokine-driven feedback mechanisms that compensate for decreased proliferation by increasing memory CD4+ T cell lifespan. These challenges might be countered by combining anti-proliferative agents with other cure therapies. Avoidance of nucleoside and nucleotide reverse transcriptase inhibitors, which may enhance T cell proliferation, could provide an important adjunctive benefit42, 43.
The anti-proliferative approach is attractive because it is readily testable without the considerable research and development expenditures required for other HIV cure strategies. Anti-proliferative approaches require minimal potency relative to latency reversing agents, and T cell anti-proliferative medications are well studied mainstays of organ rejection prevention. Therefore, we propose trials with anti-proliferative agents as an important next step in the HIV cure agenda.
Latent reservoir dynamic model
We based our model (schematic in Fig. 1) on previous HIV dynamics models29, 44. We follow the concentrations [cells/μL] of susceptible CD4+ T cells S, latently infected cells L, actively infected cells A, and plasma viral load V [copies/μL] over time. The system of ordinary differential equations (using the over-dot to denote derivative in time)
$$\begin{array}{rcl}\dot{S} & = & {\alpha }_{S}-{\delta }_{S}S-{\beta }_{\varepsilon }SV\\ \dot{L} & = & {\alpha }_{L}L+\tau {\beta }_{\varepsilon }SV-{\delta }_{L}L-\xi L\\ \dot{A} & = & \mathrm{(1}-\tau ){\beta }_{\varepsilon }SV-{\delta }_{A}A+\xi L\\ \dot{V} & = & \pi A-\gamma V\end{array}$$
tracks these state variables. We define α S [cells/μL-day] as the constant growth rate of susceptible cells, δ S [1/day] as the death rate of susceptible cells, and β ε = (1 − ε)β [μL/virus-day] as the therapy-dependent infectivity. We define ε [unitless] as the ART efficacy, ranging from 0 (meaning no therapy) to 1 (meaning perfect therapy). α L and δ L [1/day] are the proliferation and death rates of latent cells, respectively. The death rate of actively infected cells is δ A , and the proliferation rate of activated cells \({\alpha }_{A}\approx 0\) is likely negligible45. τ [unitless] is the probability of latency given infection, and ξ [1/day] is the rate of transition from latent to actively infected cells. The viral production rate is π [virions/cell-day], which describes the aggregate rate of constant viral leakage and burst upon cell death. γ [1/day] is the HIV clearance rate. Parameter values are given in Table 1.
Additional calculations including derivations of equilibrium solutions and stability analysis as well as further discussion of model parameter derivations are presented in the Supplementary information.
The compound interest formula
In the Supplementary information, we determine the critical drug efficacy ε c , the value of ε above which viral load quickly decays. Moreover, when ε > ε c , we can consider the latent cell equation in isolation:
$$\dot{L}={\alpha }_{L}L-{\delta }_{L}L-\xi L.$$
Defining the initial number of latent cells as L 0 gives
$$L={L}_{0}{e}^{({\alpha }_{L}-{\delta }_{L}-\xi )t}.$$
Equation (3) implies that the clearance rate of latently infected cells is a function of their proliferation, death, and activation rates. Defining the total clearance rate θ L = α L − δ L − ξ, we see a mathematical correspondence to the principle of continuous compound interest with L 0 as the principal investment and θ L as the interest rate:
$$L={L}_{0}{e}^{{\theta }_{L}t}.$$
Experimental measurements indicate an average latent cell half-life of 44 months (θ L = −5.2 × 10−4 per day)1, 26 and an average latent reservoir size L 0 of one-million cells1. Note that when θ L < 0, the latent reservoir is cleared exponentially. Alternatively, if α L exceeds the sum of ξ and δ L , L grows indefinitely.
Composition of the latent reservoir: modeling T cell subsets
We include heterogeneity in T cell phenotype into the model by splitting the differential equation for the latent cells into three differential equations, one for each subtype L i with \(i\in [cm,em,n]\). We ignore transitions between phenotype because the composition of the reservoir is reasonably stable over time12. Our extended model is the system
$${\dot{L}}_{i}={\theta }_{i}{L}_{i}.$$
The total number of latent cells is the sum of the subset populations, \(L={\sum }_{i}\,{L}_{i}\), and solution is
$$L(t)=\sum _{i}\,{L}_{i}\mathrm{(0)}{e}^{{\theta }_{i}t}$$
where θ i = α i − δ i − ξ, and L i (0) are the initial numbers of each subtype.
Simulations assume the same net clearance rate and activation rates among subsets, but different proliferation rates α i and different calculated death rates δ i = α i − θ L − ξ. The initial conditions for each subtype L i (0) are inclusive of several varying measurements in the literature6, 12, 23. We consider Ttm to have the same proliferation rates as Tcm. Similarly, we characterize stem-cell-like memory CD4+ T cells (Tscm) as Tn given their slow turnover rate. Of note, these are conservative estimates that would not favor anti-proliferative therapy. In Fig. 5, we allow the anti-proliferative potency to decrease over time by assuming \({\alpha }_{i}(t)={\alpha }_{i}[1+({\varepsilon }^{AP}-1)\,\exp (-\varphi t)]\) for each T cell subset in equation 5. Here ϕ is the waning potency rate that ranges from 0–20% per month. We assume the initial potency is a 5-fold decrease ε AP = 5, and we use a 1 million cell reservoir having 1% naïve T cells. We solve the equation for each subset numerically with \({\mathtt{ode23s}}\) in Matlab, summing the subset dynamics after solving.
Reservoir reduction targets for cure strategies
We use experimentally derived thresholds to compare potential cure therapies in the framework of our model. Hill et al. employed a stochastic model to estimate that a 2,000-fold reduction in the latent pool would result in HIV suppression off ART for a median of one year. After a 10,000-fold reduction in latent cells, 50% of patients would remain functionally cured (ART-free remission for at least 30 years)21. Pinkevych et al. inferred from analytic treatment interruption data that decreasing the latent reservoir 50–70-fold would lead to HIV remission in 50% of patients for one year22. Using the Pinkevych et al. results, we extrapolate a functional cure threshold as a 2,500-fold decrease in the reservoir size (Supplementary information). Given ongoing debate in the field, we consider all four thresholds—henceforth referred to as Hill 1-yr, Hill cure, Pinkevych 1-yr, and Pinkevych cure.
To examine the full range of possible outcomes we completed a global sensitivity analysis of the model in which all variables were simultaneously varied in the ranges of Table S5 by logarithmically covering Latin Hypercube sampling24. The simulations were carried out in Matlab using \({\mathtt{lhsdesign}}\) and \({\mathtt{ode23s}}\). We correlated each parameter of interest with the time to reach the Hill and Pinkevych cure thresholds. Calling the time-to-cure T, correlations were calculated with the Pearson correlation coefficient: the covariance of T with each parameter of interest p normalized by both the standard deviation of T and that of p, that is \(\rho ={\rm{cov}}(T,p)/{\sigma }_{T}{\sigma }_{p}\). 1,000 simulations were carried out, keeping only the parameter combinations leading to reservoir decay, i.e. those satisfying \({R}_{0}^{ART} < 1\).
Mycophenolic acid anti-proliferation assay methods
Blood samples for the MPA in vitro studies were obtained from ART-treated, HIV-infected and healthy, HIV-negative men at the HIV Vaccine Trials Unit Clinic in Seattle, Washington. All procedures were approved by the Institutional Review Boards of the University of Washington and the Fred Hutchinson Cancer Research Center (IRB 1830 and 5567) and were performed in accordance with institutional guidelines and regulations. Written informed consent was obtained from each donor.
Cells were labeled using the CellTrace Violet Cell Proliferation Kit (Invitrogen) by incubation in 40 μM CellTrace Violet in Roswell Park Memorial Institute (RPMI) cell culture media with penicillin/streptomycin and L-glutamine (Gibco) plus 10% fetal bovine serum (Gemini Bio-Products) (R-10 media) for five minutes at room temperature46 followed by washing twice with R-10. Peripheral blood mononuclear cells were stimulated with 1 μg/mL staphylococcal enterotoxin B (SEB; Sigma-Aldrich) and 10 IU/mL IL-2 (Peprotech). Sorted CD4+ T cell subsets (naïve, effector memory, and central memory) were stimulated with Dynabeads Human T-Activator CD3/CD28 beads (Gibco) at a bead to cell ratio of 1:1 with 10 IU/mL IL-2. CEM cells were not stimulated, as they proliferate continuously. Pure mycophenolic acid (Sigma-Aldrich), the active metabolite of MMF, was added at concentrations ranging from 0.01 to 2.56 μM. Cells were cultured in R-10 for 72 h.
After the culture period, cells were washed and stained with Fixable Live/Dead Yellow (Invitrogen), followed by CD45RA FITC, CD4 PE-Cy5, CCR7 BV785 (all BD), and CD3 ECD (Beckman Coulter) at the minimum saturating doses. Cells were then fixed with 1% paraformaldehyde and acquired on a five-laser BD LSRII flow cytometer (355, 405, 488, 535, and 633 nm). Live, single CD4+ T cells were gated into "proliferated" or "not proliferated" on the basis of CellTrace Violet fluorescence.
The IC50s and Hill slope were calculated using the \({\mathtt{drc}}\) package in \({\mathtt{R}}\) (Supplementary information)47, 48.
Siliciano, J. D. et al. Long-term follow-up studies confirm the stability of the latent reservoir for HIV-1 in resting CD4+ T cells. Nat Med 9, 727–728 (2003).
Finzi, D. et al. Latent infection of CD4+ T cells provides a mechanism for lifelong persistence of HIV-1, even in patients on effective combination therapy. Nat Med 5, 512–517 (1999).
Martin, A. & Siliciano, R. Progress toward HIV eradication: Case reports, current efforts, and the challenges associated with cure. Annu Rev Med 67, 011514–023043 (2016).
Hütter, G. et al. Long-term control of HIV by CCR5Δ32 stem-cell transplantation. N Engl J Med 360, 692–698 (2009).
Bull, M. E. et al. Monotypic human immunodeficiency virus type 1 genotypes across the uterine cervix and in blood suggest proliferation of cells with provirus. J Virol 83, 6020–6028 (2009).
Jaafoura, S. et al. Progressive contraction of the latent HIV reservoir around a core of less-differentiated CD4+ memory T cells. Nat Comm 5, 5407, doi:10.1038/ncomms6407 (2014).
Palmer, S. et al. Low-level viremia persists for at least 7 years in patients on suppressive antiretroviral therapy. Proc Natl Acad Sci USA 105, 3879–3884 (2008).
Article ADS CAS PubMed PubMed Central Google Scholar
Von Stockenstrom, S. et al. Longitudinal genetic characterization reveals that cell proliferation maintains a persistent HIV-1 DNA pool during effective HIV therapy. J Infect Dis 1, 596–607 (2015).
Wagner, T. et al. An increasing proportion of monotypic HIV-1 DNA sequences during antiretroviral treatment suggests proliferation of HIV-infected cells. J Virol 87, 1770–1778 (2013).
Wagner, T. A. et al. Proliferation of cells with HIV integrated into cancer genes contributes to persistent infection. Science 345, 570–573 (2014).
Maldarelli, F. et al. HIV latency. Specific HIV integration sites are linked to clonal expansion and persistence of infected cells. Science 345, 179–83 (2014).
Chomont, N. et al. HIV reservoir size and persistence are driven by T cell survival and homeostatic proliferation. Nat Med 15, 893–900 (2009).
Bui, J. K. et al. Proviruses with identical sequences comprise a large fraction of the replication-competent HIV reservoir. PLoS Path 13, e1006283 (2017).
Bonhoeffer, S., Coffin, J. M. & Nowak, M. A. Human immunodeficiency virus drug therapy and virus load. J Virol 71, 3275–3278 (1997).
Callaway, D. & Perelson, A. HIV-1 infection and low steady state viral loads. Bull Math Biol 64, 29–64 (2002).
Article PubMed MATH Google Scholar
Shen, L. et al. A critical subset model provides a conceptual basis for the high antiviral activity of major HIV drugs. Sci Transl Med 3, 91ra63, doi:10.1126/scitranslmed.3002304 (2011).
Brodin, J. et al. Establishment and stability of the latent HIV-1 DNA reservoir. eLife 5, e18889, doi:10.7554/eLife (2016).
Fuller, D. H. et al. Therapeutic DNA vaccine induces broad T cell responses in the gut and sustained protection from viral rebound and AIDS in SIV-infected rhesus macaques. PLoS One 7, e33715, doi:10.1371/journal.pone.0033715 (2012).
Aubert, M. et al. Successful targeting and disruption of an integrated reporter lentivirus using the engineered homing endonuclease Y2 I-AniI. PLoS One 6, e16825, doi:10.1371/journal.pone.0016825 (2011).
Peterson, C., Younan, P., Jerome, K. & Kiem, H.-P. Combinatorial anti-HIV gene therapy: using a multipronged approach to reach beyond HAART. Gene Ther 20, 695–702 (2013).
Hill, A., Rosenbloom, D., Fu, F., Nowak, M. & Siliciano, R. Predicting the outcomes of treatment to eradicate the latent reservoir for HIV-1. Proc Natl Acad Sci USA 111, 15597, doi:10.1073/pnas.1406663111 (2014).
Pinkevych, M. et al. HIV reactivation from latency after treatment interruption occurs on average every 5–8 days? Implications for HIV remission. PLoS Pathog 11, e1005000, doi:10.1371/journal.ppat.1005000 (2015).
Buzon, M. J. et al. HIV-1 persistence in CD4+ T cells with stem cell-like properties. Nat Med 20, 139–142 (2014).
Marino, S., Hogue, I., Ray, C. & Kirschner, D. A methodology for performing global uncertainty and sensitivity analysis in systems biology. J Theor Biol 254, 178–196 (2008).
Article MathSciNet PubMed PubMed Central Google Scholar
Chapuis, A. G. et al. Effects of mycophenolic acid on human immunodeficiency virus infection in vitro and in vivo. Nat Med 6, 762–768 (2000).
Crooks, A. M. et al. Precise quantitation of the latent HIV-1 reservoir: implications for eradication strategies. J Infect Dis 212, 1361–1365 (2015).
Garca, F. et al. Effect of mycophenolate mofetil on immune response and plasma and lymphatic tissue viral load during and after interruption of highly active antiretroviral therapy for patients with chronic HIV infection: a randomized pilot study. J Acquir Immune Defic Syndr 36, 823–830 (2004).
Foley, G. E. et al. Continuous culture of human lymphoblasts from peripheral blood of a child with acute leukemia. Cancer 18, 522–529 (1965).
Conway, J. & Perelson, A. Residual Viremia in Treated HIV+ Individuals. PLoS Comput Biol 12, e1004677, doi:10.1371/journal.pcbi.1004677 (2016).
Article ADS PubMed PubMed Central Google Scholar
Ho, Y.-C. et al. Replication-competent noninduced proviruses in the latent reservoir increase barrier to HIV-1 cure. Cell 155, 540–551 (2013).
Mok, C. Mycophenolate mofetil for lupus nephritis: an update. Expert Rev Clin Immunol 11, 1353–1364 (2015).
Müller, E., Barday, Z., Mendelson, M. & Kahn, D. HIV-positive–to–HIV-positive kidney transplantation—Results at 3 to 5 years. N Engl J Med 372, 613–620 (2015).
Stock, P. G. et al. Outcomes of kidney transplantation in HIV-infected recipients. N Engl J Med 363, 2004–2014 (2010).
Kaur, R. et al. A placebo-controlled pilot study of intensification of antiretroviral therapy with mycophenolate mofetil. AIDS Res Ther 3, 16, doi:10.1186/1742–6405–3–16 (2006).
Vrisekoop, N. et al. Short communication: no detrimental immunological effects of mycophenolate mofetil and HAART in treatment-naive acute and chronic HIV-1-infected patients. AIDS Res Hum Retrovir 21, 991–996 (2005).
Sankatsing, S. U. et al. Highly active antiretroviral therapy with or without mycophenolate mofetil in treatment-naive HIV-1 patients. AIDS 18, 1925–1931 (2004).
Press, N. et al. Case series assessing the safety of mycophenolate as part of multidrug rescue treatment regimens. HIV Clin Trials 3, 17–20 (2002).
Margolis, D. M. et al. The addition of mycophenolate mofetil to antiretroviral therapy including abacavir is associated with depletion of intracellular deoxyguanosine triphosphate and a decrease in plasma HIV-1 RNA. J Acquir Immune Defic Syndr 31, 45–49 (2002).
Millan, O. et al. Pharmacokinetics and pharmacodynamics of low dose mycophenolate mofetil in HIV-infected patients treated with abacavir, efavirenz and nelfinavir. Clin Pharmacokinet 44, 525–538 (2005).
Jurriaans, S. et al. HIV-1 seroreversion in an HIV-1-seropositive patient treated during acute infection with highly active antiretroviral therapy and mycophenolate mofetil. AIDS 18, 1607–1608 (2004).
Ross, D. M. et al. Safety and efficacy of imatinib cessation for CML patients with stable undetectable minimal residual disease: results from the TWISTER study. Blood 122, 515–522 (2013).
Hladik, F. A new perspective on HIV cure. F1000Res 4, 77, doi:10.12688/f1000research.4529.1 (2014).
Hladik, F. et al. Mucosal effects of tenofovir 1% gel. eLife 4, e04525 (2015).
PubMed Central Google Scholar
Rong, L. & Perelson, A. Modeling latently infected cell activation: Viral and latent reservoir persistence, and viral blips in HIV-infected patients on potent therapy. PLoS Comput Biol 5, e1000533, doi:10.1371/journal.pcbi.1000533 (2009).
Article ADS MathSciNet PubMed PubMed Central Google Scholar
Perelson, A. S., Kirschner, D. E. & De Boer, R. Dynamics of HIV infection of CD4+ T cells. Math Biosci 114, 81–125 (1993).
Article CAS PubMed MATH Google Scholar
Quah, B. J. & Parish, C. R. New and improved methods for measuring lymphocyte proliferation emphin vitro and in vivo using CFSE-like fluorescent dyes. J Immunol Methods 379, 1–14 (2012).
Ritz, C., Baty, F., Streibig, J. C. & Gerhard, D. Dose-response analysis using R. PLoS One 10, e0146021, journal.pone.0146021 (2015).
R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria (2016).
Macallan, D. C. et al. Rapid turnover of effector-memory CD4+ T cells in healthy humans. J Exp Med 200, 255–260 (2004).
Markowitz, M. et al. A novel antiviral intervention results in more accurate assessment of HIV-1 replication dynamics and T-cell decay in vivo. J Virol 77, 5037–5038 (2003).
Luo, R., Piovoso, M., Martinez-Picado, J. & Zurakowski, R. HIV model parameter estimates from interruption trial data including drug efficacy and reservoir dynamics. PLoS One 7, e40198, doi:10.1371/journal.pone.0040198 (2012).
Hockett, R. D. et al. Constant mean viral copy number per infected cell in tissues regardless of high, low, or undetectable plasma HIV RNA. J Exp Med 189, 1545–1554 (1999).
Ramratnam, B. et al. Rapid production and clearance of HIV-1 and hepatitis C virus assessed by large volume plasma apheresis. Lancet 354, 1782–1785 (1999).
We thank Keith Jerome for his helpful reading of the manuscript, the VIDD faculty initiative at the Fred Hutchinson Cancer Research Center, and the NIH for grants R01 AI116292 to F.H., 1DP2DE023321-01 to M.P., and U19 AI096111 and UM1 AI12662 to J.T.S. We also thank Claire N. Levy and Fernanda Calienes for assisting in the mycophenolic acid experiments. The following reagent was obtained through the NIH AIDS Reagent Program, Division of AIDS, NIAID, NIH: CEM CD4+ T cells from Dr. J. P. Jacobs.
Daniel B. Reeves and Elizabeth R. Duke contributed equally to this work.
Florian Hladik and Joshua T. Schiffer jointly supervised this work.
Fred Hutchinson Cancer Research Center, Vaccine and Infectious Diseases Division, Seattle, WA, 98109, USA
Daniel B. Reeves, Elizabeth R. Duke, Martin Prlic, Florian Hladik & Joshua T. Schiffer
University of Washington, Department of Medicine, Seattle, WA, 98195, USA
Elizabeth R. Duke & Joshua T. Schiffer
University of Washington, Departments of Obstetrics and Gynecology, Seattle, WA, 98195, USA
Sean M. Hughes & Florian Hladik
University of Washington, Department of Global Health, Seattle, WA, 98105, USA
Martin Prlic
Fred Hutchinson Cancer Research Center, Clinical Research Division, Seattle, WA, 98109, USA
Joshua T. Schiffer
Daniel B. Reeves
Elizabeth R. Duke
Sean M. Hughes
Florian Hladik
F.H. and J.T.S. posed the initial question to model the effect of anti-proliferation on HIV latency; D.B.R., E.R.D., and J.T.S. developed the computational model; S.M.H. and F.H. devised, and S.M.H. performed, the in vitro mycophenolic acid experiments; D.B.R. and E.R.D. performed calculations and produced figures; E.R.D., M.P., F.H., J.T.S. performed literature review for parameter values. D.B.R., E.R.D., M.P., F.H., S.M.H., and J.T.S. wrote the manuscript.
Corresponding authors
Correspondence to Florian Hladik or Joshua T. Schiffer.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Reeves, D.B., Duke, E.R., Hughes, S.M. et al. Anti-proliferative therapy for HIV cure: a compound interest approach. Sci Rep 7, 4011 (2017). https://doi.org/10.1038/s41598-017-04160-3
The forces driving clonal expansion of the HIV-1 latent reservoir
Runxia Liu
Francesco R. Simonetti
Ya-Chi Ho
Virology Journal (2020)
Pediatric HIV: the Potential of Immune Therapeutics to Achieve Viral Remission and Functional Cure
Stella J. Berendam
Ashley N. Nelson
Sallie R. Permar
Current HIV/AIDS Reports (2020)
Miles P. Davenport
David S. Khoury
Stephen J. Kent
Nature Reviews Immunology (2019)
Viral diversity is an obligate consideration in CRISPR/Cas9 designs for targeting the HIV reservoir
Pavitra Roychoudhury
Harshana De Silva Feelixge
Keith R. Jerome
BMC Biology (2018)
Nature Communications (2018)
|
CommonCrawl
|
Show that $\tan 3x =\frac{ \sin x + \sin 3x+ \sin 5x }{\cos x + \cos 3x + \cos 5x}$
I was able to prove this but it is too messy and very long. Is there a better way of proving the identity? Thanks.
Matthew Conroy
Keneth AdrianKeneth Adrian
$\begingroup$ Better than what? At least give an outline of your own solution. $\endgroup$ – Henning Makholm Feb 26 '12 at 1:51
$\begingroup$ my solution is very messy using a lot of identities.. It is not elegant after all.I am thinking that there could be a better approach or perspective in solving this kind of problem. thanks $\endgroup$ – Keneth Adrian Feb 26 '12 at 1:56
$\begingroup$ @HenningMakholm : Better than any proof that's "too messy and very long". That's what I would presume is meant. $\endgroup$ – Michael Hardy Feb 26 '12 at 2:20
In old fashioned courses in trigonometry, students were required to remember the identities $$\sin A + \sin B = 2 \sin\left(\frac{A+B}{2}\right) \cos\left(\frac{A-B}{2}\right)$$ and $$\cos A + \cos B = 2 \cos\left(\frac{A+B}{2}\right) \cos\left(\frac{A-B}{2}\right)$$
Applying these formulae in the numerator and denominator, choosing $A = x$ and $B = 5x$ leads to the result immediately.
Alex Becker
A. RainaA. Raina
$\begingroup$ A. Raina: it isn't nice to make one feel old :-) I don't want even to think about how many years have lapsed since I studied prostaphaeresis: en.wikipedia.org/wiki/Prosthaphaeresis $\endgroup$ – Francesco Feb 26 '12 at 8:06
$\begingroup$ I'm so old-fashioned that I write "old-fashioned" with a hyphen. $\endgroup$ – Michael Hardy Feb 27 '12 at 18:07
You want to prove $$\frac{\sin 3x}{\cos 3x}=\frac{\sin x+\sin 3x+\sin 5x}{\cos x+\cos 3x+\cos 5x}$$ Or, in other words, that the two vectors $(\cos3x,\sin3x)$ and $(\cos x+\cos 3x+\cos 5x,\sin x+\sin 3x+\sin 5x)$ are parallel. The latter is the sum of $(\cos x,\sin x)$, $(\cos 3x,\sin 3x)$ and $(\cos 5x,\sin5x)$.
Now, $(\cos x,\sin x)$ and $(\cos 5x,\sin5x)$ both have unit length, so by the parallelogram rule, $(\cos x,\sin x)+(\cos 5x,\sin5x)$ is the diagonal of a rhombus, and by symmetry the direction of the diagonal must be halfway between the angles of the sides -- that is $\frac{x+5x}{2}=3x$. So $(\cos x,\sin x)+(\cos 5x,\sin5x)$ lies even with the $(\cos3x,\sin3x)$ term and the sum of all three vectors is parallel to $(\cos3x,\sin3x)$, as required.
This geometric argument mostly closes the case, but note (because that's how I wrote it at first) that it can be made to look slick and algebraic by moving to the complex plane. Then saying that the two vectors are parallel is is the same as saying that $e^{3xi}$ and $e^{xi}+e^{3xi}+e^{5xi}$ are real multiples of each other.
But $e^{xi}+e^{5xi}=e^{3xi}(e^{-2xi}+e^{2xi})$ and the factor in the parenthesis is real because it is the sum of a number and its conjugate. In particular, by Euler's formula, $$e^{xi}+e^{3xi}+e^{5xi} = e^{3xi}(1+2\cos 2x)$$ and the two vectors are indeed parallel and your identity holds -- except when $\cos 2x=-\frac 12$, in which case the fraction to the right of your desired identity is $0/0$.
$\begingroup$ thanks for the solution. I will recall some topics in complex analysis.. I only use trigometric identities alone, that's why it is to messy.. $\endgroup$ – Keneth Adrian Feb 26 '12 at 2:07
$\begingroup$ @Ken: The use of complex numbers here is really just to simplify the notation. The idea is geometrical and could be explained in $\mathbb R^2$: The two vectors $(\cos x,\sin x)$ and $(\cos 5x,\sin 5x)$ both have unit length and both make an angle of $2x$ with the $(\cos 3x,\sin 3x)$, but on different sides. So when we add them, by symmetry we must get something parallel to $(\cos 3x,\sin 3x)$. $\endgroup$ – Henning Makholm Feb 26 '12 at 2:11
$\begingroup$ (But in general, remembering the complex exponential has great simplifying power when manipulating complex trigonometric expressions, also when there isn't an obvious geometric interpretation). $\endgroup$ – Henning Makholm Feb 26 '12 at 2:17
$\begingroup$ Yes, your solution is spectacular, using tools in complex analysis.. I will take time to fully grasp the perspective you've shown. thanks $\endgroup$ – Keneth Adrian Feb 26 '12 at 2:28
$\begingroup$ You keep saying that word ... but there's no complex analysis going on here. I'm just borrowing the exponential as a compact notation for rotations, and for unit vectors pointing in various directions on the plane. $\endgroup$ – Henning Makholm Feb 26 '12 at 2:36
Notice that $\tan 3x = \sin 3x/\cos 3x$, and if $a/b=c/d$ then $a/b=c/d=(a+c)/(b+d)$, so it's enough to prove $$ \tan 3x = \frac{\sin x + \sin 5x}{\cos x + \cos 5x}. $$ So generally, how does one prove $$ \tan\left(\frac{\alpha+\beta}{2}\right) = \frac{\sin\alpha+\sin\beta}{\cos\alpha+\cos\beta}\ ? $$
One can say that $$ \sin \alpha = \sin\left( \frac{\alpha+\beta}{2} + \frac{\alpha-\beta}{2} \right) $$ $$ \sin \beta = \sin\left( \frac{\alpha+\beta}{2} - \frac{\alpha-\beta}{2} \right) $$ and do the same for the two cosines, then apply the formulas for sine of a sum and cosine of sum. After that, it's trivial simplification.
$\begingroup$ @MichaelHardy, Your proof is elegant.. $\endgroup$ – Keneth Adrian Feb 26 '12 at 2:24
$\begingroup$ For $\tan\left(\frac{\alpha+\beta}{2}\right) = \frac{\sin\alpha+\sin\beta}{\cos\alpha+\cos\beta}$ one can also note that the right-hand side is the slope of the diagonal in a rhombus whose sides make a angles of $\alpha$ and $\beta$ to the horizontal. And the direction of that diagonal must, by symmetry, be $\frac{\alpha+\beta}{2}$. $\endgroup$ – Henning Makholm Feb 26 '12 at 2:39
$\begingroup$ @HenningMakholm : That's worth making into another answer. $\endgroup$ – Michael Hardy Feb 26 '12 at 4:11
$\begingroup$ @HenningMakholm : At en.wikipedia.org/wiki/File:Tan.half.svg , I've credited you with acquainting me with this argument. The picture is now used at en.wikipedia.org/wiki/Tangent_half-angle_formula . $\endgroup$ – Michael Hardy Feb 28 '12 at 19:55
$\begingroup$ @MichaelHardy, +1 for "If $a/b = c/d$, then $a/b = c/d = (a + c)/(b + d)$." =) $\endgroup$ – Jose Arnaldo Bebita-Dris Oct 5 '13 at 14:02
The identities for the sum of sines and the sum of cosines yield $$ \frac{\sin(x)+\sin(y)}{\cos(x)+\cos(y)}=\frac{2\sin\left(\frac{x+y}{2}\right)\cos\left(\frac{x-y}{2}\right)}{2\cos\left(\frac{x+y}{2}\right)\cos\left(\frac{x-y}{2}\right)}=\tan\left(\frac{x+y}{2}\right)\tag{1} $$ Equation $(1)$ implies that $$ \frac{\sin(x)+\sin(5x)}{\cos(x)+\cos(5x)}=\tan(3x)=\frac{\sin(3x)}{\cos(3x)}\tag{2} $$ We also have that if $b+d\not=0$, then $$ \frac{a}{b}=\frac{c}{d}\Rightarrow\frac{a}{b}=\frac{c}{d}=\frac{a+c}{b+d}\tag{3} $$ Combining $(2)$ and $(3)$ yields $$ \tan(3x)=\frac{\sin(x)+\sin(3x)+\sin(5x)}{\cos(x)+\cos(3x)+\cos(5x)}\tag{4} $$
robjohn♦robjohn
This is may be what you came up with, but I don't personally think it's all that bad: Cross-multiply and cancel $\sin3x\cos3x$ from each side. You have $$\cos3x\sin x+\cos3x\sin5x=\sin3x\cos x+\sin3x\sin5x$$ $$\sin3x\cos x - \cos3x\sin x=\cos3x\sin5x - \sin3x\cos 5x$$ By the angle addition/subtraction formulas, both sides are equal to $\sin 2x$.
Brett FrankelBrett Frankel
$\begingroup$ No, what i did is expand the left side and use a lot of trigomertic identities, a lot of them.. it took me 4 pages of bond paper to fully prove the statement $\endgroup$ – Keneth Adrian Feb 26 '12 at 2:10
More generally, for any arithmetic sequence, denoting $z=\exp(i x)$ and $2\ell=an+2b$, we have
$$\begin{array}{c l} \blacktriangle & =\frac{\sin(bx)+\sin\big((a+b)x\big)+\cdots+\sin\big((na+b)x\big)}{\cos(bx)+\cos\big((a+b)x\big)+\cdots+\cos\big((na+b)x\big)} \\[2pt] & \color{Red}{\stackrel{1}=} \frac{1}{i}\frac{z^b\big(1+z^a+\cdots+z^{na}\big)-z^{-b}\big(1+z^{-a}+\cdots+z^{-na}\big)}{z^b\big(1+z^a+\cdots+z^{na}\big)+z^{-b}\big(1+z^{-a}+\cdots+z^{-na}\big)} \\[2pt] & \color{LimeGreen}{\stackrel{2}=}\frac{1}{i}\frac{z^b-z^{-b}z^{-na}}{z^b+z^{-b}z^{-na}} \\[2pt] & \color{Blue}{\stackrel{3}=}\frac{(z^\ell-z^{-\ell})/2i}{(z^\ell+z^{-\ell})/2} \\[2pt] & \color{Red}{\stackrel{1}{=}}\frac{\sin (\ell x)}{\cos(\ell x)}. \end{array}$$
Hence $\blacktriangle$ is $\tan(\ell x)$ - observe $\ell$ is the average of the first and last term in the arithmetic sequence.
$\color{Red}{(1)}$: Here we use the formulas $$\sin \theta = \frac{e^{i\theta}-e^{-i\theta}}{2i} \qquad \cos\theta = \frac{e^{i\theta}+e^{-i\theta}}{2}.$$
$\color{LimeGreen}{(2)}$: Here we divide numerator and denominator by $1+z^a+\cdots+z^{na}$.
$\color{Blue}{(3)}$: Multiply numerator and denominator by $z^{na/2}/2$.
Note: there are no restrictions on $a$ or $b$ - they could even be irrational!
anonanon
Not the answer you're looking for? Browse other questions tagged trigonometry or ask your own question.
Proof of tangent half identity
How to simplify these trigonometry terms? (involving sin, cos, tan, cot)
How would I approach this identity: $\cos^3x+\sin^3x=(\cos x+\sin x)\cdot(1-\sin x\cdot\cos x)$?
Prove that $\sin(n x) + \sin((n+2) x) = 2\cos(x)\sin((n+1) x)$?
Why is $\sin(\alpha+\beta)=\sin\alpha\cos\beta+\sin\beta\cos\alpha$
Proof of trigonometric identity $\frac{\cos x+i\sin x+1}{\cos x+i\sin x-1}= -\frac{i}{\tan \frac{x}{2}}$
Find $\sec \theta + \tan \theta$.
Show that $\frac {\sin x} {\cos 3x} + \frac {\sin 3x} {\cos 9x} + \frac {\sin 9x} {\cos 27x} = \frac 12 (\tan 27x - \tan x)$
Show that $4\cos\frac\pi5\cdot\sin\frac\pi{10}=1$
Verify that $\frac{1}{\tan(x) \csc(x)} = \cos(x)$
Given $\tan\beta=\frac{n\sin\alpha\cos\alpha}{1-n\sin^2\alpha}$, show that $\tan(\alpha-\beta)=(1-n)\tan\alpha$
Proving $\cos\frac{2\pi}{7}+\cos\frac{4\pi}{7}+\cos\frac{6\pi}{7}=-\frac{1}{2}$
|
CommonCrawl
|
Ultra-thin enzymatic liquid membrane for CO2 separation and capture
Yaqin Fu1,2,
Ying-Bing Jiang1,2,3,
Darren Dunphy1,2,
Haifeng Xiong ORCID: orcid.org/0000-0002-3964-46771,2,
Eric Coker ORCID: orcid.org/0000-0002-9382-93734,
Stanley S. Chou4,
Hongxia Zhang5,
Juan M. Vanegas4,6,
Jonas G. Croissant1,2,
Joseph L. Cecchi1,
Susan B. Rempe ORCID: orcid.org/0000-0003-1623-21084 &
C. Jeffrey Brinker1,2,4
Nature Communications volume 9, Article number: 990 (2018) Cite this article
Bioinspired materials
An Author Correction to this article was published on 01 June 2018
This article has been updated
The limited flux and selectivities of current carbon dioxide membranes and the high costs associated with conventional absorption-based CO2 sequestration call for alternative CO2 separation approaches. Here we describe an enzymatically active, ultra-thin, biomimetic membrane enabling CO2 capture and separation under ambient pressure and temperature conditions. The membrane comprises a ~18-nm-thick close-packed array of 8 nm diameter hydrophilic pores that stabilize water by capillary condensation and precisely accommodate the metalloenzyme carbonic anhydrase (CA). CA catalyzes the rapid interconversion of CO2 and water into carbonic acid. By minimizing diffusional constraints, stabilizing and concentrating CA within the nanopore array to a concentration 10× greater than achievable in solution, our enzymatic liquid membrane separates CO2 at room temperature and atmospheric pressure at a rate of 2600 GPU with CO2/N2 and CO2/H2 selectivities as high as 788 and 1500, respectively, the highest combined flux and selectivity yet reported for ambient condition operation.
Carbon dioxide (CO2) is the most important anthropogenic greenhouse gas in the atmosphere1,2,3. According to the 2014 report of the World Meteorological Organization4, atmospheric CO2 reached 142% of its pre-industrial level in 2013, primarily because of emissions from the combustion of fossil fuels and production of cement. In November 2016, the Paris Accord was ratified with the goal of maintaining a global temperature rise of only 2 °C above pre-industrial levels during this century. However, the realization of this goal is imperiled by the cost of CO2 sequestration. Seventy percent of the cost of capturing of CO2 involves separation from other gases.
The conventional process for CO2 capture involves reversible absorption3,5, which consumes high amounts of energy and is costly with a high environmental impact3. More efficient and environmentally friendly separation processes are needed, and in this context, membrane separation represents a promising approach due to its greater energy efficiency, processability, and lower maintenance costs5,6,7. Membranes enabling selective and efficient removal of CO2 from fuel gas (containing CO, H2, H2O, and H2S) or flue gas (containing N2, O2, H2O, SO2, NOx, and HCl) could be of great economic value8. An efficient membrane should have both high permeance and selectivity. Permeance is the flux of a specific gas through the membrane, typically reported in Gas Permeation Units (GPUs) (1 GPU = 10−6 cm3 (STP) cm−2 s−1 cm−1 Hg−1). Selectivity is the capacity to separate two or more gases, typically reported as a dimensionless ratio of flux. Porous membranes usually exhibit a high CO2 flux, but due to pore size variability, they often display a poor selectivity. Notable exceptions are zeolite membranes whose sub-nanometer pore size is defined by the zeolite crystallographic lattice and is monodisperse. Recently Korelskiy et al. reported an H-ZSM-5 zeolite membrane exhibiting a CO2/H2 selectivity of ca. 200 and a CO2 permeance of ca. 17,000 when operated at 9 bars and −43 °C9,10. Dense membranes, typically polymers, exhibit moderate selectivity, but the CO2 flux is usually low because of the low solubility and diffusivity of CO2. In general, most existing membranes exhibit a sharp trade-off between flux and selectivity and are so far impractical for CO2 capture applications2,7,11,12.
Three factors govern membrane flux and selectivity: (1) how fast the species to be separated can enter into or exit from the membrane, (2) how selectively it can enter into or exit from the membrane, and (3) how fast it can be transported through the thickness of the membrane. Not surprisingly, biological systems maximize the combination of these factors as separation processes typically take place in an ultra-thin liquid layer aided by enzymes that catalyze the selective and rapid dissolution and regeneration of the target species (increasing solubility and selectivity), and short diffusion distances combined with higher diffusivity within liquid vs. solid media maximize transport rates. For CO2 in particular, the respiratory system of vertebrates is an excellent case in point. Red blood cells employ carbonic anhydrase (CA) enzymes to rapidly and selectively dissolve the CO2 produced by tissues and regenerate the CO2 exhaled from the lung. CAs represent a family of metalloenzymes that catalyze the rapid interconversion of CO2 and water into carbonic acid H2CO3 (Eq. 1), which dissociates to bicarbonate (HCO3–) and protons according to the prevailing species concentrations (Fig. 1 and Supplementary Fig. 1). Carbonic anhydrases are necessarily one of the fastest enzymes with reported catalytic rates ranging from 104 to 106 reactions per second, meaning that one molecule of CA can catalyze the hydration/dissolution of 10,000 to 1,000,000 molecules of CO2 per second13,14.
$${\mathrm{CO}}_2 + {\mathrm{H}}_2{\mathrm{O}} \Leftrightarrow {\mathrm{H}}_2{\mathrm{CO}}_3 \Leftrightarrow {{\mathrm{HCO}}_3}^ - + {\mathrm{H}}^ +$$
Carbonic anhydrase enzyme and its CO2 capture and regeneration mechanism. a Ribbon representation of the carbonic anhydrase (CA) enzyme. b Active site of CA determined by molecular simulations (vide infra). A zinc ion (Zn2+) surrounded by three coordinating histidines and a water molecule comprises the active site. c Depiction of the overall catalytic cycle for CO2 hydration to HCO3– with zinc as the metal in the CA active site. This reaction is driven by a concentration gradient: clockwise when the CO2 concentration is greater than HCO3– and counterclockwise when more HCO3– is present. Deprotonation of the zinc-bound water is thought to be rate limiting
The concept of employing CA for CO2 separation was first reported by Ward and Robb who impregnated a cellulose acetate film with a potassium bicarbonate solution containing CA and observed a factor of six increase in CO2 permeability over potassium bicarbonate alone15. Based on a similar concept, Carbozyme Inc. encapsulated an aqueous CA solution within a microporous polypropylene hollow fiber membrane15 and achieved a five times higher CO2 permeability compared to Ward and Robb's membrane. However, the CO2 flux (18.9 GPU16) still fell far short of that needed for practical CO2 sequestration, since a CO2 capture cost below $20–40 per ton is required by the U.S. Department of Energy17, which translates into a CO2/N2 selectivity higher than 30–50 as well as a CO2 permeance higher than 300–3000 GPU17,18. Inherent problems/limitations of CA membranes developed to date are thickness (10–100 µm,), which establishes the diffusion length and limits flux, and CA concentration, which governs the CO2 dissolution and regeneration rates, but is limited in by the enzyme solubility (typically <1 mM).
Here, in order to overcome the limitations of current CO2 membranes and exceed the DOE requirements for CO2 sequestration, we have developed an ultra-thin, CA-catalyzed, liquid membrane nano-stabilized via capillary forces for CO2 separation (see Fig. 2). It comprises oriented, close-packed arrays of 8 nm diameter hydrophilic cylindrical nanopores (silica mesopores19), whose effective thickness (i.e., the hydrophilic pore length/depth) is defined by oxygen plasma treatment to be ~18 nm. Through capillary condensation, the pores are filled with water plus CA enzymes confined and stabilized to high pressures by nanoconfinement (approximately the capillary pressure, ~35 atmospheres) exerted by water within a hydrophilic 18 nm diameter nanopore). Due to the exceptional thinness of the membrane and the high effective concentration of CA within the close-packed arrangement of nanopores, we demonstrate (under approximately ambient conditions of pressure and temperature) unprecedented values of combined CO2 flux (as high as 2600 GPU) and CO2/N2 selectivity (as high as 788). Because the CO2 selectivity derives from that of the confined CA enzyme, the enzymatic liquid membrane also exhibits high CO2/H2 selectivity (as high as 1500).
Enzymatic liquid membrane design and mechanism of CO2 capture and separation. a The membrane is fabricated by formation of ~1-µm-deep oriented arrays of 8 nm diameter cylindrical silica [SiO2] mesopores within the larger 50–150-nm pore channels of a 50-μm-thick porous alumina [Al2O3] Whatman© Anodisc support. b Using atomic layer deposition and oxygen plasma processing (described in text and Fig. 4) the silica mesopores are engineered to be hydrophobic (trimethylsilyl (Si(CH3)3) surface groups) except for an 18-nm-deep region at the pore surface, which is hydrophilic (≡Si-OH surface groups). Via capillary condensation, CA enzymes and water spontaneously fill the hydrophilic mesopores to form an array of nano-stabilized CA enzymes with an effective CA concentration >10× of that achievable in solution. CA catalyzes the capture and dissolution of CO2 as carbonic acid (HCO3–) moieties at the upstream surface and regeneration of CO2 at the downstream surface (see Fig. 1c). The high concentration of CA and short diffusion path length maximizes capture efficiency and flux
Ultra-thin hydrophilic nanoporous membrane fabrication
The enzymatic liquid membrane was fabricated using a four step-process (Figs. 3 and 4). Step 1 involved the fabrication of an architecture that both stabilizes water and can accommodate CA enzymes (vide infra). The oriented Anodisc pores were thus sub-divided into smaller, oriented, 8 nm diameter cylindrical pores via deposition of P123 block copolymer templated mesoporous silica using the so-called evaporation-induced self-assembly (EISA20,21, see Methods). In this process, the Anodisc pore channels are filled to a depth of about 1 µm with a cylindrical hexagonal P123/silica mesophase (space group p6mm), which when confined to a cylindrical channel orients parallel to the channel axis (see Fig. 3c–f). Calcination at 400 °C is used to remove the P123 template resulting in oriented 8 nm diameter nanopores (see Fig. 3c, d) whose pore surfaces are terminated with hydrophilic surface silanol groups (≡Si-OH). Note that surfactant removal can be accomplished at room temperature by UV/ozone or oxygen plasma treatment22. Hydrophilic 8 nm diameter nanopores are large enough to accommodate CA (~5.5 nm in diameter) within a confined water layer and small enough to spontaneously fill with water above ~75% relative humidity (RH) (vide infra). However, the thickness of the resulting nano-stabilized liquid membrane would be ~1 µm far exceeding that of natural membranes. In order to reduce the effective thickness of the nano-stabilized liquid membrane, we conducted two steps of surface modification (Steps 2 and 3, Fig. 4). In step 2, using an atomic layer deposition (ALD) apparatus, we treated the membrane with ozone to maximize the surface silanol coverage and then conducted five cycles of alternating (hexamethyldisilizane (HMDS) and trimethylchlorosilane (TMCS)) and H2O vapor exposures to quantitatively replace hydrophilic surface silanol groups with hydrophobic trimethylsilyl groups (Si(CH3)3. In step 3, we then exposed the membrane to a remote oxygen plasma for 5 s to re-convert hydrophobic trimethylsilyl groups to hydrophilic silanol groups at the immediate membrane surface. The mechanism of this plasma-nanopore-modification has been described by us previously21,23. Briefly, reactive radicals generated in a low-pressure oxygen plasma are mainly charged ions that cannot penetrate deeply into the nanoporous support, because the plasma Debye length (~20 cm under our conditions) is much larger than the pore size (~8 nm). In order to confirm the hydrophilicity of the plasma-modified nanoporous membrane surface and the hydrophobicity of the HMDS-modified surface, the water contact angle was measured with a Biolin Scientific Theta Optical Tensiometer. Fig. 5a shows the hydrophilic surface to have a contact angle of nearly 0° (note since the water droplet for the measurement is about 0.05 ml, not all water can be adsorbed in the nanopores, and some excessive free water remains on the surface) consistent with a superhydrophilic surface stemming from the hydrophilic surface chemistry and nanoscale roughness24. In comparison, the water contact angle of the HMDS-modified surface was ~150° consistent with a superhydrophobic surface stemming from the hydrophobic surface chemistry plus nanoscale roughness25.
Electron microscopy images of the membrane hierarchical macro-structure and nano-structure. a Cross-sectional SEM image of the Anodisc support showing oriented ~50-nm-wide pore channels near the top surface (scale bar: 5 μm). b Plan-view TEM image of focused ion beam (FIB)-sectioned Anodisc surface showing complete filling of all Anodisc pore channels with ordered arrays of silica mesopores (scale bar: 100 nm). (Note: FIB sectioning served to etch the alumina leaving only the silica mesopore arrays. Silica mesopore arrays not perfectly aligned normal to imaging axis appear as stripe patterns). c Cross-sectional TEM image of the Anodisc surface showing oriented arrays of 8 nm diameter cylindrical mesopores filling the Anodisc pores (scale bar: 100 nm). d, e Higher magnification cross-sectional TEM image showing oriented array of 8 nm diameter cylindrical mesopores filling a single Anodisc pore (scale bar d: 100 nm; scale bar e: 50 nm). f Plan-view TEM image of silica mesopore array at membrane surface showing hexagonal close packing of cylindrical mesopores (scale bar: 20 nm)
Design steps of the enzymatic liquid membrane. Beginning with a 50-µm-thick Anodisc support, Step 1 comprises the formation of oriented arrays of 8 nm diameter cylindrical silica mesopores within the 50–150 nm diameter Anodisc pores via evaporation-induced self-assembly followed by calcination to remove the P123 surfactant. In Step 2, three alternating cycles of atomic layer deposition (ALD) of HMDS ((CH3)3-Si-N-Si(CH3)3) + TMCS (Cl-Si(CH3)3) followed by water are conducted to convert the hydrophilic silanol-terminated mesopore surfaces to hydrophobic Si-O-Si(CH3)3 surfaces throughout the 1 µm length of the mesopore. In Step 3 a remote oxygen plasma treatment is used to regenerate hydrophilic silanol groups to a depth of 18 nm on the top surface. In Step 4 an aqueous solution of CA is introduced on the top surface. Through capillary condensation, water plus enzymes fill the mesoporous silica array. a images represent the processing steps and b images represent the corresponding surface chemistries
Hydrophilicity depth characterization of the enzymatic liquid membrane. a Representation of the opposing hydrophilic/phobic character of the membrane surfaces and corresponding water contact angle characterization. b Control experiments designed to probe the hydrophilicity depth of the 'amphiphilic' membrane. The hydrophilic surface modified membrane (top row) showed a limited (18-nm-deep) atomic layer deposition (ALD) of titanium oxide (TiO2), mapping the depth of silanol groups (scale bar: 50 nm). In contrast, the fully hydrophilic membrane (bottom row) shows TiO2 deposition throughout its thickness. The effective membrane thickness of the enzymatic water-membrane for CO2 capture and separation is thus determined to be ~18 nm
In order to estimate the depth of the hydrophilic plasma-modified surface layer, we compared TiO2 ALD on the original hydrophilic mesoporous silica membrane with TiO2 ALD on the HMDS plus oxygen plasma-modified 'amphiphilic' membrane, using conventional TiCl4 and H2O vapor as the TiO2 ALD precursors. It is well established that TiO2 ALD requires a hydrophilic (normally hydroxylated) surface to initiate deposition; therefore, the formation of TiO2 can be used to 'map' the hydrophilic surface chemistry. Fig. 5b shows the EDS-based Ti elemental mapping of cross-sectional samples, where the brightness corresponds to the Ti concentration. The bottom row is a cross-section of the original mesoporous silica membrane, where we observe Ti deposition throughout the ~250-nm-thick section (membrane top surface is on top) as expected from the hydroxylated surface chemistry. The top row shows that Ti ALD on the HMDS-plasma-modified amphiphilic membrane is confined to an ~18-nm-deep hydroxylated region on the immediate surface—this depth establishes the effective thickness of the confined liquid membrane to be only 18 nm (vide infra).
Sub-20-nm-thick enzymatic liquid membrane fabrication
Having successfully fabricated an ultra-thin hydrophilic nanoporous layer on the hydrophobic support, we next introduced CA enzymes into the hydrophilic nanopores by simple immersion of the sample in an aqueous enzyme solution with a CA concentration of 0.05 mM (Step 4, Fig. 4) After moderate bath sonication for 10 min, the sample was withdrawn from the solution and allowed to 'dry' in a horizontal configuration. During this evaporation process, the CA enzyme solution is concentrated and stabilized within the hydrophilic nanopores via capillary forces to form an ultra-thin liquid membrane containing CA enzymes. Since the superhydrophobic pores repel water, the thickness of the CA containing liquid membrane is defined by the thickness of the hydrophilic nanoporous layer, which was determined to be about 18 nm (Fig. 5b).
Direct observation of the formation and thickness of the liquid membrane is challenging. However, by measurement of the mass of water adsorbed within a defined area of the amphiphilic nanoporous membrane, we can calculate the effective liquid membrane thickness according to its geometry. In order to perform this experiment, we used a quartz crystal microbalance (QCM) to measure the mass of water adsorbed within the nanoporous membrane deposited onto the active area of the QCM and processed identically to the membrane deposited on the Anodisc support, i.e., by HMDS/TMCS ALD followed by plasma processing. To confirm the structural similarity of the films deposited on the QCM and AO support surfaces, we performed grazing-incidence small-angle scattering (GISAXS). Fig. 6a, b compares the respective GISAXS data where we observe nearly identical patterns confirming the structural similarity of the samples. Then we introduced coated-QCM devices into an environmental chamber and performed water adsorption isotherms. Fig. 6d compares the H2O adsorption isotherms of nanoporous silica films processed before and after plasma processing, where 0% RH corresponds to samples purged using pure dry N2 for more than 1 h. For the original HMDS/TMCS-treated hydrophobic nanoporous silica membrane (referred to as 'hydrophobic' in Fig. 6d), the mass of the sample shows a small increase with increasing RH, probably due to water vapor adsorption by randomly scattered hydrophilic micropores that are inaccessible to HMDS/TMCS molecules during ALD. For the membrane prepared by HMDS/TMCS ALD followed by plasma irradiation (referred to as 'amphiphilic' in Fig. 6d), the mass of water adsorbed increases abruptly at about 75% RH consistent with spontaneous water absorption by capillary condensation and the formation of the nano-stabilized liquid membrane (vide infra). The 4.82 µg mass increment at RH = 75% corresponds to a volume of 4.82 × 10−6 cm3 of water. Assuming a 50% volumetric porosity of the nanoporous silica membrane (as is typical for P123-templated mesoporous silica) and using the geometric surface area of 4.91 cm2 for the 25 mm diameter QCM sensor, we calculate the corresponding water layer thickness to be 19.6 nm, which is in reasonable agreement with the 18 nm thickness observed according to the TiO2-ALD control experiments (Fig. 5b).
GISAXS and water adsorption isotherm characterization of hydrophobic and amphiphilic membranes. a GISAXS characterization of the hydrophobic and amphiphilic-surface-modified membrane showing a similar mesoporous structure. b Linecuts of GISAXS showing the similarity of two membranes. c Photograph of a 25-mm-wide quartz crystal microbalance (QCM) mass sensor coated with a P123-templated mesoporous silica film. d QCM mass increase as a function of % relative humidity (RH) for the hydrophobic and amphiphilic membranes showing sudden water adsorption in the amphiphilic membrane (at RH = 75%) due to the capillary condensation of water vapor on the hydrophilic top surface. The mass of condensed water vapor establishes the water volume, from which we calculate the average thickness of the nano-stabilized liquid membrane geometrically assuming that the amphiphilic-surface modified membrane uniformly coats the QCM device
In order to prove the formation and the air-tightness of the liquid membrane, the permeance of N2 (maintained at 95% RH) through the membrane (prepared as described above) was measured using a bubble flow rate meter for a 1 atm pressure difference. The permeance of N2 through the membrane was almost undetectable, whereas, in contrast, the N2 permeance through the completely hydrophobic sample (i.e., prepared without plasma irradiation, and thereby, having no stabilized water layer) was measured to be 340 sc cm cm−2 atm−1. As a further control, we also measured the permeance of CO2 (maintained at 95% RH) through the membrane prepared as described above, but without the CA enzymes, i.e., through the ultra-thin stabilized water layer. In this case the CO2 permeance was undetectable. These results indicate that the ultra-thin CA containing liquid membrane is continuous and essentially defect-free. One conceivable concern might be how to ensure that the liquid membrane is stable and will not 'dry out' in real-world applications. As previously discussed, this concern is alleviated by maintaining the membrane at a sufficient relative humidity where, due to capillary condensation, the uniformly sized hydrophilic nanopores remain water-filled. According to the Kelvin equation, capillary condensation for a hydrophilic pore occurs at a relative humidity RH defined by: ln(RH) = −(2γVm/rRT), where γ and Vm are the surface tension and the molar volume of water, r is the pore radius, T is the temperature in Kelvin, and the R constant (8.32 J mol−1 K−1). For the 8 nm diameter pores of our membrane, the Kelvin equation predicts condensation to occur at an RH equal to or exceeding 75%, which is consistent with the water adsorption 'step' observed in Fig. 6d. A typical flue gas comprises 6.2 wt% H2O if it is from a coal-fired plant and 14.6 wt% H2O if from a gas-fired plant. Both are much higher than the saturated water vapor concentration at 40 °C (~50 g H2O kg–1 air or 0.5 wt% H2O). Therefore, the humidity requirement to maintain membrane stability can be easily satisfied if the membrane is used to capture CO2 from power plant flue gas or used in any moderate humidity environment (see Supplementary Discussion).
Another potential concern is that of the liquid membrane strength, e.g., will the liquid membrane be ruptured when applying pressurized gas for separation? Here, the uniform nano-sized dimensions of the hydrophilic pores assure mechanical stability: the capillary pressure of water condensed within a pore can be calculated according P = 2γcosθ/d (where γ is the water-air surface tension and d is the pore diameter). For water confined within 8 nm diameter hydrophilic pores, where the contact angle θ equals zero, the capillary pressure is about 35 atm (Supplementary Discussion). Therefore, under regular operations like CO2 capture from flue gas, where the gas pressure is typically less than several atmospheres, the capillary pressure is more than sufficient to stabilize the membrane and prevent its displacement into the hydrophobic portion of the membrane nanopores.
Enzymatic liquid membrane performance
So far, we have demonstrated an 'air-tight', ultra-thin, stable, enzyme-containing liquid membrane formed on an Anodisc support. Next, we measured the CO2 permeance of the enzymatic liquid membrane fabricated with mammalian or extremophile CA enzymes at various temperatures and pH values (Fig. 7a, b). We then determined and compared our experimentally observed CO2/N2 separation efficiency and CO2 flux performance with other as-reported CO2 membranes, and the corresponding data are plotted in Fig. 7f.
CO2 separation applications of the enzymatic liquid membrane. a, b Comparison of the CO2 permeance of Bovine CA and Desulfovibrio vulgaris CA enzymatic liquid membranes as a function of the pH. c, d Comparison of the CO2 permeance of Bovine CA and Desulfovibrio vulgaris CA enzymatic liquid membranes as a function of the temperature. e FTIR spectrum of the enzymatic liquid membrane indicating an effective areal density (EAD) of 8.0 × 1011 CA cm–2 that is consistent with the loading of two CA enzymes per nanopore. f Selectivity vs. permeance plot (log 5 scale) of the enzymatic liquid membrane compared with various membranes. The green stars are performance range for our ultra-thin enzymatic liquid membrane; the red triangle is the performance for the Carbozyme CA membrane19; the blue dots, the ocean blue circle, the yellow prism, the light blue pentagon, the orange square, and the black hexagon are CO2 membrane performance data respectively from references7,49,50,51,52,53,10. The selectivity of the ZSM-5 membranes (black hexagon)10 increases from 17 to 210 as the temperature decreases from 37 to −43 °C. Error bars represent 95% confidence intervals for experiments performed with n = 3
Figure 7a compares the CO2 permeances at different temperatures for two types of CA enzymes: CA derived from mammalian bovine erythrocytes and CA derived from Desulfovibrio vulgaris—an extremophile bacteria that survives under conditions of 5 °C and pH 10. For the Bovine CA enzyme, the permeance, resulting from CA mediated CO2 dissolution (Eq. 1) followed by diffusion across the 18-nm-thick liquid membrane and ex-solution (reverse of Eq. 1) at the hydrophobic interface is temperature dependent and, as expected, is maximized at mammalian body temperature, 30–40 °C. For membranes containing the Desulfovibrio vulgaris CA enzyme, the CO2 permeance is practically temperature independent exceeding that of the bovine CA membrane at low and high temperatures, but found to be less than that of bovine CA at 30–40 °C. Our observed temperature dependent CA activity is in good agreement with that reported by Hooks and Rehm26. Fig. 7b plots CO2 permeance as a function of pH for bovine CA membranes and Desulfovibrio vulgaris CA membranes. Similar to the temperature dependence, the bovine CA membranes performed best at neutral pH, whereas, the Desulfovibrio vulgaris CA membranes exhibited a very moderate pH dependence over the pH range 2–10 and exhibited higher CO2 permeance at both lower and higher pHs.
Figure 7f compares the CO2 separation and permeance performance of our ultra-thin enzymatic liquid membrane to that of other classes of CO2 membranes. The liquid membrane was operated at 37 °C and pH 7.5 with only a chemical potential driving force. The feed gas composition was 20 vol% CO2 in N2 maintained near ambient pressure (36 cm Hg (0.48 bar)) and the collection side comprised a Ca(OH)2 aqueous solution to capture CO2 and maintain a constant chemical potential driving force (see Methods and Supplementary Fig. 3 for setup). With the exception of the high permeance ZSM-5 membrane operated at 9 bars over the temperature range 37 to −43 °C9, other membranes do not have sufficient permeance to satisfy practical CO2 separation requirements. In addition, there is always a sharp compromise between permeance and selectivity. In contrast, our ultra-thin enzymatic liquid membrane exhibits a combination of high CO2 permeance (up to 2600 GPU) and high CO2/N2 selectivity (500–788, see gas chromatography results in Supplementary Fig. 4). To demonstrate the overall utility of our membrane for CO2 separation, we further assessed its ability to perform CO2/H2 separation using a 43% H2 and 57% CO2 gas mixture maintained at ambient pressure (see Supplementary Fig. 5 for setup). In this case we determined CO2/H2 separation factors as high as 1500 (Supplementary Fig. 6), while maintaining CO2 permeances in the same range as for CO2/N2 separations. The stability of the membrane was also demonstrated for a period of three months (Supplementary Fig. 7).
In order to explain the enhanced performance of our CA catalytic membrane, we must reconsider the three steps governing the flux and the selectivity of any membrane considered previously: CO2 capture (step I), HCO3− transport (step II), and CO2 release (step III). In our case, CA enzymes catalyze the selective and rapid dissolution and regeneration of the target species (steps I and III); short diffusion distances combined with the inherent three orders of magnitude higher diffusivity within liquid vs. polymers commonly used for CO2 membranes6,14 maximize transport rates (step II). Liquid membranes containing CA have been reported previously for CO2 separation by Ward and Robb in the 1960s15 and more recently by Carbozyme Inc27. However, the inherent mechanical weakness of the water layer in their membrane configurations limited their membrane thicknesses, to be only as thin as 10–100 microns, about a hundred times thicker than most polymer membranes; therefore, negating the potential advantage of the liquid membrane compared to a polymer membrane. Here, through nanoconfinement, we have created a mechanically stable liquid membrane only ~18-nm-thick. Furthermore, compared to the Ward and Robb and Carbozyme membranes another advantage of our membrane is the high enzyme concentration achieved by confinement within the close-packed array of hydrophilic nanopores (see Fig. 4b). CA enzyme solubility in liquid membranes is in general lower than 0.2 mM. For example, Carbozyme was able to use a CA concentration of only 0.16 mM (5 g l–1) and Ward and Robb were able to use a CA concentration of only 0.06 mM (2 g l–1). In contrast, the high density of hydrophilic nanopores (3.92 × 1011 nanopores per cm2, Supplementary Discussion) in our membrane, if filled with CA, would allow attainment of a significantly higher local CA concentration. To prove this point we performed FTIR spectroscopy of the CA-filled membrane prepared on an IR transparent silicon substrate in the same manner as for the QCM measurements (Fig. 7e). Based on the molar extinction coefficient of the Amide I absorption band at 1640 cm−1 attributed uniquely to the CA enzyme, we calculated a molar concentration of CA that corresponded to a loading of on average of 2 CA enzymes per nanopore yielding an effective CA concentration within the membrane of 3.7 mM (100 mg ml–1), or, an effective areal density of 8.0 × 1011 CA cm–2. This CA concentration is ten times greater than that achievable in solution (~10 mg ml–1) and correspondingly accelerates the rates of selective CO2 dissolution and release from the membrane.
The CO2 permeance was then estimated via the calculated CA enzyme areal density. Here we first considered which step, among steps I–III, is rate-limiting for CO2 permeance. Based on the equilibrium described by Eq. 1, CAs catalyze the dissolution of CO2 from the feed side to form bicarbonates that dissociate into carbonic acids and diffuse through the water layer and eventually be enzymatically converted back to CO2 on the 'downstream' side. Hence, the CO2 capture (step I) as well as the CO2 release (step III) are both dependent on the activity of CA enzymes, whereas the HCO3− transport (step II) does not and is a function of the diffusion coefficient of carbonate species in water. In regards to the diffusion transport (step II), given the known CO2 permeability in pure water15 (210 × 10–9 cm3 (STP) cm sec–1 cm–2 cm–1 Hg–1), the permeance of the designed 20-nm-thick water-membrane should be of 210 × 10–9 cm3 (STP) cm sec–1 cm–2 cm–1 Hg–1 divided by the thickness, which is 0.1 cm3 s–1 cm–2 cm–1 Hg–1, or 105 GPU. Now, since a CO2 permeance of 105 GPU is much larger than the permeance observed (Fig. 7f), it follows that the CA-catalyzed steps I or III are rate limiting. Correspondingly, we can estimate the flux from the experimentally determined areal density of CA (8 × 1011 molecules cm–2) assuming the native enzymatic activity of CA (106 reactions per second15). This results in a calculated CO2 permeance of 8 × 1017 molecules sec–1 cm–2 corresponding to a volumetric flux of 0.03 cm3 sec–1 cm–2. At the 36 cm Hg driving pressure (see Methods), this corresponds to a CO2 permeance of 833 GPU, which is within the measured data range (500–2600 GPU, Fig. 7f), but considerably lower than the highest permeance measured.
Molecular dynamics simulations of nanoconfined CA enzymes
We reasoned that the variability in the measured permeance and the discrepancy between the calculated 'theoretical' value and the highest measured CO2 permeance could be a consequence of nanoconfinement effects that might influence/enhance the enzymatic activity. In order to test this idea, we performed molecular dynamics simulations of the CA enzyme confined within 8 nm diameter silanol-terminated mesopores, under conditions that mimic the operational conditions of the liquid membrane, to characterize atomic-level details of the system. As shown in Fig. 8a, we simulated one or more CA enzymes in a rectangular silica nanopore (inner dimensions of 8 × 8 × 10 nm) that is filled with water at pH 7 (the average silanol density = 5.9 Si–OH nm–2 with 16.5% ionization, see Methods).
Molecular dynamics simulations of carbonic anhydrase confinement within individual mesopore channels of the enzymatic liquid membrane. a, b Setup for the molecular dynamics simulations of CA enzymes confined within a silica nanopore. c CA enzymes rapidly (<100 ns) adsorb to the surface of the silica nanopore and remain in contact with the pore walls for the duration of the simulation, as shown by H-bond contacts between the pore and CA in c. d Root-mean-squared deviation (RMSD) data of the protein backbone and active site atoms shows that the structure of the CA enzymes adsorbed to the pore remains stable during the course of the simulation and close to the crystal structure. e Ribbon representation (left) and close-up of the active site of the CA enzyme show close similarity between the average structure simulated in the nanopore and crystal structure. f Average RMSD data (over the last 50 ns of 300 ns simulations) for 1, 2, and 4 CA enzymes per pore (to simulate varying crowded conditions) shows that the structure of the enzyme is highly robust and resembles that of the free CA structure in solution. Error bars represent 95% confidence intervals for experiments performed with n = 3
The simulations revealed that, initially placed in the center of the nanopore, the CA enzymes rapidly (<100 ns) diffuse toward the walls of the pore (Fig. 8b) and form hydrogen bonds that are sustained throughout the simulation (Fig. 8c). Adsorption to the walls of the nanopore is expected due to the large number of polar and charged (positive and negative) residues on the surface of CA, and is consistent with previous studies showing binding of polypeptides to different silica surfaces28,29. However, adsorbed CA enzymes retain some mobility and are able to move along the silica surface. Different portions of the enzyme contact the pore walls at different times with the active site remaining accessible to the solution and permitting substrate and product molecules to readily diffuse in and out. The structure of the CA enzyme in the nanopore is highly robust, as shown by the root-mean-squared deviation (RMSD) of the backbone and active site atoms compared to the CA crystal structure (Fig. 8d–f), and does not appear to be negatively affected by adsorption to the nanopore. Furthermore, the CA RMSD data for simulations in the nanopore closely resemble the values obtained for the free enzyme in solution (Fig. 8f), even for the case of crowded confinement (2–4 CA enzymes in the nanopore with an effective concentration greater than 150 mg ml–1 within individual nanopores). These results indicate that the enzymatic activity of CA confined within silica nanopores should not be diminished by adsorption and/or crowding. Further, we cannot rule out the possibility that the (effective) CA enzymatic-specific activity could be increased due to molecular crowding in the nanopores as experimentally observed for other confined enzymes28,29, or that the effective binding affinity could be enhanced (decreased Michaelis constant) due to excluded volume effects29,30. This may explain the generally higher levels of CO2 flux measured experimentally compared to values calculated assuming native enzymatic activities.
Separation processes in natural biological systems typically take place in an aqueous environment at ambient pressure driven by the prevailing chemical potential gradient. Oftentimes separations are aided by enzyme catalysis, and the thickness of biological membranes is normally on the nanometer scale. To implement these natural design strategies for CO2 capture and separation, we have fabricated an ultra-thin, enzymatic, nano-stabilized liquid membrane. By using nature's design principles of ultra-thin membranes and enzymatic aqueous media, we achieved a combination CO2 flux and CO2/N2 selectivity under ambient pressure and temperature conditions greatly exceeding that of conventional polymer or inorganic membranes. The membrane design employs a regular array of close-packed 8 nm diameter hydrophilic nanopores whose depth is only 18 nm to confine and stabilize water plus CA enzymes. The high density of CA-filled nanopores establishes an effective CA concentration ten times greater than possible in aqueous solution. At low pressure, the CA enzyme array catalyzes the rapid and selective capture of CO2 via dissolution to form carbonic acid H2CO3 on the upstream side and the conversion of bicarbonate (HCO3–) to CO2, which is released on the downstream side. The short water-filled channels minimize diffusional constraints. Altogether, this design maximizes the three steps governing the flux and the selectivity of a CO2 membrane: CO2 capture (step I), HCO3− transport (step II), and CO2 release (step III) and enables our enzymatic liquid membrane to exceed Department of Energy standards for CO2 sequestration technologies. Because selectivity is dependent on the exquisite catalytic activity of CA, it should effectively separate CO2 from any gas or gas mixture as we demonstrated for H2/CO2. By simple replacement of CA enzymes with alternate enzymes, we propose that our ultra-thin, enzymatic, nano-stabilized liquid membrane concept could be readily adapted to other separation processes.
Concerning stability, the membrane is predicted to be mechanically stable because the capillary pressure of water condensed within uniform hydrophilic 8 nm nanopores is ~35 atmospheres. This should prevent water displacement under operations like CO2 capture from flue gas, where the gas pressure is typically less than several atmospheres. The small uniform pore size also confers environmental stability. Based on the Kelvin equation, the membrane should remain water-filled if the RH is maintained above 75%, which is less than that of flue gas streams, which are typically oversaturated in water. For mammalian-derived CA enzymes, the optimal operation temperature would be ca. 37 °C (compatible with flue gas CO2 sequestration), but extremophile enzymes could enable higher temperature operation albeit with less efficiency. Finally, we should consider whether the membrane would be de-activated by impurities known to be present in flue gas. Here, Lu et al.31 employed CA to promote the adsorption of CO2 from a gas stream containing major flue gas impurities into a polycarbonate solution and concluded that the concentrations of up to 0.9 mol l−1 SO2−, 0.2 mol l−1 NO3−, and 0.7 mol l−1 Cl−, (which exceed the concentrations of typical flue impurities) did not influence the kinetics of absorption from a CA-loaded potassium carbonate solution. Taken together these considerations suggest that the enzymatic liquid membrane is stable enough for use in CO2 capture from flue gas. Additionally, based on its low pressure/temperature performance, it could be considered for other applications like CO2 sequestration in manned space flights.
Concerning cost and scaleability, the unit operations of our membrane synthesis, viz, EISA of ordered mesoporous silica films via dip-coating or spin-coating32,33,34,35, ALD23,36,37, and plasma processing23,36 are all scaleable and used today in the microelectronics industry and in roll-to-roll printing operations (see for example ref. 38). For demonstration purposes and to compare with other reported membranes, we used a costly commercial 25 mm diameter anodic alumina substrate (Anodisc) for our support. Further, to rigorously control chemistry, we employed calcination to remove surfactant templates and multiple steps of ALD to modify (hydrophobize) the mesoporous silica pore surfaces. To achieve scaleability and reduce costs, the Anodisc could be replaced with tubular alumina supports as employed previously by us for microporous silica membranes39,40, and by Korelskiy et al. for zeolite membranes10. Here it is noteworthy that based on their high flux and selectivity, modules of zeolite membranes prepared on tubular alumina supports were found to be 33% cheaper than a commercial spiral-wound polymer membrane unit for separation of 300 tons of CO2 per day during operation at 10 bars and room temperature. Our membranes have ten times lower flux, but ten times greater selectivity and operate at atmospheric pressure, so similar cost reductions might be expected. However, by replacing calcination with oxygen plasma treatment22, and ALD with CVD or other large-scale vapor phase methods, it is conceivable that enzymatic liquid membranes could be processed on low cost hollow fiber polymer supports, which would dramatically reduce the cost of CO2 capture technologies.
The membranes were fabricated on Whatman© Anodisc porous anodic alumina disc supports purchased from Whatman International Ltd. Bovine CA enzymes were purchased from Sigma-Aldrich, St Louis MO and the Desulfovibrio vulgaris CA enzyme was provided from Codexis, Inc.
Fabrication of ultra-thin enzymatic liquid membranes
The Whatman© Anodisc porous support is 50-µm-thick and is composed of oriented asymmetric vertical channels that are perpendicular to the disc surface. The channel diameters taper from 200 nm in diameter on the bottom surface to 50–100 nm in diameter on the top surface (see Fig. 3). The support was treated with UV/ozone to fully hydroxylate the alumina surface and insure wetting and covalent bonding with the 'sol-gel derived' silica mesophase (vide infra). In order to fabricate oriented 8 nm diameter cylindrical pores within the channels of the Anodisc, we prepared a Pluronic P123 block copolymer containing silica sol following our reported procedure20,21. The sol was applied to the support by spin-coating at 3000 rpm where capillary action followed by EISA20,21 resulted in the formation of a hexagonal silica/P123 mesophase oriented within the Anodisc pore channel. After two successive spin-coating depositions, the samples were aged at 50 °C for 12 h. To remove the P123 pore template, the samples were calcined at 400 °C for 2 h using a heating rate of 1 °C min–1. This resulted in 8 nm diameter cylindrical nanopores aligned within the 50–100 nm pores of the Anodisc as shown in Fig. 3.
To enable the formation of an ultra-thin, stabilized liquid membrane, we first exposed the Anodisc to ozone irradiation to maximize the coverage of hydroxyl groups on all the nanopore surfaces. This was followed by three cycles of alternating HMDS + TMCS/H2O vapor exposure at 180 °C in an Angstrom-depTM ALD system to convert the hydrophilic surface hydroxyl groups to hydrophobic trimethylsilyl groups. Following that, the hydrophobic porous support was placed into the plasma chamber of an Angstrom-depTM III plasma-ALD system, and the top surface was irradiated by an oxygen plasma for 5 s, converting only an 18-nm-deep thickness of the hydrophobic nanopores to hydrophilic hydroxyl terminated silica nanopores. In order to load CA enzymes into the nanopore channels, the membrane was 'floated' hydrophilic face down on a 0.05 mM CA solution and bath sonicated gently for 10 min. Then the samples were removed from the solution, inverted, and maintained in a horizontal configuration on a clean surface until all excess water on the membrane evaporated.
Structural and physical characterization
Focused ion beam and scanning electron microscopy (FIB/SEM) experiments were carried out on a FEI Q3D dual beam FIB/SEM system, with 30 kV/3 nA initial voltage/current followed by 8 kV/25 pA final polishing voltage/current for ion beam mode, and 5 kV/24 pA for scanning electron microscopy mode. Transmission electron microscopy images were acquired using a JEOL2010F HRTEM, and Ti-mapping was acquired using the same TEM with a Gatan EELS system. GISAXS was performed using a Bruker Nanostar on samples prepared on Anodisc substrates fabricated as indicated above or on Si substrates prepared as described for Fourier-transform infrared analysis (vide infra). Quartz crystal microbalance analyses were performed using a QCM200-5MHz QCM manufactured by Stanford Research Systems. A home-built, air-tight environmental chamber equipped with gas flow controllers to perform the H2O isotherms. Fourier-transform infrared spectroscopy was performed using a Thermo Scientific Nicolet 6700 Fourier-transform infrared spectrometer. A P123-templated silica film was deposited onto intrinsic, IR transparent single crystal Si substrates (400-µm-thick, double-polished) by spin-coating; this film was then processed in an identical manner as the Anodisc supported P123-templated film described above and loaded with CA.
CO2 separation performance measurement
CO2 permeance and CO2/N2 or CO2/H2 selectivity measurements were performed using a home-made test cell designed to accommodate a 25 mm diameter sample and to be immerged into a water bath for needed temperature control. The feed gas was first introduced through a water bubbler heated at 90 °C to achieve a saturated humidity. In the permeance vs. temperature and pH measurements (Fig. 7a, b), the feed gas was compressed pure CO2 with a relative pressure of 36 cm Hg or 0.48 bar. Control experiments of CO2 or Ar permeance were performed using liquid membranes prepared without CA, and CO2 and Ar were found to be undetectable using a bubble flowmeter. For the CO2/H2 separation procedure (See Supplementary Fig. 5), gas membranes were delivered in a sealed stainless-steel vessel, and used as is without further modification. A cross-flow configuration was used for H2 permeation measurements. Feed gas composition was fixed at 43% H2 and 57% CO2. The quantity of gas permeating across the membrane was calculated by the difference in gas flow at the inlet vs. the exhaust, with a typical cross-flow rate of 0.21 ccm. Gas permeated across the membrane was then carried by an Ar gas (8.01 ccm) into a calibrated Inficon 3000 Micro GC gas analyzer for quantitative measurement discrimination.
Molecular dynamics simulations
The simulations were performed with the GROMACS software package27. The CHARMM36 force field41,42 was used to model the bovine CA enzyme (Protein Data Bank accession number 1V9E43) under different conditions relevant to CO2 separation, including interaction with silica nanopores. The silica nanopore atoms were modeled with the CHARMM36-compatible INTERFACE force field44,45. Protonation states of amino acids of the CA enzyme were selected according to the results of PROPKA analysis at pH 746. A rectangular silica nanopore was built based on the structure of the alpha-cristobalite unit cell. The pore's outer dimensions are 11 × 12 × 10 nm and its internal dimensions are 8 × 8 × 10 nm. The average surface silanol density of the pore is 5.9 Si–OH nm–2, which provides a reasonable model of the amorphous silica surface used in the experimental membranes47,48. A percentage (16.5%) of the surface silanols were ionized to match the pH 7 conditions. Sodium (Na+) ions were added to counter the negative charge of the ionized silanol groups. No additional salt molecules, either Na+ or Cl−, were added to the simulation, except to produce an overall neutral charge simulation system. A vertical water-filled space exists between periodic images of the simulation cell of height 6 nm, giving the CA enzyme the ability to exit the nanopore. Three CA-nanopore systems were simulated with one, two, and four enzymes within the pore to observe possible crowding effects. A free CA enzyme in solution was also simulated for reference. All systems were simulated at room temperature (298 K) for 300 ns using a Nose–Hoover thermostat. The simulation volume for pore systems was adjusted during the early stages of the simulation to obtain an average pressure of 1 atm, and subsequently simulated at constant volume. The free CA enzyme was simulated at constant 1 atm pressure using a Parrinello–Rahman barostat.
All relevant data are available from the authors on request.
The original version of this Article contained an error in the spelling of the author Stanley S. Chou, which was incorrectly given as Stan Chou. This has now been corrected in both the PDF and HTML versions of the Article.
Forster, P. et al. In Climate Change 2007: The Physical Science Basis (eds Solomon, S. et al.) Ch. 2 (Cambridge University Press, Cambridge, 2007).
International Energy Agency. CO 2 Emissions from Fuel Combustion (International Energy Agency, Paris, 2016).
World Meteorological Organization. WMO Greenhouse Gas Bulletin: The State of Greenhouse Gases in the Atmosphere Based on Global Observations Through 2013. Bulletin no:10 (World Meteorological Organization, Geneva, 2014).
Aaron, D. & Tsouris, C. Separation of CO2 from flue gas: a review. Sep. Sci. Technol. 40, 321–348 (2005).
Rao, A. B. & Rubin, E. S. A technical, economic, and environmental assessment of amine-based CO2 capture technology for power plant greenhouse gas control. Environ. Sci. Technol. 36, 4467–4475 (2002).
ADS Article PubMed CAS Google Scholar
Khalilpour, R. et al. Membrane-based carbon capture from flue gas: a review. J. Clean. Prod. 103, 286–300 (2015).
Kentish, S. E., Scholes, C. A. & Stevens, G. W. Carbon dioxide separation through polymeric membrane systems for flue gas applications. Recent Pat. Chem. Eng. 1, 52–66 (2008).
Chow, J. C. et al. Separation and capture of CO2 from large stationary sources and sequestration in geological formations. J. Air Waste Manag. Assoc. 53, 1172–1182 (2003).
Korelskiy, D. et al. Efficient ceramic zeolite membranes for CO2/H2 separation. J. Mater. Chem. A 3, 12500–12506 (2015).
Zhou, M., Korelskiy, D., Ye, P., Grahn, M. & Hedlund, J. A uniformly oriented MFI membrane for improved CO2 separation. Angew. Chem. Int. Ed. 53, 3492–3495 (2014).
Wang, S. et al. Advances in high permeability polymer-based membrane materials for CO2 separations. Ener. Environ. Sci. 9, 1863–1890 (2016).
Kenarsari, S. D. et al. Review of recent advances in carbon dioxide separation and capture. RSC Adv. 3, 22739–22773 (2013).
Yao, K., Wang, Z., Wang, J. & Wang, S. Biomimetic material –poly(N-vinylimidazole)–zinc complex for CO2 separation. Chem. Commun. 48, 1766–1768 (2012).
Bao, L. & Trachtenberg, M. C. Facilitated transport of CO2 across a liquid membrane: comparing enzyme, amine, and alkaline. J. Membr. Sci. 280, 330–334 (2006).
Ward, W. J. & Robb, W. L. Carbon dioxide-oxygen separation: facilitated transport of carbon dioxide across a liquid film. Science 156, 1481–1484 (1967).
Trachtenberg, M. C., Cowan, R. M., Smith, D. A. & Sider, I. L. Development of Biomimetic Membranes for Near Zero PC Power Plant Emissions. DOE Project DE‐ FC26‐07NT43084 (Carbozyme, Inc., New Jersey, 2011).
Alivisatos, P. et al. Basic Research Needs For Carbon Capture: Beyond 2020. Report of the Basic Energy Sciences Workshop for Carbon Capture: Beyond 2020. https://science.energy.gov/~/media/bes/pdf/reports/files/Basic_Research_Needs_for_Carbon_Capture_rpt.pdf (2010).
Toy, L., Kataria, A. & Gupta, R. CO 2 Cap ture Membrane Process For Power Plant Flue Gas (RTI International, North Carolina, 2012).
Croissant, J. G., Fatieiev, Y., Almalik, A. & Khashab, N. M. Mesoporous silica and organosilica nanoparticles: physical chemistry, biosafety, delivery strategies, and biomedical applications. Adv. Healthcare Mater. 7, 1700831 (2017).
Brinker, C. J., Lu, Y., Sellinger, A. & Fan, H. Evaporation-induced self-assembly: nanostructures made easy. Adv. Mater. 11, 579–585 (1999).
Pang, J. et al. Directed aerosol writing of ordered silica nanostructures on arbitrary surfaces with self‐assembling inks. Small 4, 982–989 (2008).
Clark, T. et al. A new application of UV−ozone treatment in the preparation of substrate-supported, mesoporous thin films. Chem. Mater. 12, 3879–3884 (2000).
Jiang, Y.-B. et al. Sub-10 nm thick microporous membranes made by plasma-defined atomic layer deposition of a bridged silsesquioxane precursor. J. Am. Chem. Soc. 129, 15446–15447 (2007).
Wang, J. et al. Superhydrophilic antireflective periodic mesoporous organosilica coating on flexible polyimide substrate with strong abrasion-resistance. ACS Appl. Mater. Interfaces 9, 5468–5476 (2017).
Singh, S., Houston, J., van Swol, F. & Brinker, C. J. Superhydrophobicity: drying transition of confined water. Nature 442, 526–526 (2006).
Hooks, D. O. & Rehm, B. H. Surface display of highly-stable Desulfovibrio vulgaris carbonic anhydrase on polyester beads for CO2 capture. Biotechnol. Lett. 37, 1415–1420 (2015).
Trachtenberg, M. C. Enzyme Facilitated Carbon Dioxide Capture. In Carbon Capture and Storage: Technology Innovation and Market Viability, February 23 (Agrion, 2011).
Emami, F. S. et al. Prediction of specific biomolecule adsorption on silica surfaces as a function of pH and particle size. Chem. Mater. 26, 5725–5734 (2014).
Patwardhan, S. V. et al. Chemistry of aqueous silica nanoparticle surfaces and the mechanism of selective peptide adsorption. J. Am. Chem. Soc. 134, 6244–6256 (2012).
Lei, C., Shin, Y., Liu, J. & Ackerman, E. J. Entrapping enzyme in a functionalized nanoporous support. J. Am. Chem. Soc. 124, 11242–11243 (2002).
Lu, Y., Ye, X., Zhang, Z., Khodayari, A. & Djukadi, T. Development of a carbonate absorption-based process for post-combustion CO2 capture: the role of biocatalyst to promote CO2 absorption rate. Energy Procedia 4, 1286–1293 (2011).
Lu, Y. et al. Continuous formation of supported cubic and hexagonal mesoporous films by sol–gel dip-coating. Nature 389, 364–368 (1997).
ADS Article CAS Google Scholar
Fan, H. et al. Rapid prototyping of patterned functional nanostructures. Nature 405, 56–60 (2000).
Doshi, D. A. et al. Optically defined multifunctional patterning of photosensitive thin-film silica mesophases. Science 290, 107–111 (2000).
ADS Article PubMed CAS PubMed Central Google Scholar
Lu, Y. et al. Evaporation-induced self-assembly of hybrid bridged silsesquioxane film and particulate mesophases with integral organic functionality. J. Am. Chem. Soc. 122, 5258–5261 (2000).
Jiang, Y.-B., Liu, N., Gerung, H., Cecchi, J. L. & Brinker, C. J. Nanometer-thick conformal pore sealing of self-assembled mesoporous silica by plasma-assisted atomic layer deposition. J. Am. Chem. Soc. 128, 11018–11019 (2006).
Fu, Y. et al. Atomic layer deposition of l-alanine polypeptide. J. Am. Chem. Soc. 136, 15821–15824 (2014).
Qiang, Z. et al. Large-scale roll-to-roll fabrication of ordered mesoporous materials using resol-assisted cooperative assembly. ACS Appl. Mater. Interfaces 7, 4306–4310 (2015).
Xomeritakis, G., Tsai, C., Jiang, Y. & Brinker, C. Tubular ceramic-supported sol–gel silica-based membranes for flue gas carbon dioxide capture and sequestration. J. Membr. Sci. 341, 30–36 (2009).
Xomeritakis, G. et al. Aerosol-assisted deposition of surfactant-templated mesoporous silica membranes on porous ceramic supports. Microporous Mesoporous Mater. 66, 91–101 (2003).
Best, R. B. et al. Optimization of the additive CHARMM all-atom protein force field targeting improved sampling of the backbone ϕ, ψ and side-chain χ1 and χ2 dihedral angles. J. Chem. Theory Comput. 8, 3257–3273 (2012).
MacKerell, A. D. Jr et al. All-atom empirical potential for molecular modeling and dynamics studies of proteins. J. Phys. Chem. B. 102, 3586–3616 (1998).
Saito, R., Sato, T., Ikai, A. & Tanaka, N. Structure of bovine carbonic anhydrase II at 1.95 Å resolution. Acta Crystallogr. Sect. D. Biol. Crystallogr. 60, 792–795 (2004).
Heinz, H., Lin, T.-J., Kishore Mishra, R. & Emami, F. S. Thermodynamically consistent force fields for the assembly of inorganic, organic, and biological nanostructures: the INTERFACE force field. Langmuir 29, 1754–1765 (2013).
Emami, F. S. et al. Force field and a surface model database for silica to simulate interfacial properties in atomic resolution. Chem. Mater. 26, 2647–2658 (2014).
Søndergaard, C. R., Olsson, M. H., Rostkowski, M. & Jensen, J. H. Improved treatment of ligands and coupling effects in empirical Discussion and rationalization of pKa values. J. Chem. Theory Comput. 7, 2284–2295 (2011).
Tsige, M. et al. Interactions and structure of poly(dimethylsiloxane) at silicon dioxide surfaces: electronic structure and molecular dynamics studies. J. Chem. Phys. 118, 5132–5142 (2003).
Lorenz, C. D. et al. Simulation study of the silicon oxide and water interface. J. Comput. Theor. Nanosci. 7, 2586–2601 (2010).
Merkel, T. et al. Membrane Process To Capture CO 2 From Coal-Fired Power Plant Flue Gas (National Energy Technology Laboratory, 2009).
Casillas, C. et al. Pilot testing of a membrane system for post-combustion CO2 capture DE-FE0005795. In Proc. 2015 NETL CO 2 Capture Technology Meeting (National Energy Technology Laboratory, Pittsburgh, 2015).
Hasse, D., Kulkarni, S., Sanders, E., Corson, E. & Tranier, J.-P. CO 2 capture by sub-ambient membrane operation. Energy Procedia 37, 993–1003 (2013).
Baker, R. W. & Lokhandwala, K. Natural gas processing with membranes: an overview. Ind. Eng. Chem. Res. 47, 2109–2121 (2008).
Vora, S. D. DOE/NETL Advanced CO 2 Capture R&D Program: Technology Update (National Energy Technology Laboratory, Pittsburgh, 2013).
C.J.B., S.B.R., J.M.V., E.C., and S.C. acknowledge support by the Sandia National Laboratory Laboratory-Directed Research and Development Program. C.J.B., Y.-B.J., and Y.F. acknowledge support from the US Department of Energy, Office of Science, Division of Catalysis Science under Grant No. DE-FG02-02ER15368 and the Air Force Office of Scientific Research under Grant No. FA9550-14-1-0066. C.J.B. also acknowledges support from the Department of Energy Office of Science, Division of Materials Science and Engineering. This work was supported, in part, by the National Science Foundation under Cooperative Agreement No. EEC-1647722. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF. This work was performed, in part, at the Center for Integrated Nanotechnologies (CINT), an Office of Science User Facility operated for the U.S. DOE's Office of Science by Los Alamos National Laboratory (Contract DE-AC52 06NA25296). Sandia National Laboratories (SNL) is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honey-well International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.
Department of Chemical and Biological Engineering, University of New Mexico, Albuquerque, NM, 87131, USA
Yaqin Fu, Ying-Bing Jiang, Darren Dunphy, Haifeng Xiong, Jonas G. Croissant, Joseph L. Cecchi & C. Jeffrey Brinker
Center for Micro-Engineered Materials, University of New Mexico, Albuquerque, NM, 87131, USA
Yaqin Fu, Ying-Bing Jiang, Darren Dunphy, Haifeng Xiong, Jonas G. Croissant & C. Jeffrey Brinker
Department of Earth and Planetary Sciences, University of New Mexico, Albuquerque, NM, 87131, USA
Ying-Bing Jiang
Sandia National Laboratories, Albuquerque, NM, 87185, USA
Eric Coker, Stanley S. Chou, Juan M. Vanegas, Susan B. Rempe & C. Jeffrey Brinker
Angstrom Thin Film Technologies LLC, Albuquerque, NM, 87113, USA
Hongxia Zhang
Department of Physics, University of Vermont, Burlington, VT, 05405, USA
Juan M. Vanegas
Yaqin Fu
Darren Dunphy
Haifeng Xiong
Eric Coker
Stanley S. Chou
Jonas G. Croissant
Joseph L. Cecchi
Susan B. Rempe
C. Jeffrey Brinker
Y.-B.J., S.B.R., J.L.C., and C.J.B. invented the enzymatic liquid membrane concept. Y.F. fabricated the membrane in the laboratories of C.J.B. and Y.-B.J. with the help of H.X.Z. who performed the atomic layer deposition. Y.F., S.C., and Y.-B.J. designed and executed the gas permeation experiments. D.D., H.F.X., and E.C. performed physicochemical characterization studies. J.M.V. designed and executed the modeling study in collaboration with S.B.R. C.J.B. wrote the manuscript, with contributions from S.B.R.; J.G.C. and C.J.B. revised the manuscript.
Correspondence to Ying-Bing Jiang or C. Jeffrey Brinker.
The authors declare no competing interests.
Peer Review File
Fu, Y., Jiang, YB., Dunphy, D. et al. Ultra-thin enzymatic liquid membrane for CO2 separation and capture. Nat Commun 9, 990 (2018). https://doi.org/10.1038/s41467-018-03285-x
DOI: https://doi.org/10.1038/s41467-018-03285-x
Hydration Mimicry by Membrane Ion Channels
Mangesh I. Chaudhari
, Juan M. Vanegas
, L.R. Pratt
, Ajay Muralidharan
& Susan B. Rempe
Annual Review of Physical Chemistry (2020)
Liquid-based porous membranes
Zhizhi Sheng
, Jian Zhang
, Jing Liu
, Yunmao Zhang
, Xinyu Chen
& Xu Hou
Chemical Society Reviews (2020)
Biocatalytic membrane: Go far beyond enzyme immobilization
Jianquan Luo
, Siqing Song
, Hao Zhang
, Huiru Zhang
, Jinxuan Zhang
& Yinhua Wan
Engineering in Life Sciences (2020)
Electrostatic Interactions between Acid-/Base-Containing Polymer Nanoparticles and Proteins: Impact of Polymerization pH
Ryutaro Honda
, Tomohiro Gyobu
, Hideto Shimahara
, Yoshiko Miura
& Yu Hoshino
ACS Applied Bio Materials (2020)
Organic–inorganic hybrids for CO2 sensing, separation and conversion
Matthias Rebber
, Christoph Willa
& Dorota Koziej
Nanoscale Horizons (2020)
Nature Energy | News & Views
Embedded enzymes catalyse capture
Sandra Kentish
Editors' Highlights
Top Articles of 2019
Nature Communications ISSN 2041-1723 (online)
|
CommonCrawl
|
Items where Year is 2009
Group by: Authors | Type | No Grouping
Jump to: A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | R | S | T | U | V | W | X | Y | Z | Č
Number of items: 540.
Abbott, S. (2009). Social capital and health: the problematic roles of social networks and social surveys. Health Sociology Review, 18(3), pp. 297-306.
Akerman, S., Reyes-Aldasoro, C. C., Fisher, M. view all authors, Pettyjohn, K. L., Björndahl, M. A., Evans, H. & Tozer, G. M. (2009). Microflow of fluorescently labelled red blood cells in tumours expressing single isoforms of VEGF and their response to VEGF-R tyrosine kinase inhibition. Paper presented at the 2nd Micro and Nano Flows Conference (MNF2009), 01-09-2009 - 02-09-2009, London, Uk.
Al-Mohammed, H.I (2009). Development of a Breathing Monitor and Training System, And the Analysis of Methods of Training Patients to Regulate their Breathing when Undergoing Radiotherapy for Lung cancer. (Unpublished Doctoral thesis, City, University of London)
Alberdi, E., Strigini, L., Leach, K. view all authors, Ryan, P., Palanque, P. & Winckler, M. (2009). Gaining assurance in a voter-verifiable voting system. Paper presented at the 2009 Second International Conference on Dependability, 18 - 23 Jun 2009, Athens, Greece.
Alberdi, E., Strigini, L., Povyakalo, A. A. view all authors & Ayton, P. (2009). Why Are People's Decisions Sometimes Worse with Computer Support?. COMPUTER SAFETY, RELIABILITY, AND SECURITY, PROCEEDINGS, 5775, pp. 18-31. doi: 10.1007/978-3-642-04468-7_3
Aldrovandi, Silvio (2009). Memory and Judgment Bias in Retrospective Evaluations. (Unpublished Doctoral thesis, City University London)
Alevizos, C. (2009). SYMEX: A Systems Theory based Framework for Workflow Modelling and Execution. (Unpublished Doctoral thesis, City, University of London)
Allbon, E. (2009). Innovative involvement not embarrassing intervention: Using technology to connect with students without treading on virtual toes. Legal Information Management, 9(4), pp. 240-245. doi: 10.1017/S1472669609990478
Allefeld, C. ORCID: 0000-0002-1037-2735, Atmanspacher, H. & Wackermann, J. (2009). Mental states as macrostates emerging from brain electrical dynamics. Chaos, 19(1), 015102. doi: 10.1063/1.3072788
Anderson, J., Simpson, A., Essen, C. view all authors, Clark, M., Cook, J., Edwards, L., Fox, L., Light, I., MacMahon, A., Malihi-Shoja, L., Patel, R., Samociuk, S., Tang, L. & Westerby, N. (2009). Involving Service Users and Carers in Education: The Development Worker Role: Guidelines for Higher Education Institutes. Mental Health in Higher Education.
Andriotis, Adamantios (2009). Investigation of Cavitation inside Multi-hole Injectors for large Diesel Engines and its Effect on the Near-nozzle Spray Structure. (Unpublished Doctoral thesis, City University London)
Antai, D. (2009). Inequitable childhood immunization uptake in Nigeria: a multilevel analysis of individual and contextual determinants. BMC Infectious Diseases, 9(181), doi: 10.1186/1471-2334-9-181
Assis, P. E. G. & Fring, A. (2009). From real fields to complex Calogero particles. Journal of Physics A: Mathematical and General, 42(42), doi: 10.1088/1751-8113/42/42/425206
Assis, P. E. G. & Fring, A. (2009). Integrable models from PT-symmetric deformations. Journal of Physics A: Mathematical and Theoretical, 42(10), doi: 10.1088/1751-8113/42/10/105206
Ayers, S. (2009). Posttraumatic stress disorder after childbirth: Analysis of symptom presentation and sampling. Journal of Affective Disorders, 119, pp. 200-204. doi: 10.1016/j.jad.2009.02.029
Ayers, S., Copland, C. & Dunmore, E. (2009). A preliminary study of negative appraisals and dysfunctional coping strategies associated with post-traumatic stress disorder symptoms following myocardial infarction. British Journal of Health Psychology, 14(3), pp. 459-471. doi: 10.1348/135910708X349343
Ayers, S. & Ford, E. (2009). Birth trauma: Widening our knowledge of postnatal mental health. European Health Psychologist, 11(2), pp. 16-19.
Ayers, S., Ford, E. & Alder, B. (2009). Reproductive issues. In: Alder, B., Abraham, C. & Van Teijlingen, E. (Eds.), Psychology and Sociology Applied to Medicine: An Illustrated Colour Text. (pp. 6-7). UK: Elsevier Health Sciences.
Bagchi, B. & Fring, A. (2009). Comment on "Non-Hermitian Quantum Mechanics with Minimal Length Uncertainty". Symmetry, Integrability and Geometry: Methods and Applications (SIGMA), 5(089), doi: 10.3842/SIGMA.2009.089
Bagchi, B. & Fring, A. (2009). Minimal length in quantum mechanics and non-Hermitian Hamiltonian systems. Physics Letters A, 373(47), pp. 4307-4310. doi: 10.1016/j.physleta.2009.09.054
Ballotta, L. (2009). Pricing and capital requirements for with profit contracts: modelling considerations. Quantitative Finance, 9(7), pp. 803-817. doi: 10.1080/14697680802452068
Ballotta, L. & Haberman, S. (2009). Investment Strategies and Risk Management for Participating Life Insurance Contracts. London: SSRN.
Banal-Estanol, A. & Rupérez Micola, A. (2009). Composition of Electricity Generation Portfolios, Pivotal Dynamics, and Market Prices. Management Science, 55(11), pp. 1813-1831. doi: 10.1287/mnsc.1090.1067
Banerjee, R. & Ayers, S. (2009). Development in Early Infancy. (3 ed.) In: Alder, B, Abraham, C & Teijlingen, EV (Eds.), Psychology and Sociology Applied to Medicine: An Illustrated Colour Text. (pp. 8-9). UK: Elsevier Health Sciences.
Barelli, M. (2009). The Role of Soft Law in the International Legal System: the case of the United Nations Declaration on the Rights of Indigenous Peoples. International and Comparative Law Quarterly, 58(4), pp. 957-983. doi: 10.1017/S0020589309001559
Baronchelli, A., Barrat, A. & Pastor-Satorras, R. (2009). Glass transition and random walks on complex energy landscapes. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics, 80(2), doi: 10.1103/PhysRevE.80.020102
Baronchelli, A. & Pastor-Satorras, R. (2009). Effects of mobility on ordering dynamics. Journal of Statistical Mechanics: Theory and Experiment, 2009(11), doi: 10.1088/1742-5468/2009/11/L11001
Basdekis, I., Karampelas, P., Doulgeraki, V. view all authors & Stephanidis, C. (2009). Designing Universally Accessible Networking Services for a Mobile Personal Assistant. Lecture Notes in Computer Science, 5615, pp. 279-288. doi: 10.1007/978-3-642-02710-9_31
Bawden, D. (2009). Darwin, Hooker and the documentation of Victorian science. Journal of Documentation, 65(3), pp. 337-338.
Bawden, D. (2009). Documentation in depressed times. Journal of Documentation, 65(1), p. 5.
Bawden, D. (2009). Everyday practices of documentation, and the influence of information science. Journal of Documentation, 65(5), pp. 717-718.
Bawden, D. (2009). Naming of parts (and things). Journal of Documentation, 65(6), pp. 869-870.
Bawden, D. (2009). Sharing knowledge and information, three views of the future. Paper presented at the Inforum 2009, May 2009, Prague, Czech Republic.
Bawden, D. (2009). The end of expertise?. Journal of Documentation, 65(2), pp. 185-186.
Bawden, D. (2009). An obsession with its own future? The library, the web and the phonographotek. Journal of Documentation, 65(4), pp. 537-538.
Bawden, D. & Robinson, L. (2009). The dark side of information: overload, anxiety and other paradoxes and pathologies. Journal of Information Science, 35(2), pp. 180-191. doi: 10.1177/0165551508095781
Beber, A., Brandt, M. W. & Kavajecz, K. A. (2009). Flight-to-Quality or Flight-to-Liquidity? Evidence from the Euro-Area Bond Market. Review of Financial Studies, 22(3), pp. 925-957. doi: 10.1093/rfs/hhm088
Beccaria, M. & Forini, V. ORCID: 0000-0001-9726-1423 (2009). Four loop reciprocity of twist two operators in N=4 SYM. Journal of High Energy Physics, 2009, 111.. doi: 10.1088/1126-6708/2009/03/111
Beccaria, M. & Forini, V. ORCID: 0000-0001-9726-1423 (2009). Qcd-like properties of anomalous dimensions in the N=4 supersymmetric Yang-Mills theory. Theoretical and Mathematical Physics, 159(3), doi: 10.1007/s11232-009-0059-6
Beccaria, M., Forini, V. ORCID: 0000-0001-9726-1423, Lukowski, T. view all authors & Zieme, S. (2009). Twist-three at five loops, Bethe ansatz and wrapping. Journal of High Energy Physics, 2009, 129.. doi: 10.1088/1126-6708/2009/03/129
Beccaria, M., Forini, V. ORCID: 0000-0001-9726-1423, Tirziu, A. view all authors & Tseytlin, A. A. (2009). Structure of large spin expansion of anomalous dimensions at strong coupling. Nuclear Physics B, 812(1-2), pp. 144-180. doi: 10.1016/j.nuclphysb.2008.12.013
Begaj Qerimi, L. (2009). Geotechnical centrifuge model testing for pile foundation re-use. (Unpublished Doctoral thesis, City, University of London)
Ben-Gad, M. (2009). Economic 'impact' is a poor basis for funding decisions. Research Fortnight,
Ben-Gad, M. (2009). The two sector endogenous growth model: an atlas (09/02). London, UK: Department of Economics, City University London. doi: 09/02
Benetos, E., Holzapfel, A. & Stylianou, Y. (2009). Pitched Instrument Onset Detection based on Auditory Spectra. Paper presented at the 10th International Society for Music Information Retrieval Conference, ISMIR 2009, 26 - 30 Oct 2009, Kobe, Japan.
Benton, A. L. ORCID: 0000-0002-2685-4114 (2009). On the Ground: Candidate Appearances and Events during the 2006 Mexican Presidential Campaign. Politica y Gobierno(2), pp. 135-176.
Bettingen, J-F. & Luedicke, M. K. (2009). Can brands make us happy? A research framework for the study of brands and their effects on happiness. Advances in Consumer Research, 36, pp. 308-315.
Bhalla, A., Lampel, J., Henderson, S. view all authors & Watkins, D. (2009). Exploring alternative strategic management paradigms in high-growth ethnic and non-ethnic family firms. Small Business Economics, 32(1), pp. 77-94. doi: 10.1007/s11187-007-9064-z
Biais, B. & Mariotti, T. (2009). Credit, wages, and bankruptcy laws. Journal of the European Economic Association, 7(5), pp. 939-973. doi: 10.1162/JEEA.2009.7.5.939
Bilotta, E. & Stallebrass, S. E. (2009). Prediction of stresses and strains around model tunnels with adjacent embedded walls in overconsolidated clay. Computers and Geotechnics, 36(6), pp. 1049-1057. doi: 10.1016/j.compgeo.2009.03.015
Binnersley, J., Woodcock, A., Kyriacou, P. A. view all authors & Wallace, L. M. (2009). Establishing user requirements for a patient held electronic record system in the United Kingdom. Human Factors and Ergonomics Society, 53rd Annual Meeting, 2, pp. 714-717. ISSN 1071-1813
Bird, H., Boykoff, M., Goodman, M. K. view all authors, Monbiot, G. & Littler, J. (2009). The media and climate change. Soundings: A Journal of Politics and Culture, 43, pp. 47-64. doi: 10.3898/136266209790424595
Blasco, J., Hernandez-Castro, J. C., Tapiador, J. M. E. view all authors, Ribagorda, A. & Orellana-Quiros, M. A. (2009). Steganalysis of Hydan. IFIP Advances in Information and Communication Technology, 297, pp. 132-142. doi: 10.1007/978-3-642-01244-0_12
Bleisch, S., Dykes, J. & Nebiker, S. (2009). Building bridges between methodological approaches: a meta-framework linking experiments and applied studies in 3D geovisualization research. Paper presented at the GIS Research UK 17th Annual Conference (GISRUK 2009), 1 - 3 Apr 2009, University of Durham, Durham, UK.
Bloomfield, R. E., Chozos, N. & Salako, K. ORCID: 0000-0003-0394-7833 (2009). Current capabilities, requirements and a proposed strategy for interdependency analysis in the UK. Paper presented at the 4th International Workshop, CRITIS 2009, 30 September - 2 October 2009, Bonn, Germany. doi: 10.1007/978-3-642-14379-3_16
Bogle, V. (2009). Randomised controlled trial to test the efficacy of motivational interviewing and implementation intentions for a physical activity referral scheme. (Unpublished Doctoral thesis, City University)
Bogosian, A., Moss-Morris, R., Yardley, L. view all authors & Dennison, L. (2009). Experiences of partners of people in the early stages of multiple sclerosis. Multiple Sclerosis, 15(7), pp. 876-884. doi: 10.1177/1352458508100048
Bowers, L., Allan, T., Simpson, A. view all authors, Jones, J. & Whittington, R. (2009). Morale is high in acute inpatient psychiatry. Social Psychiatry and Psychiatric Epidemiology, 44(1), pp. 39-46. doi: 10.1007/s00127-008-0396-z
Bowers, L., Allan, T., Simpson, A. view all authors, Jones, J., van der Merwe, M. & Jeffery, D. (2009). Identifying Key Factors Associated with Aggression on Acute Inpatient Psychiatric Wards. Issues in Mental Health Nursing, 30(4), pp. 260-271. doi: 10.1080/01612840802710829
Bowers, L., Jones, J. & Simpson, A. (2009). The demography of nurses and patients on acute psychiatric wards in England. Journal Of Clinical Nursing, 18(6), pp. 884-892. doi: 10.1111/j.1365-2702.2008.02362.x
Bowler, D. M., Gaigg, S. B. & Gardiner, J. M. (2009). Free Recall Learning of Hierarchically Organised Lists by Adults with Asperger's Syndrome: Additional Evidence for Diminished Relational Processing. Journal of Autism and Developmental Disorders, 39(4), pp. 589-595. doi: 10.1007/s10803-008-0659-2
Bowler, D. M., Limoges, E. & Mottron, L. (2009). Different Verbal Learning Strategies in Autism Spectrum Disorder: Evidence from the Rey Auditory Verbal Learning Test. Journal of Autism and Developmental Disorders, 39(6), pp. 910-915. doi: 10.1007/s10803-009-0697-4
Bowyer, S., Caraher, M., Eilbert, K. view all authors & Carr-Hill, R. (2009). Shopping for food: lessons from a London borough. British Food Journal, 111(4-5), pp. 452-474. doi: 10.1108/00070700910957294
Boyes, R., Slabaugh, G. G. & Beddoe, G. (2009). Fast pseudo-enhancement correction in CT colonography using linear shift-invariant filters. Paper presented at the 16th IEEE International Conference on Image Processing (ICIP), 2009., 07-11-2009 - 10-11-2009, Cairo, Eygpt. doi: 10.1109/ICIP.2009.5414016
Boyes, R., Slabaugh, G. G. & Beddoe, G. (2009). Fast pseudo-enhancement correction in CT colonography using linear shift-invariant filters. In: 16th IEEE International Conference on Image Processing (ICIP), 2009. (pp. 2509-2512). IEEE. doi: 10.1109/ICIP.2009.5414016
Bradley, S. & McAuliffe, E. (2009). Mid-level providers in emergency obstetric and newborn health care: factors affecting their performance and retention within the Malawian health system.. Human Resources for Health, 7, 14 - ?. doi: 10.1186/1478-4491-7-14
Brainerd, C. J., Reyna, V. F. & Howe, M. L. (2009). Trichotomous Processes in Early Memory Development, Aging, and Neurocognitive Impairment: A Unified Theory. Psychological Review, 116(4), pp. 783-832. doi: 10.1037/a0016963
Bratt, G. A., Edlund, M., Cullberg, M. view all authors, Hejdeman, B., Blaxhult, A. & Eriksson, L. E. (2009). Sexually transmitted infections (STI) in men who have sex with men (MSM). Open Infectious Diseases Journal, 3, pp. 118-127. doi: 10.2174/1874279301004010118
Brockbank, A. (2009). The role of reflective dialogue in transormational reflective learning. (Unpublished Doctoral thesis, City University)
Broom, M. (2009). Balancing risks and rewards: the logic of violence. Frontiers in Behavioral Neuroscience, 3(51), doi: 10.3389/neuro.08.051.2009
Broom, M., Kiss, I. Z. & Rafols, I. (2009). Can epidemic models describe the diffusion of research topics across disciplines?. Paper presented at the 12th International Conference of the International Society for Scientometrics and Informetrics, 14-07-2009 - 17-07-2009, Rio de Janeiro.
Broom, M., Luther, R. M. & Rychtar, J. (2009). A Hawk-Dove game in kleptoparasitic populations. Journal of Combinatorics, Information and System Sciences, 4, pp. 449-462.
Broom, M. & Rychtar, J. (2009). A game theoretical model of kleptoparasitism with incomplete information. Journal of Mathematical Biology, 59(5), pp. 631-649. doi: 10.1007/s00285-008-0247-2
Broom, M., Rychtar, J. & Stadler, B. (2009). Evolutionary Dynamics on Small-Order Graphs. Journal of Interdisciplinary Mathematics, 12, pp. 129-140.
Buffin, D. G. (2009). UK pesticides policy - a policy paradigm in transition?. (Unpublished Doctoral thesis, City University London)
Butcher, A. (2009). Men's constructions of their experiences of breaking up with women: A qualitative study.. (Unpublished Doctoral thesis, City, University of London)
Butt, Z. & Haberman, S. (2009). llc: a collection of R functions for fitting a class of Lee-Carter mortality models using iterative fitting algorithms (Actuarial Research Paper No. 190). London, UK: Faculty of Actuarial Science & Insurance, City University London. doi: Actuarial Research Paper No. 190
Camm, E.M (2009). Narcissistic Vulnerabilities Experienced within the Processes of Change and Development. (Unpublished Doctoral thesis, City, University of London)
Caraher, M., Crawley, H. & Lloyd, S. (2009). Nutrition policy across the UK: Briefing Paper. London: The Caroline Walker Trust.
Caraher, M., Lloyd, S. & Madelin, T. (2009). Cheap as Chicken: Fast Food Outlets in Tower Hamlets (2). London: Centre for Food Policy, City University. doi: 2
Caraher, M. & Wu, M. (2009). Evaluation of Good Food Training for London: Final Report December 2009. London: Centre for Food Policy School of Community and Health Sciences, City University.
Caraher, M., Wu, M. & Seeley, A. (2009). ACA chefs adopt a school: An evaluation (9781900804431). London: Centre for Food Policy, City University. doi: 9781900804431
Carlton, J., Radosavlijevic, D. & Whitworth, S. (2009). Rudder-Propeller-Hull Interaction: The Results of Some Recent Research, In-Service Problems and their Solutions. Paper presented at the First International Symposium on Marine Propulsors smp'09,, June 2009, Trondheim, Norway.
Castelló, X., Baronchelli, A. & Loreto, V. (2009). Consensus and ordering in language dynamics. European Physical Journal B (The), 71(4), pp. 557-564. doi: 10.1140/epjb/e2009-00284-2
Castro-Alvaredo, O. & Doyon, B. (2009). Bi-partite Entanglement Entropy in Massive QFT with a Boundary: the Ising Model. Journal of Statistical Physics, 134(1), pp. 105-145. doi: 10.1007/s10955-008-9664-2
Castro-Alvaredo, O. & Doyon, B. (2009). Bi-partite entanglement entropy in massive (1+1)-dimensional quantum field theories. Journal of Physics A: Mathematical and General, 42(50), doi: 10.1088/1751-8113/42/50/504006
Castro-Alvaredo, O. & Fring, A. (2009). A spin chain model with non-Hermitian interaction: the Ising quantum spin chain in an imaginary field. Journal of Physics A: Mathematical and Theoretical, 42(46), doi: 10.1088/1751-8113/42/46/465211
Chalaby, J. (2009). Broadcasting in a Post-National Environment: The Rise of Transnational TV Groups. Critical Studies in Television, 4(1), pp. 39-64. doi: 10.7227/CST.4.1.5
Charles, P. J., Howe, J. M. & King, A. (2009). Integer polyhedra for program analysis. Algorithmic Aspects in Information and Management, Lecture Notes in Computer Science, 5564, pp. 85-99. doi: 10.1007/978-3-642-02158-9_9
Chen, T. & Zeng, Y. (2009). Classification of traffic flows into QoS classes by unsupervised learning and KNN clustering. KSII Trans. on Internet and Information Systems, 3(2), pp. 134-146. doi: 10.3837/tiis.2009.02.001
Chen, Y. (2009). Essays on the Role of Informed Trading in Stock Markets. (Unpublished Doctoral thesis, City University London)
Chuang, J. & Lazarev, A. (2009). Abstract Hodge decomposition and minimal models for cyclic algebras. Letters in Mathematical Physics, 89(1), pp. 33-49. doi: 10.1007/s11005-009-0314-7
Clare, A., ap Gwilym, O., Seaton, J. view all authors & Thomas, S. (2009). Price and Momentum as Robust Tactical Approaches to Global Equity Investing. London: Cass Business School.
Cocks, N., Sautin, L., Kita, S. view all authors, Morgan, G. & Zlotowitz, S. (2009). Gesture and speech integration: an exploratory study of a man with aphasia. International Journal of Language and Communication Disorders, 44(5), pp. 795-804. doi: 10.1080/13682820802256965
Coe, R. (2009). Team Work and Conflict During Elective Procedures in English National Health Service Operating Theatres. (Unpublished Doctoral thesis, City, University of London)
Collantes-Celador, G. (2009). Becoming 'European' through Police Reform: a Successful Strategy in Bosnia and Herzegovina?. Crime, Law and Social Change, 51(2), pp. 231-242. doi: 10.1007/s10611-008-9157x
Collins, D. A. (2009). Canada's Prohibition of Automated Bank Machine Withdrawal Charges as a Violation of the WTO GATS. Manchester Journal of International Economic Law, 4(3),
Collins, D. A. (2009). Efficient Breach, Reliance and Contract Remedies at the WTO. Journal of World Trade, 43(2), pp. 225-244.
Collins, D. A. (2009). Health Protection at the World Trade Organization: The J-Value as a Universal Standard for Reasonableness of Regulatory Precautions. JOURNAL OF WORLD TRADE, 43(5), pp. 1071-1091.
Collins, D. A. (2009). Reliance Remedies at the International Centre for the Settlement of Investment Disputes. Northwestern Journal of International Law and Business, 29(1), pp. 195-216.
Collins, R. (2009). Trust and trustworthiness in the fourth and fifth estates. International Journal of Communication, 3, pp. 61-86.
Comuzzi, M., Kotsokalis, C., Spanoudakis, G. view all authors & Yahyapour, R. (2009). Establishing and Monitoring SLAs in complex Service Based Systems. In: Damiani, E., Zhang, J. & Chang, R. (Eds.), 2009 IEEE International Conference on Web Services. Los Alamitos, California, I & II. (pp. 783-790). IEEE. doi: 10.1109/ICWS.2009.47
Comuzzi, M. & Spanoudakis, G. (2009). A Framework for Hierarchical and Recursive Monitoring of Service Based Systems. In: Sasaki, H., Bellot, G. O., Ehmann, M. view all authors & Dini, O. (Eds.), Fourth International Conference on Internet and Web Applications and Services, 2009. ICIW '09. (pp. 383-388). IEEE. doi: 10.1109/ICIW.2009.63
Cook, J. L., Saygin, A. P., Swain, R. view all authors & Blakemore, S. J. (2009). Reduced sensitivity to minimum-jerk biological motion in autism spectrum conditions. Neuropsychologia, 47(14), pp. 3275-3278. doi: 10.1016/j.neuropsychologia.2009.07.010
Cooper, J., Levington, A., Abbott, S. view all authors & Meyer, J. (2009). Partnerships for skills training in the care home sector. Primary Health Care Research and Development, 10(4), pp. 284-289. doi: 10.1017/S146342360999020X
Cowell, R. (2009). Efficient maximum likelihood pedigree reconstruction. Theoretical Population Biology, 76(4), pp. 285-291. doi: 10.1016/j.tpb.2009.09.002
Cowell, R. (2009). Exploration of a novel bootstrap technique for estimating the distribution of outstanding claims reserves in general insurance (Actuarial Research Paper No. 192). London, UK: Faculty of Actuarial Science & Insurance, City University London. doi: Actuarial Research Paper No. 192
Cowell, R. (2009). Validation of an STR peak area model. Forensic Science International: Genetics, 3(3), pp. 193-199. doi: 10.1016/j.fsigen.2009.01.006
Cowell, R., Lauritzen, S. L. & Mortera, J. (2009). Probabilistic expert systems for handling artifacts in complex DNA mixtures (Statistical Research Paper No. 31). London, UK: Faculty of Actuarial Science & Insurance, City University London. doi: Statistical Research Paper No. 31
Cox, A., De Visscher, M. & Martin, P. (2009). The blocks of the Brauer algebra in characteristic zero. Representation Theory, 13, pp. 272-308. doi: 10.1090/S1088-4165-09-00305-7
Cox, A., De Visscher, M. & Martin, P. (2009). A geometric characterisation of the blocks of the Brauer algebra. Journal of the London Mathematical Society, 80(2), pp. 471-494. doi: 10.1112/jlms/jdp039
Cruise, P.A. (2009). The role of culture in organisational and individual personnel selection decisions. (Unpublished Doctoral thesis, City University London)
Dassiou, X. (2009). The Water Industry, Competition and Climate Change. Paper presented at the CCRP Research Workshop, 09-07-2009 - 10-07-2009, Aston University, UK.
Dassiou, X. & Glycopantis, D. (2009). Symposium on Modern Market Structure. The Journal of Economic Asymmetries, 6(2), pp. 1-5.
Dassiou, X., Glycopantis, D. & Stavropoulou, C. (2009). Bundling in General Markets and in Health Care Systems. THE JOURNAL OF ECONOMIC ASYMMETRIES, 6(2), pp. 47-68.
Dassiou, X. & Stern, J. (2009). Infrastructure Contracts: Trust and Institutional Updating. Review of Industrial Organization, 35(1-2), pp. 171-216. doi: 10.1007/s11151-009-9221-4
De Martino, A., Egger, R. & Gogolin, A. O. (2009). Phonon-phonon interactions and phonon damping in carbon nanotubes. Physical Review B (PRB), 79(20), doi: 10.1103/PhysRevB.79.205408
Dell'Anna, L. & De Martino, A. (2009). Multiple magnetic barriers in graphene. Physical Review B (PRB), 79(4), doi: 10.1103/PhysRevB.79.045420
Dell'Anna, L. & De Martino, A. (2009). Wavevector-dependent spin filtering and spin transport through magnetic barriers in graphene. Phys. Rev. B 80, 155416 (2009), 80(15), doi: 10.1103/PhysRevB.80.155416
Denis, A. (2009). Editorial: Pluralism in Economics Education. International Review of Economics Education, 8(2), pp. 6-22.
Devlin, N., Parkin, D. & Browne, J. (2009). Using the EQ-5D as a performance measurement tool in the NHS (09/03). London, UK: Department of Economics, City University London. doi: 09/03
Devlin, N., Tsuchiya, A., Buckingham, K. view all authors & Tilling, C. (2009). Does the value of quality of life depend on duration? (09/07). London, UK: Department of Economics, City University London. doi: 09/07
Devlin, N., Tsuchiya, A., Buckingham, K. view all authors & Tilling, C. (2009). A uniform Time Trade Off method for states better and worse than dead: feasibility study of the 'lead time' approach (09/08). London, UK: Department of Economics, City University London. doi: 09/08
Dewhurst, S., Bould, E., Knott, L. view all authors & Thorley, C. (2009). The roles of encoding and retrieval processes in associative and categorical memory illusions. Journal of Memory & Language, 60(1), pp. 154-164. doi: 10.1016/j.jml.2008.09.002
Dholakia, U. M., Blazevic, V., Wiertz, C. view all authors & Algesheimer, R. (2009). Communal Service Delivery: How Customers Benefit From Participation in Firm-Hosted Virtual P3 Communities. Journal of Service Research, 12, pp. 208-226. doi: 10.1177/1094670509338618
Dhunput, A. (2009). Oil Transport in Piston Ring Assemblies. (Unpublished Doctoral thesis, City University London)
Di Domenico, M. & Fleming, P. (2009). 'It's A Guesthouse Not a Brothel': Policing Sex in The Home-Workplace. Human Relations, 62(2), pp. 245-269. doi: 10.1177/0018726708100359
Dimitrakopoulos, E. G., Kappos, A. J. & Makris, N. (2009). Dimensional analysis of yielding and pounding structures for records without distinct pulses. Soil Dynamics and Earthquake Engineering, 29(7), pp. 1170-1180. doi: 10.1016/j.soildyn.2009.02.006
Dimitrakopoulos, E. G., Makris, N. & Kappos, A. J. (2009). Dimensional analysis of the earthquake-induced pounding between adjacent structures. Earthquake Engineering and Structural Dynamics, 38(7), pp. 867-886. doi: 10.1002/eqe.872
Douiri, A., Siddique, M., Ye, X. view all authors, Beddoe, G. & Slabaugh, G. G. (2009). Enhanced detection in CT colonography using adaptive diffusion filtering. Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 7259, p. 725923. doi: 10.1117/12.811563
Draghici, C. (2009). The 'Global War on Terror'. In: Sriram, C. L., Martin-Ortega, O. & Herman, J. (Eds.), War, Conflict and Human Rights: Theory and Practice. (pp. 138-159). Routledge, Taylor & Francis Group.
Draghici, C. (2009). International organisations and anti-terrorist sanctions: no accountability for human rights violations?. Critical Studies on Terrorism, 2(2), pp. 293-312. doi: 10.1080/17539150903021563
Draghici, C. (2009). "Las competencias personales del Estado". (1 ed.) In: Derecho internacional público. (pp. 261-278). Barcelona: Huygens Editorial.
Draghici, C. (2009). L'applicazione della dottrina 'clean hands' all'esercizio della protezione diplomatica (The Application of the 'Clean Hands' Doctrine to the Exercise of Diplomatic Protection). (2009 ed.) In: Panella, L. (Ed.), La protezione diplomatica: sviluppi e prospettive (Diplomatic protection: developments and prospects). . Torino: Giappichelli.
Draghici, C. (2009). Suspected terrorists' rights between the fragmentation and merger of legal orders: reflections in the margin of the Kadi ECJ appeal judgment. Washington University Global Studies Law Review, 8(4), pp. 627-658.
Draghici, C. (2009). Terror and Beyond: Moral and Normative Dilemmas. International Studies Review, 11(4), pp. 755-759. doi: 10.1111/j.1468-2486.2009.00895.x
Draghici, C. (2009). Trading Justice for Security? UN Anti-terrorism, Due Process Rights and the Role of the Judiciary: Lessons for policy makers. University of East London, Centre for Human Rights In Conflict, Policy,
Duncan, N. J. (2009). Teaching ethics pervasively or in discrete modules?. London: The City Law School of City University London.
Duncan, N.J., Baughan, P., Dymiotis-Wellington, C. view all authors, Halsall, S., Litosseliti, L. & Vielba, C. (2009). Promoting good academic practice through the curriculum and project work. Paper presented at the Teaching and Learning 7th London Scholarship of International Conference, London.
Dykes, J., Lloyd, D. & Radburn, R. (2009). Using the Analytic Hierarchy Process to prioritise candidate improvements to a geovisualization application. Paper presented at the GIS Research UK 17th Annual Conference (GISRUK 2009), 1 - 3 Apr 2009, University of Durham, Durham, UK.
Earle, E. A. (2009). Portfolio of Practice. (Unpublished Doctoral thesis, City University London)
Eccles, M. P., Hawthorne, G., Johnston, M. view all authors, Hunter, M., Steen, N., Francis, J., Hrisos, S., Elovainio, M. & Grimshaw, J. M. (2009). Improving the delivery of care for patients with diabetes through understanding optimised team work and organisation in primary care: Study protocol. Implementation Science, 4, 22 - ?. doi: 10.1186/1748-5908-4-22
Eccles, M. P., Hrisos, S., Francis, J. view all authors, Steen, N., Bosch, M. & Johnston, M. (2009). Can the collective intentions of individual professionals within healthcare teams predict the team's performance: developing methods and theory. Implementation Science, 4, 24 - ?. doi: 10.1186/1748-5908-4-24
Eckhardt, H.S (2009). Gas analysis in the deep ultraviolet wavelength region using fibre-optics and spectrophotometric detection. (Unpublished Doctoral thesis, City, University of London)
Edirisingha, P. & Fothergill, J. (2009). Balancing e-lectures with podcasts: a case study of an undergraduate engineering module. Engineering Education, 4(2), pp. 14-24.
Einbond, A., Schwarz, D. & Bresson, J. (2009). Corpus-Based Transcription as an Approach to the Compositional Control of Timbre. Paper presented at the International Computer Music Conference, ICMC 2009, August 15-21, 2009, Montreal, Quebec, Canada.
Elliott, C. & de Than, C. (2009). Restructuring the Homicide Offences to Tackle Violence, Discrimination and Drugs in a Modern Society. King's Law Journal, 20(1), pp. 69-88. doi: 10.1080/09615768.2009.11427721
Emms, P. & Haberman, S. (2009). Optimal management of an insurer's exposure in a competitive general insurance market. North American Actuarial Journal, 13(1), pp. 77-105. doi: 10.1080/10920277.2009.10597541
Endress, A., Cahill, D., Block, S. view all authors, Watumull, J. & Hauser, M. D. (2009). Evidence of an evolutionary precursor to human language affixation in a nonhuman primate. Biology Letters, 5(6), pp. 749-751. doi: 10.1098/rsbl.2009.0445
Endress, A. & Hauser, M. D. (2009). Syntax-induced pattern deafness. Proceedings of the National Academy of Sciences of the United States of America, 106(49), pp. 21001-21006. doi: 10.1073/pnas.0908963106
Endress, A. & Mehler, J. (2009). Primitive computations in speech processing. Quarterly Journal of Experimental Psychology, 62(11), pp. 2187-2209. doi: 10.1080/17470210902783646
Eriksson, L. E. & Nilsson Schönnesson, L. (2009). Nurse led drop-in clinic for structured HIV counselling and rapid testing [in Swedish]. Stockholm, Sweden: South Stockholm General Hospital.
Fahey, E. (2009). Going Back to Basics: Re-embracing the Fundamentals of the Free Movement of Persons in Metock. Legal Issues of Economic Integration, 36(1), pp. 83-89.
Fahey, E. (2009). Interpretive legitimacy and the distinction between "social assistance" and "work seekers allowance". European Law Review, 34,
Fairfax, H. R. J. (2009). The experience of mindfulness in Western therapeutic encounters; practitioner's perspective. (Unpublished Doctoral thesis, City University London)
Falconieri, S., Murphy, A. & Weaver, D. (2009). Underpricing and Ex Post Value Uncertainty. Financial Management, 38(2), pp. 285-300. doi: 10.1111/j.1755-053X.2009.01036.x
Falk, R., Falk, R. & Ayton, P. (2009). Subjective Patterns of Randomness and Choice: Some Consequences of Collective Responses. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(1), pp. 203-224. doi: 10.1037/0096-1523.35.1.203
Farion, K., Michalowski, W., Wilk, S. view all authors, O'Sullivan, D., Rubin, S. & Weiss, D. (2009). Clinical decision support system for point of care use--ontology-driven design and software implementation. Methods of Information in Medicine, 48(4), pp. 381-390. doi: 10.3414/ME0574
Farran, E. K., Blades, M., Boucher, J. view all authors & Tranter, L. J. (2009). How do individuals with Williams syndrome learn a route in a real-world environment?. Developmental Science, 13(3), pp. 454-468. doi: 10.1111/j.1467-7687.2009.00894.x
Fernandez-Luna, J. M., Huete, J. F. & MacFarlane, A. (2009). Introduction to the special issue on teaching and learning in information retrieval. Information Retrieval, 12(2), pp. 99-101. doi: 10.1007/s10791-009-9090-3
Fernandez-Luna, J. M., Huete, J. F., MacFarlane, A. view all authors & Efthimiadis, E. N. (2009). Teaching and learning in information retrieval. Information Retrieval, 12(2), pp. 201-226. doi: 10.1007/s10791-009-9089-9
Ferrell, J. & Greer, C. ORCID: 0000-0002-8623-702X (2009). Editorial: Global collapse and cultural possibility. Crime, Media, Culture, 5(1), pp. 5-7. doi: 10.1177/1741659008102059
Ferriani, S., Cattani, G. & Baden-Fuller, C. ORCID: 0000-0002-0230-1144 (2009). The relational antecedents of project-entrepreneurship: Network centrality, team composition and project performance. Research Policy, 38(10), pp. 1545-1558. doi: 10.1016/j.respol.2009.09.001
Filippopoulos, P. (2009). Counselling Psychology training in the United Kingdom for Greek students who completed their undergraduate training in Greece: themes when comparing the two different organisational settings. European Journal of Counselling Psychology, 1(2), pp. 3-15.
Fleming, P. & Costas, J. (2009). Beyond dis-identification: A discursive approach to self-alienation in contemporary organizations. Human Relations, 62(3), pp. 353-378. doi: 10.1177/0018726708101041
Fonseca, J., O'Sullivan, C. & Coop, M. R. (2009). Image segmentation techniques for granular materials. Paper presented at the 6th International Conference on Micromechanics of Granular Media, 13-07-2009 - 17-07-2009, Colorado, USA. doi: 10.1063/1.3179898
Ford, E. & Ayers, S. (2009). Stressful events and support during birth: The effect on anxiety, mood, and perceived control. Journal of Anxiety Disorders, 23(2), pp. 260-268. doi: 10.1016/j.janxdis.2008.07.009
Ford, E., Ayers, S. & Wright, D. B. (2009). Measurement of maternal perceptions of Support and Control in Birth (SCIB). Journal of Women's Health, 18(2), pp. 245-252. doi: 10.1089/jwh.2008.0882
Forster, B. & Gillmeister, H. (2009). VIEWING FINGERS OF THE SAME HAND CAN DISTURB TACTILE ATTENTIONAL SELECTION. PSYCHOPHYSIOLOGY, 46, S121 - S121.
Forster, B., Sambo, C.F. & Pavone, E.F. (2009). ERP correlates of tactile spatial attention differ under intra- and intermodal conditions. BIOLOGICAL PSYCHOLOGY, 82(3), pp. 227-233. doi: 10.1016/j.biopsycho.2009.08.001
Fothergill, J. (2009). Conclusions from the Melville Report on the Changing Learner Experience (CLEx) Enquiry. Paper presented at the EDEN 2009, 10-06-2009 - 13-06-2009, Gdansk, Poland.
Fothergill, J. (2009). A renaissance of audio: podcasting approaches for learning on campus and beyond. Paper presented at the EDEN 2009, 10-13 June 2009, Gdansk, Poland.
Francis, J., Tinmouth, A., Stanworth, S. view all authors, Grimshaw, J. M., Johnston, M., Hyde, C., Brehaut, J., Stockton, C., Fergusson, D. & Eccles, M. P. (2009). Using theories of behaviour to understand transfusion prescribing in three clinical contexts in two countries: Development work for an implementation trial (protocol). Implementation Science, 4, 70 - ?.
Freeman, E. D. & Verghese, P. (2009). Peeling Plaids Apart: Context Counteracts Cross-Orientation Contrast Masking. PLoS ONE, 4(12), e8123. doi: 10.1371/journal.pone.0008123
Fring, A. (2009). Particles versus fields in PT-symmetrically deformed integrable systems. Pramana, 73(2), pp. 363-373. doi: 10.1007/s12043-009-0128-2
Fu, F. (2009). Progressive collapse analysis of high-rise building with 3-D finite element modeling method. Journal of Constructional Steel Research, 65(6), pp. 1269-1278. doi: 10.1016/j.jcsr.2009.02.001
Fuertes, A., Izzeldin, M. & Kalotychou, E. (2009). On forecasting daily stock volatility: The role of intraday information and market conditions. International Journal of Forecasting, 25(2), pp. 259-281. doi: 10.1016/j.ijforecast.2009.01.006
Gaigg, S. B. & Bowler, D. M. (2009). Brief Report: Attenuated Emotional Suppression of the Attentional Blink in Autism Spectrum Disorder: Another Non-Social Abnormality?. Journal of Autism and Developmental Disorders, 39(8), pp. 1211-1217. doi: 10.1007/s10803-009-0719-2
Gaigg, S. B. & Bowler, D. M. (2009). Illusory Memories of Emotionally Charged Words in Autism Spectrum Disorder: Further Evidence for Atypical Emotion Processing Outside the Social Domain. Journal of Autism and Developmental Disorders, 39(7), pp. 1031-1038. doi: 10.1007/s10803-009-0710-y
Galanis, S. ORCID: 0000-0003-4286-7449 (2009). Syntactic foundations for unawareness of theorems. In: TARK '09 Proceedings of the 12th Conference on Theoretical Aspects of Rationality and Knowledge. (pp. 136-145). New York: ACM. ISBN 9781605585604 doi: 10.1145/1562814.1562835
Gallagher, A. L. & Chiat, S. (2009). Evaluation of speech and language therapy interventions for pre-school children with specific language impairment: a comparison of outcomes following specialist intensive, nursery-based and no intervention. International Journal of Language & Communication Disorders, 44(5), pp. 616-638. doi: 10.1080/13682820802276658
Galvao Jr, A. F. & Montes-Rojas, G. (2009). Instrumental variables quantile regression for panel data with measurement errors (09/06). London, UK: Department of Economics, City University London. doi: 09/06
Galvao Jr, A. F., Montes-Rojas, G. & Olmo, J. (2009). Threshold quantile autoregressive models (09/05). London, UK: Department of Economics, City University London. doi: 09/05
Galvao Jr, A. F., Montes-Rojas, G. & Park, S. Y. (2009). Quantile autoregressive distributed lag model with an application to house price returns (09/04). London, UK: Department of Economics, City University London. doi: 09/04
Gardner, B., Davidson, R., McAteer, J. view all authors, Michie, S. & Evidence into Recommendations study group, . (2009). A method for studying decision-making by guideline development groups. Implementation Science, 4, 48 - ?. doi: 10.1186/1748-5908-4-48
Gash, V. (2009). Sacrificing their Careers for their Families? An Analysis of the Family Pay Penalty in Europe. Social Indicators Research, 93(3), pp. 569-586. doi: 10.1007/s11205-008-9429-y
Gashi, I., Popov, P. T. & Stankovic, V. (2009). Uncertainty explicit assessment of off-the-shelf software: A Bayesian approach. Information and Software Technology, 51(2), pp. 497-511. doi: 10.1016/j.infsof.2008.06.003
Gashi, I., Stankovic, V., Leita, C. view all authors & Thonnard, O. (2009). An Experimental Study of Diversity with Off-The-Shelf AntiVirus Engines. Paper presented at the Eighth IEEE International Symposium on Network Computing and Applications, 9 - 11 July 2009, Cambridge, MA, USA.
Gatzidis, C., Brujic-Okretic, V. & Mastroyanni, M. (2009). Evaluation of Non-photorealistic 3D Urban Models for Mobile Device Navigation.. Paper presented at the Virtual and Mixed Reality: Third International Conference, 19 - 24 July 2009, San Diego, CA, USA. doi: 10.1007/978-3-642-02771-0_20
Gavaises, M., Andriotis, A., Papoulias, D. view all authors, Mitroglou, N. & Theodorakakos, A. (2009). Characterization of string cavitation in large-scale Diesel nozzles with tapered holes. Physics of Fluids, 21(5), 052107. doi: 10.1063/1.3140940
Gboney, W. (2009). Econometric assessment of the impact of power sector reforms in Africa: A study of the generation, transmission and distribution sectors. (Unpublished Doctoral thesis, City University London)
Giaralis, A. & Spanos, P. D. (2009). Determination of design spectrum compatible evolutionary spectra via Monte Carlo peak factor estimation. Paper presented at the 7th International Probabilistic Workshop (7th IPW), 25th - 26th November 2009, Delft, The Netherlands.
Giaralis, A. & Spanos, P. D. (2009). Wavelet-based response spectrum compatible synthesis of accelerograms-Eurocode application (EC8). Soil Dynamics and Earthquake Engineering, 29(1), pp. 219-235. doi: 10.1016/j.soildyn.2007.12.002
Gill, R. (2009). Mediated intimacy and postfeminism: A discourse analytic examination of sex and relationships advice in a women's magazine. Discourse and Communication, 3(4), pp. 345-369. doi: 10.1177/1750481309343870
Goncalves de Assis, P. E (2009). Non-Hermitian Hamiltonians in Field Theory. (Doctoral thesis, City University London)
Goodall, A. H. (2009). Highly cited leaders and the performance of research universities. Research Policy, 38(7), pp. 1079-1092. doi: 10.1016/j.respol.2009.04.002
Gould, D. J., Chudleigh, J. H., Moralejo, D. view all authors & Drey, N. (2009). Interventions to improve hand hygiene compliance in patient care. Cochrane Database of Systematic Reviews(4), doi: 10.1002/14651858.CD005186.pub2
Grandori, A. & Furnari, S. (2009). Types of Complementarity, Combinative Organization Forms and Structural Heterogeneity: Beyond Discrete Structural Alternatives. In: Morroni, M. (Ed.), Corporate Governance, Organization and the Firm: Co-operation and Outsourcing in a Globalised Market. (pp. 63-86). London, UK: Edward Elgar.
Grattan, K. T. V., Agrawal, A., Kejalakshmy, N. view all authors & Rahman, B. M. (2009). Soft Glass Equiangular Spiral Photonic Crystal Fiber for Supercontinuum Generation. IEEE Photonics Technology Letters, 21(22), pp. 1722-1724. doi: 10.1109/LPT.2009.2032523
Gray, J., He, Y., Ilderton, A. view all authors & Lukas, A. (2009). STRINGVACUA: A Mathematica Package for Studying Vacuum Configurations in String Phenomenology. Computer Physics Communications Package, 180(1), pp. 107-119. doi: 10.1016/j.cpc.2008.08.009
Griffiths, M. K., Reyes-Aldasoro, C. C., Savas, D. view all authors & Greenfield, T. (2009). IOME, A Toolkit for Distributed and Collaborative Computational Science and Engineering. Paper presented at the UK e-science All Hands Meeting, 07-12-2009 - 09-12-2009, Oxford, UK.
Guillemain, Henri (2009). Fibre optic sensing techniques for the detection of lead (II) ions. (Unpublished Doctoral thesis, City University London)
Haberman, S. & Renshaw, A. E. (2009). On age-period-cohort parametric mortality rate projections. Insurance: Mathematics and Economics, 45(2), pp. 255-270. doi: 10.1016/j.insmatheco.2009.07.006
Hackett, Addy (2009). An investigation into Stress and Coaching-needs in the National Health Service and UK Hospices. (Unpublished Doctoral thesis, City University London)
Haddad, M. (2009). Depression in adults with a chronic physical health problem: treatment and management. International Journal of Nursing Studies, 46(11), pp. 1411-1414. doi: 10.1016/j.ijnurstu.2009.08.007
Haddad, M. (2009). Mental health and older people. In: Newell, R. & Gournay, K. (Eds.), Mental Health Nursing: An evidence-based approach. (pp. 288-321). Churchill-Livingston.
Haddad, M., Walters, P. & Tylee, A. (2009). Mood disorders in primary care. Psychiatry, 8(2), pp. 71-75. doi: 10.1016/j.mppsy.2008.11.001
Hadjisavvas, V., Damianou, C., Ioannides, K. view all authors, Mylonas, N., Couppis, A., Kyriacou, P. A., Iosif, D., HadjiCharalambous, T. & Parea, G. (2009). Penetration of high intensity focused ultrasound in vitro and in vivo rabbit brain using MR imaging. Paper presented at the 2009 9th International Conference on Information Technology and Applications in Biomedicine, 4-7 Nov 2009, Larnaca, Cyprus.
Haenschel, C., Bittner, R. A., Waltz, J. view all authors, Haertling, F., Wibral, M., Singer, W., Linden, D. E. J. & Rodriguez, E. (2009). Cortical oscillatory activity is critical for working memory as revealed by deficits in early-onset schizophrenia. Journal of Neuroscience, 29(30), pp. 9481-9489. doi: 10.1523/JNEUROSCI.1428-09.2009
Hampton, J. A., Storms, G., Simmons, C. L. view all authors & Heussen, D. (2009). Feature integration in natural language concepts. Memory & Cognition, 37(8), pp. 1150-1163. doi: 10.3758/MC.37.8.1150
Harb, Z. (2009). The July 2006 War and the Lebanese blogsphere: towards an alternative media tool in covering wars. Journal of Media Practice, 10(2-3), pp. 255-258. doi: 10.1386/jmpr.10.2-3.255_3
Harding, C. ORCID: 0000-0002-5192-2027 (2009). Involving adult service users with learning disabilities in the training of speech and language therapy students. International Journal of Teaching and Learning in Higher Education, 20(2), pp. 207-213.
Harding, C. (2009). An evaluation of the benefits of non-nutritive sucking for premature infants as described in the literature. ARCHIVES OF DISEASE IN CHILDHOOD, 94(8), pp. 636-640. doi: 10.1136/adc.2008.144204
Harrison, M. D. & Broom, M. (2009). A game-theoretic model of interspecific brood parasitism with sequential decisions. Journal of Theoretical Biology, 256(4), pp. 504-517. doi: 10.1016/j.jtbi.2008.08.033
Hatzis, N. (2009). Neutrality, Proselytism and Religious Minorities at the European Court of Human Rights and the US Supreme Court. Harvard International Law Journal, 49, pp. 120-131.
Hatzopoulos, P. & Haberman, S. (2009). A parameterized approach to modeling and forecasting mortality. Insurance: Mathematics and Economics, 44(1), pp. 103-123. doi: 10.1016/j.insmatheco.2008.10.008
He, Y., Jejjala, V. & Minic, D. (2009). Eigenvalue Density, Li's Positivity, and the Critical Strip (VPI-IPNAS-09-03). Blacksburg, USA: Virgina Tech, IPNAS. doi: VPI-IPNAS-09-03
Hebing, M. (2009). Refugee Stories in Britain: Narratives of Personal Experiences in a Network of Power Relations. (Unpublished Doctoral thesis, City University London)
Helleiner, E. & Pagliari, S. (2009). The End of Self-Regulation? Hedge Funds and Derivatives in Global Financial Governance. In: Helleiner, E, Pagliari, S & Zimmerman, H (Eds.), Global Finance in Crisis: The Politics of International Regulatory Change. . Taylor & Francis.
Herberts, C. (2009). The Application of Health Psychology to Smoking Cessation within a Deprived London Borough. (Unpublished Doctoral thesis, City University London)
Hickey, M., Kyriacou, P. A., Samuels, N. view all authors, Randive, N., Chang, S. H., Maney, K. & Langford, R. M. (2009). Photoplethysmographic signals recorded from human abdominal organs using a fibreoptic probe. British Journal of Anaesthesia, 102(4), 579P-580P. doi: 10.1093/bja/aep007
Hickey, M., Samuels, N., Randive, N. view all authors, Langford, R. M. & Kyriacou, P. A. (2009). Development and evaluation of a photometric fibre-optic sensor for monitoring abdominal organ photoplethysmographs and blood oxygen saturation. SPIE Proceedings, 7503, ISSN 0277-786X doi: 10.1117/12.834307
Hickey, M., Samuels, N., Randive, N. view all authors, Langford, R. M. & Kyriacou, P. A. (2009). In-Vivo Evaluation of a Fiber-Optic Splanchnic Photoplethysmographic Sensor during Open Laparotomy. Engineering in Medicine and Biology Society, 2009. EMBC 2009. Annual International Conference of the IEEE, pp. 1505-1508. doi: 10.1109/IEMBS.2009.5334159
Hickey, M., Samuels, N., Randive, N. view all authors, Langford, R. M. & Kyriacou, P. A. (2009). Photoplethysmographic signals from human splanchnic organs using a new fibre-optic sensor. Paper presented at the Annual National Conference of the Institute of Physics and Engineering in Medicine (IPEM), 14-16 Sep 2009, Liverpool, UK.
Hilari, K. & Byng, S. (2009). Health-related quality of life in people with severe aphasia. International Journal of Language & Communication Disorders, 44(2), pp. 193-205. doi: 10.1080/13682820802008820
Hilari, K., Lamping, D. L., Smith, S. C. view all authors, Northcott, S., Lamb, A. & Marshall, J. (2009). Psychometric properties of the Stroke and Aphasia Quality of Life Scale (SAQOL-39) in a generic stroke population. Clinical Rehabilitation, 23(6), pp. 544-557. doi: 10.1177/0269215508101729
Howe, J. M. & King, A. (2009). Closure Algorithms for Domains with Two Variables Per Inequality (TR/2009/DOC/01). . doi: TR/2009/DOC/01
Howe, J. M. & King, A. (2009). Logahedra: A new weakly relational domain. Lecture Notes in Computer Science, 5799, pp. 306-320. doi: 10.1007/978-3-642-04761-9_23
Howe, M. L., Wimmer, M. C., Gagnon, N. view all authors & Plumpton, S. (2009). An associative-activation theory of children's and adults' memory illusions. Journal of Memory and Language, 60(2), pp. 229-251. doi: 10.1016/j.jml.2008.10.002
Howell, P., Davis, S. & Williams, R. M. (2009). The effects of bilingualism on speakers who stutter during late childhood. Archives of Disease in Childhood, 94, pp. 42-46. doi: 10.1136/adc.2007.134114
Howell, S., Tripoliti, E. & Pring, T. (2009). Delivering the Lee Silverman Voice Treatment (LSVT) by web camera: a feasibility study. International Journal of Language & Communication Disorders, 44(3), pp. 287-300. doi: 10.1080/13682820802033968
Hrisos, S., Dickinson, H. O., Eccles, M. P. view all authors, Francis, J. & Johnston, M. (2009). Are there valid proxy measures of clinical behaviour?. Implementation Science, 4, 37 - ?. doi: 10.1186/1748-5908-4-37
Hrisos, S., Eccles, M. P., Francis, J. view all authors, Bosch, M., Dijkstra, R., Johnston, M., Grol, R., Kaner, E. F. S. & Steen, I. (2009). Using psychological theory to understand the clinical management of type 2 diabetes in Primary Care: a comparison across two European countries. BMC Health Services Research, 9, 140 - ?. doi: 10.1186/1472-6963-9-140
Hrisos, S., Eccles, M. P., Francis, J. view all authors, Dickinson, H. O., Kaner, E. F. S., Beyer, F. & Johnston, M. (2009). Are there valid proxy measures of clinical behaviour? a systematic review. Paper presented at the UK Society for Behavioural Medicine Annual Scientific Meeting, 10 Dec 2007, Warwick, UK. doi: 10.1186/1748-5908-4-37
Hu, J., Kaparias, I. & Bell, M. G. H. (2009). Spatial econometrics models for congestion prediction with in-vehicle route guidance. IET Intelligent Transport Systems, 3(2), pp. 159-167. doi: 10.1049/iet-its:20070062
Imai, S., Jain, N. & Ching, A. (2009). Bayesian Estimation of Dynamic Discrete Choice Models. Econometrica, 77(6), pp. 1865-1899. doi: 10.3982/ECTA5658
Jackson, J., Bradford, B., Hohl, K. view all authors & Farrall, S. (2009). Does the fear of crime erode public confidence in policing?. Policing: A Journal of Policy and Practice, 3(1), pp. 100-111. doi: 10.1093/police/pan079
Jaimovich, E. & Merella, V. (2009). The role of quality ladders in a Ricardian model of trade with nonhomothetic preference. (09/01). London, UK: Department of Economics, City University London. doi: 09/01
Jain, N. (2009). Lender learning and entry under demand uncertainty. Economics Bulletin, 29(1), pp. 100-107.
Jarrett, L. (2009). Being and becoming a reflective practitioner, through guided reflection, in the role of a spasticity management nurse specialist. (Unpublished Doctoral thesis, City University London)
Jarzabkowski, P. & Balogun, J. (2009). The practice and process of delivering integration through strategic planning. Journal of Management Studies, 46(8), pp. 1255-1288. doi: 10.1111/j.1467-6486.2009.00853.x
Jarzabkowski, P. & Spee, A. P. (2009). Strategy-as-practice: A review and future directions for the field. International Journal of Management Reviews, 11(1), pp. 69-95. doi: 10.1111/j.1468-2370.2008.00250.x
Jenkins, V. (2009). Parents with learning disabilities: a counselling psychology perspective.. (Unpublished Doctoral thesis, City University London)
Jessa, Zahra (2009). Improving the detection of correctable low vision in older people. (Unpublished Doctoral thesis, City University London)
Jimeno-Yepes, A., Jimenez-Ruiz, E. ORCID: 0000-0002-9083-4599, Berlanga-Llavori, R. view all authors & Rebholz-Schuhmann, D. (2009). Reuse of terminological resources for efficient ontological engineering in Life Sciences. BMC BIOINFORMATICS, 10(Sup 10), S4. doi: 10.1186/1471-2105-10-S10-S4
John, D., Gatzidis, C., Liarokapis, F. view all authors, Boucouvalas, A. & Brujic-Okretic, V. (2009). A Framework for the Development of Online, Location-Specific, Expressive 3D Social Worlds. Paper presented at the Games and Virtual Worlds for Serious Applications, 2009.
Jokipii, T.K. (2009). Bank Capital Management. (Unpublished Doctoral thesis, City University London)
Jones, A. (2009). Proximity and Power within Investment Relationships: the case of the UK Private Equity industry. Geoforum, 40(5), pp. 809-819. doi: 10.1016/j.geoforum.2009.09.002
Jones, A. (2009). Theorising Global Business Spaces. Geografiska Annaler B: Human Geography, 91(3), pp. 203-218. doi: 10.1111/j.1468-0467.2009.00315.x
Jones, P. (2009). Counselling psychology and cancer. (Unpublished Doctoral thesis, City University London)
Jäger, H., Steels, L., Baronchelli, A. view all authors, Briscoe, E., Christiansen, M. H., Griffiths, T., Jager, G., Kirby, S., Komarova, N., Richerson, P. J. & Triesch, J. (2009). What can mathematical, computational and robotic models tell us about the origins of syntax? In: Bickerton, D. & Szathmáry, E. (Eds.), Biological Foundations and Origin of Syntax. (pp. 385-410). USA: MIT Press.
Kaishev, V. K. & Dimitrova, D. S. (2009). Dirichlet Bridge Sampling for the Variance Gamma Process: Pricing Path-Dependent Options.. Management Science, 55, pp. 483-496. doi: 10.1287/mnsc.1080.0953
Kajan, K. (2009). Finite Element Modelling and Investigation of High Speed, Large Force and Long Lifetime Electromagnetic Actuators. (Unpublished Doctoral thesis, City University)
Kaparias, I. & Bell, M. G. H. (2009). Testing a reliable in-vehicle navigation algorithm in the field. IET Intelligent Transport Systems, 3(3), pp. 314-324. doi: 10.1049/iet-its.2008.0075
Karamanidou, E. (2009). The discursive legitimation of asylum policies in Greece and Ireland. (Unpublished Doctoral thesis, City University London)
Karampelas, P., Basdekis, I. & Stephanidis, C. (2009). Web user interface design strategy: Designing for device independence. Paper presented at the 5th International Conference, UAHCI 2009, 19-24 Jul 2009, San Diego, USA.
Kargbo, A.K. (2009). THE POST-1986 UK INSOLVENCY SYSTEM: A STUDY OF MODE OF RESOLUTION AND OF COMPANY OUTCOME. (Unpublished Doctoral thesis, City University London)
Kejalakshmy, N., Rahman, B. M., Agrawal, A. view all authors, Tanvir, H. M. & Grattan, K. T. V. (2009). Metal-Coated Defect-Core Photonic Crystal Fiber for THz Propagation. IEEE/OSA Journal of Lightwave Technology, 27(11), pp. 1631-1637. doi: 10.1109/JLT.2009.2020919
Kerrouche, A., Boyle, W. J. O., Sun, T. view all authors, Grattan, K. T. V., Schmidt, J. W. & Taljsten, B. (2009). Strain Measurement Using Embedded Fiber Bragg Grating Sensors Inside an Anchored Carbon Fiber Polymer Reinforcement Prestressing Rod for Structural Monitoring. IEEE Sensors Journal, 9(11), pp. 1456-1461. doi: 10.1109/JSEN.2009.2018355
Kessar, R. (2009). On duality inducing automorphisms and sources of simple modules in classical groups. Journal of Group Theory, 12(3), pp. 331-349. doi: 10.1515/JGT.2008.081
Kessar, R. & Linckelmann, M. (2009). On two theorems of Flavell. Archiv der Mathematik, 92(1), pp. 1-6. doi: 10.1007/s00013-008-2911-6
Khalili, N., Wood, J. & Dykes, J. (2009). Mapping the geography of social networks. Paper presented at the GIS Research UK, 17th Annual Conference, 1 - 3 Apr 2009, University of Durham, Durham, UK.
Khan, S. H., Aristovich, K. Y. & Borovkov, A. I. (2009). Solution of the Forward Problem in Magnetic-Field Tomography (MFT) Based on Magnetoencephalography (MEG). IEEE TRANSACTIONS ON MAGNETICS, 45(3), pp. 1416-1419. doi: 10.1109/TMAG.2009.2012653
Kim, S., Nouri, J. M., Yan, Y. view all authors & Arcoumanis, C. (2009). Effects of intake flow on the spray structure of a multi-hole injector in a DISI engine. International Journal of Automotive Technology, 10(3), doi: 10.1007/s12239-009-0032-2
Kliman, Elizabeth Angela (2009). Bereavement and Disability: Implications for the Therapeutic Encounter. (Unpublished Doctoral thesis, City University London)
Kloukinas, C. (2009). Better abstractions for reusable components & architectures. Paper presented at the 31st International Conference on Software Engineering (ICSE-Companion 2009), 16 - 24 May 2009, Vancouver, BC, Canada. doi: 10.1109/ICSE-COMPANION.2009.5070981
Komninos, N. & Douligeris, C. (2009). LIDF: Layered intrusion detection framework for ad-hoc networks. Ad Hoc Networks, 7(1), pp. 171-182. doi: 10.1016/j.adhoc.2008.01.001
Kontopoulos, G. S. (2009). The Value Relevance of Accounting Information in the UK, the Netherlands, Germany and France: Effects Arising from the Adoption of International Financial Reporting Standards. (Unpublished Doctoral thesis, Cass Business School, City University)
Kotecha, A., O'Leary, N., Melmoth, D. R. view all authors, Grant, S. & Crabb, D. P. (2009). The Functional Consequences of Glaucoma for Eye-Hand Coordination. Investigative Ophthalmology & Visual Science, 50(1), pp. 203-213. doi: 10.1167/iovs.08-2496
Koutrakos, P. (2009). Case C-205/06, commission v. Austria, judgment of the Court (Grand Chamber) of 3 March 2009, not yet reported; Case C-249/06, commission v. Sweden, judgment of the Court (Grand Chamber) of 3 March 2009. Common Market Law Review, 46(6), pp. 2059-2076.
Koutrakos, P. (2009). The application of EC law to defence-related industries—changing interpretations of Article 296 EC. In: Barnard, C. (Ed.), The Outer Limits of European Union Law. (pp. 307-327). Hart Publishing.
Kulesza, T., Wong, W-K., Stumpf, S. view all authors, Perona, S., White, R., Burnett, M., Oberst, I. & Ko, A. J. (2009). Fixing the program my computer learned: barriers for end users, challenges for the machine. In: Conati, C., Bauer, M., Oliver, N. view all authors & Weld, D. S. (Eds.), Proceedings of the 14th international conference on Intelligent user interfaces. (pp. 187-196). ACM. doi: 10.1145/1502650.1502678
Kulier, R., Coppus, S. F., Zamora, J. view all authors, Hadley, J., Malick, S., Das, K., Weinbrenner, S., Meyerrose, B., Decsi, T., Horvath, A. R., Nagy, E., Emparanza, J. I., Arvanitis, T. N., Burls, A., Cabello, J. B., Kaczor, M., Zanrei, G., Pierer, K., Stawiarz, K., Kunz, R., Mol, B. W. & Khan, K. S. (2009). The effectiveness of a clinically integrated e-learning course in evidence-based medicine: a cluster randomised controlled trial. BMC Medical Education, 9, 21 - ?. doi: 10.1186/1472-6920-9-21
Kusev, P., van Schaik, P., Ayton, P. view all authors, Dent, J. & Chater, N. (2009). Exaggerated Risk: Prospect Theory and Probability Weighting in Risky Choice. Journal of Experimental Psychology: Learning Memory and Cognition, 35(6), pp. 1487-1505. doi: 10.1037/a0017039
Kwon, S.D., Yang, H.D. & Rowley, C. (2009). The Purchasing Performance of Organizations Using e-Marketplaces. British Journal Of Management, 20(1), pp. 106-124. doi: 10.1111/j.1467-8551.2007.00555.x
Kyriacou, M. (2009). Foreign Exchange Market Microstructure and Forecasting. (Unpublished Doctoral thesis, City University London)
Kyriacou, P. A., Crerar-Gilbert, A., Langford, R. M. view all authors & Jones, D. P. (2009). Measurement of photoplethysmographic signals in human abdominal organs. Measurement, 42(7), pp. 1027-1031. doi: 10.1016/j.measurement.2009.03.004
Kyriacou, P. A., Pancholi, M. & Yeh, J. (2009). Investigation of the in-vitro loading on an artificial spinal disk prosthesis. Journal of Physics: Conference Series, 178(1), e012023. doi: 10.1088/1742-6596/178/1/012023
Kyriacou, P. A., Shafqat, K. & Pal, S. K. (2009). Pilot investigation of photoplethysmographic signals and blood oxygen saturation values during blood pressure cuff-induced hypoperfusion. Measurement, 42(7), pp. 1001-1005. doi: 10.1016/j.measurement.2009.02.005
Lang, T. (2009). Reshaping the Food System for Ecological Public Health. Journal of Hunger & Environmental Nutrition, 4(3-4), pp. 315-335. doi: 10.1080/19320240903321227
Lang, T. (2009). What President Obama can do in the world. Public Health Nutrition, 12(4), pp. 581-583. doi: 10.1017/S1368980009005436
Laudicella, M., Cookson, R., Jones, A. M. view all authors & Rice, N. (2009). Health care deprivation profiles in the measurement of inequality and inequity: an application to GP fundholding in the English NHS. Journal of Health Economics, 28(6), pp. 1048-1061. doi: 10.1016/j.jhealeco.2009.07.001
Leventides, J. & Karcanias, N. (2009). Zero assignment of matrix pencils by additive structured transformations. LINEAR ALGEBRA AND ITS APPLICATIONS, 431(8), pp. 1380-1396. doi: 10.1016/j.laa.2009.05.033
Ley, I., Haggard, P. & Yarrow, K. (2009). Optimal integration of auditory and vibrotactile information for judgments of temporal order. Journal of Experimental Psychology: Human Perception and Performance, 35(4), pp. 1005-1019. doi: 10.1037/a0015021
Linckelmann, M. (2009). On H* (C{script}; k×) for fusion systems. Homology, Homotopy and Applications, 11(1), pp. 203-218.
Linckelmann, M. (2009). On dimensions of block algebras. Mathematical Research Letters, 16(6), pp. 1011-1014.
Linckelmann, M. (2009). On graded centres and block cohomology. Proceedings of the Edinburgh Mathematical Society, 52(2), pp. 489-514. doi: 10.1017/S0013091507001137
Linckelmann, M. (2009). Trivial source bimodule rings for blocks and p-permutation equivalences. Transactions of the American Mathematical Society, 361(3), pp. 1279-1316. doi: 10.1090/S0002-9947-08-04577-7
Linckelmann, M. (2009). The orbit space of a fusion system is contractible. Proceedings of the London Mathematical Society, 98(1), pp. 191-216. doi: 10.1112/plms/pdn029
Linckelmann, M. & Mazza, N. (2009). The Dade group of a fusion system. Journal of Group Theory, 12(1), pp. 55-74. doi: 10.1515/JGT.2008.060
Lind, S. E. & Bowler, D. M. (2009). Delayed self-recognition in children with autism spectrum disorder. Journal of Autism and Developmental Disorders, 39(4), pp. 643-650. doi: 10.1007/s10803-008-0670-7
Lind, S. E. & Bowler, D. M. (2009). Language and theory of mind in autism spectrum disorder: The relationship between complement syntax and false belief task performance. Journal of Autism and Developmental Disorders, 39(6), pp. 929-937. doi: 10.1007/s10803-009-0702-y
Lind, S. E. & Bowler, D. M. (2009). Recognition memory, self-other source memory, and theory-of-mind in children with autism spectrum disorder. Journal of Autism and Developmental Disorders, 39(9), pp. 1231-1239. doi: 10.1007/s10803-009-0735-2
Linton, O., Nielsen, J. P. & Nielsen, S.F. (2009). Non-parametric regression with a latent time series. ECONOMETRICS JOURNAL, 12(2), pp. 187-207. doi: 10.1111/j.1368-423X.2009.00278.x
Littler, J. & Gilbert, J. (2009). Beyond Gesture, Beyond Pragmatism. In: Pugh, J. (Ed.), What is Radical Politics Today? (pp. 127-135). Basingstoke, UK: Palgrave Macmillan.
Liu, H. & Verrall, R. J. (2009). A Bootstrap Estimate of the Predictive Distribution of Outstanding Claims for the Schnieper Model. ASTIN Bulletin, 39(2), pp. 677-689. doi: 10.2143/AST.39.2.2044653
Liu, T., Fothergill, J., Dodd, S. J. view all authors & Nilsson, U. H. (2009). Influence of semicon shields on the dielectric loss of XLPE cables. CEIDP: 2009 ANNUAL REPORT CONFERENCE ON ELECTRICAL INSULATION AND DIELECTRIC PHENOMENA, pp. 395-398. ISSN 0084-9162
Llorente, Lourdes (2009). Optical aberrations in ametropic eyes and their change with corneal refractive surgery. (Unpublished Doctoral thesis, City, University of London)
Lloyd, David (2009). Evaluating human-centered approaches for geovisualization. (Unpublished Doctoral thesis, City University London)
Lockett, R. D. (2009). Instabilities and soot formation in high pressure explosion flames. Paper presented at the The British-French Flame Days, 02 - 03 April 2009, Lille, France.
Lockett, R. D., Liverani, L., Thaker, D. view all authors & Arcoumanis, C. (2009). The characterisation of diesel cavitating flow using time-resolved light scattering. Paper presented at the IMechE Conference on Injection Systems for IC Engines, 13 - 14 May 2009, London, UK.
Longbottom, R., Fruttiger, M., Douglas, R. H. view all authors, Martinez-Barbera, J. P., Greenwood, J. & Moss, S. E. (2009). Genetic ablation of retinal pigment epithelial cells reveals the adaptive response of the epithelium and impact on photoreceptors. Proceedings of the National Academy of Sciences of the United States of America (PNAS) ISSN 1091-6490, 106(44), pp. 18728-18733. doi: 10.1073/pnas.0902593106
Lorenzoli, D. & Spanoudakis, G. (2009). Detection of Security and Dependability Threats: A Belief Based Reasoning Approach. In: Falk, R., Goudalo, W., Chen, E. Y. view all authors, Savola, R. & Popescu, M. (Eds.), Emerging Security Information, Systems and Technologies, 2009. SECURWARE '09. Third International Conference on. (pp. 312-320). IEEE. doi: 10.1109/SECURWARE.2009.55
Lu, C. (2009). Essays on Cross-Sectional Asset Pricing. (Unpublished Doctoral thesis, Cass Business School)
Lu, W., MacFarlane, A. & Venuti, F. (2009). Okapi-based XML indexing. Aslib Proceedings; New Information Perspectives, 61(5), pp. 483-499. doi: 10.1108/00012530910989634
Lunde, A. M., De Martino, A., Schulz, A. view all authors, Egger, R. & Flensberg, K. (2009). Electron-electron interaction effects in quantum point contacts. New Journal of Physics, 11, doi: 10.1088/1367-2630/11/2/023031
Ma, Q. & Yan, S. (2009). QALE-FEM for numerical modelling of non-linear interaction between 3D moored floating bodies and steep waves. INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, 78(6), pp. 713-756. doi: 10.1002/nme.2505
Ma, Q. & Zhou, J. (2009). MLPG_R Method for Numerical Simulation of 2D Breaking Waves. CMES: Computer Modeling in Engineering & Sciences, 43(3), pp. 277-304. doi: 10.3970/cmes.2009.043.277
MacFarlane, A. (2009). Models Performance Issues in Parallel Computing for Information Retrieval. In: Goker, A. S. & Davies, J. (Eds.), Information Retrieval: Searching in the 21st Century. (pp. 255-271). John Wiley & Sons Inc.
MacFarlane, A. & Tuson, A. (2009). Local search: A guide for the information retrieval practitioner. Information Processing and Management, 45(1), pp. 159-174. doi: 10.1016/j.ipm.2008.09.002
Madle, G. (2009). Impact-ED : A new model of digital library impact evaluation. (Unpublished Doctoral thesis, City University London)
Maiorano, F. (2009). Regulation and Performance: Evidence from the Telecommunications Industry. (Unpublished Doctoral thesis, City University London)
Mamdouhi, H., Khatun, S. & Zarrin, J. (2009). Bluetooth wireless monitoring, managing and control for inter vehicle in vehicular Ad-Hoc networks. Journal of Computer Science, 5(12), pp. 922-929. doi: 10.3844/jcssp.2009.922.929
Marche, T. A., Howe, M. L., Lane, D. G. view all authors, Owre, K. P. & Briere, J. L. (2009). Invariance of cognitive triage in the development of recall in adulthood. Memory, 17(5), pp. 518-527. doi: 10.1080/09658210902939355
Marchi, A. (2009). Internal flow and spray characteristics of an outwards opening pintle-type gasoline-injector. (Unpublished Doctoral thesis, City University London)
Marien, P., Verhoeven, J., Wackenier, P. view all authors, Engelborghs, S. & De Deyn, P. P. (2009). Foreign accent syndrome as a developmental motor speech disorder. Cortex, 45(7), pp. 870-878. doi: 10.1016/j.cortex.2008.10.010
Marshall, J. (2009). Framing ideas in aphasia: the need for thinking therapy. International Journal of Language & Communication Disorders, 44(1), pp. 1-14. doi: 10.1080/13682820802683507
Masri, M. (2009). Book Review of From Coexistence to Conquest: International Law and the Origins of the Arab-Israeli Conflict, 1891-1949, by Victor Kattan. Palestine Yearbook of International Law, 15, pp. 455-458.
Mathieson, L., Hirani, S. P., Epstein, R. view all authors, Baken, R. J., Wood, G. & Rubin, J. S. (2009). Laryngeal manual therapy: a preliminary study to examine its treatment effects in the management of muscle tension dysphonia. The Journal of Voice, 23(3), pp. 353-366. doi: 10.1016/j.jvoice.2007.10.002
Matos, C. (2009). Comparing Media Systems: the role of the public media in the digital age. The Global Studies Journal, 2(3),
Mayhew, L. (2009). Increasing longevity and the economic value of healthy ageing and working longer. UK: Pensions Institute.
Mayhew, L. (2009). On the effectiveness of care co-ordination services aimed at preventing hospital admissions and emergency attendances. Health Care Management Science, 12(3), pp. 269-284. doi: 10.1007/s10729-008-9092-5
Mayhew, L. (2009). The market potential for privately financed long term care products in the UK (Actuarial Research Paper No. 188). London, UK: Faculty of Actuarial Science & Insurance, City University London. doi: Actuarial Research Paper No. 188
Mayhew, L., Richardson, J. & Rickayzen, B. D. (2009). A study into the detrimental effects of obesity on life in the UK. Institute and Faculty of Actuaries.
Mayhew, L. & Smith, D. (2009). Whither human survival and longevity or the shape of things to come (Actuarial Research Paper No. 189). London, UK: Faculty of Actuarial Science & Insurance, City University London. doi: Actuarial Research Paper No. 189
McCusker, J. P., Phillips, J. A., Gonzalez-Beltran, A. N. view all authors, Finkelstein, A. ORCID: 0000-0003-2167-9844 & Krauthammer, M. (2009). Semantic web data warehousing for caGrid. BMC Bioinformatics, 10(10), S2.. doi: 10.1186/1471-2105-10-S10-S2
McManus, S. ORCID: 0000-0003-2711-0819, Meltzer, H., Brugha, T. view all authors, Bebbington, P. E. & Jenkins, R. (2009). Adult psychiatric morbidity in England: results of a household survey. Leeds, UK: Health and Social Care Information Centre.
McNamara, A. M., Goodey, R.J. & Taylor, R.N. (2009). Apparatus for centrifuge modelling of top down basement construction with heave reducing piles. International Journal of Physical Modelling in Geotechnics, 9(1), pp. 1-14. doi: 10.1680/ijpmg.2009.9.1.01
Melmoth, D. R., Finlay, A. L., Morgan, M. J. view all authors & Grant, S. (2009). Grasping Deficits and Adaptations in Adults with Stereo Vision Losses. Investigative Visual Science & Opthalmology, 50(8), pp. 3711-3720. doi: 10.1167/iovs.08-3229
Melmoth, D. R., Tibber, M.S., Grant, S. view all authors & Morgan, M. J. (2009). The Poggendorff illusion affects manual pointing as well as perceptual judgements. Neuropsychologia, 47(14), pp. 3217-3224. doi: 10.1016/j.neuropsychologia.2009.07.024
Mena, S., de Leede, M., Baumann, D. view all authors, Black, N., Lindeman, S. & McShane, L. (2009). Advancing the business and human rights agenda: Dialogue, empowerment, and constructive engagement. Journal of Business Ethics, 93(1), pp. 161-188. doi: 10.1007/s10551-009-0188-8
Mera, M. (2009). An Interview with Canadian-Armenian Filmmaker Atom Egoyan. Ethnomusicology Forum, 18(1), pp. 73-82. doi: 10.1080/17411910902790416
Mera, M. (2009). Invention/Re-invention. Music, Sound and the Moving Image, 3(1), pp. 1-20. doi: 10.3828/msmi.3.1.1
Mera, M. (2009). Reinventing Question Time. In: Scott, D. B. (Ed.), The Ashgate research companion to popular musicology. (pp. 59-83). Aldershot: Ashgate Pub Co.
Mera, M. & Winters, B. (2009). Film and Television Music Sources in the UK and Ireland. Brio: Journal of the International Association of Music Libraries, Archives and Documentation Centres., 46(2), pp. 37-65.
Mesiti, M., Jimenez-Ruiz, E. ORCID: 0000-0002-9083-4599, Sanz, I. view all authors, Berlanga-Llavori, R., Perlasca, P., Valentini, G. & Manset, D. (2009). XML-based approaches for the integration of heterogeneous bio-molecular data. BMC BIOINFORMATICS, 10, doi: 10.1186/1471-2105-10-S12-S7
Mesnard, A. & Seabright, P. (2009). Escaping infectious diseases through migration? Quarantine measures under incomplete information about infection risk. Journal of Public Economics, 93(7/8), pp. 931-938. doi: 10.1016/j.jpubeco.2009.05.001
Miranda, M. D. M., Nielsen, J. P. & Sperlich, S. (2009). One Sided Crossvalidation for Density Estimation. In: Gregoriou, G.N. (Ed.), Operational Risk Towards Basel III: Best Practices and Issues in Modeling, Management and Regulation. (pp. 177-196). New Jersey: John Wiley and Sons.
Mitchell, A., Farrand, P., James, H. view all authors, Luke, R., Purtell, R. & Wyatt, K. (2009). Patients' experience of transition onto haemodialysis: a qualitative study.. Journal of Renal Care, 35(2), pp. 99-107. doi: 10.1111/j.1755-6686.2009.00094.x
Mitchell, V-W., Balabanis, G., Schlegelmilch, B. B. view all authors & Cornwell, T. B. (2009). Measuring Unethical Consumer Behavior Across Four Countries. Journal of Business Ethics, 88(2), pp. 395-412. doi: 10.1007/s10551-008-9971-1
Mitroglou, N., Nouri, J. M. & Arcoumanis, C. (2009). Spray structure from double fuel injection in multihole injectors for gasoline direct-injection engines. Atomization and Sprays, 19(6), pp. 529-545. doi: 10.1615/AtomizSpr.v19.i6.30
Mondragon, E. ORCID: 0000-0003-4180-1261, Murphy, R. A. & Murphy, V. A. (2009). Rats do learn XYX rules. Animal Behaviour, 78(4), e3-e4. doi: 10.1016/j.anbehav.2009.07.013
Montana, R. (2009). Paradigms of judicial supervision and co-ordination between police and prosecutors: the Italian case in a comparative perspective. European Journal of Crime, Criminal Law and Criminal Justice, 17(4), pp. 309-333. doi: 10.1163/157181709X470974
Montana, R. (2009). Prosecutors and the definition of the crime problem in Italy: balancing the impact of moral panics. Criminal Law Forum, 20(4), pp. 471-494. doi: 10.1007/s10609-009-9108-y
Montana, R. (2009). Pubblico ministero e pratiche di selezione del crimine. Cultura giuridica e rappresentazioni di senso comune: perché si può ancora sperare. Antigone. Quadrimestrale di critica al sistema penale e penitenziario, 4(2-3), pp. 274-298.
Motshegwa, T. (2009). Distributed Termination Detection For Multiagent Protocols. (Unpublished Doctoral thesis, City, University of London)
Motson, N. (2009). Essays on hedge fund risk, return and incentives. (Unpublished Doctoral thesis, City University London)
Mulligan, K., Etheridge, A., Kassoumeri, L. view all authors, Wedderburn, L. R. & Newman, S. P. (2009). Do Mothers and Fathers Hold Similar Views About Their Child's Arthritis?. Arthritis Care and Research, 61(12), pp. 1712-1718. doi: 10.1002/art.25008
Munira, S. (2009). Momentum return: is it a compensation for risk?. (Unpublished Doctoral thesis, City, University of London)
Murphy, R., Mondragon, E. ORCID: 0000-0003-4180-1261 & Murphy, V. A. (2009). Covariation, Structure and Generalization: Building Blocks of Causal Cognition. International Journal of Comparative Psychology, 22(1), pp. 61-74.
Myerson, J. (2009). Invasion BBC Radio 4.
Myerson, J. (2009). Number 10 (Series 3) BBC Radio 4.
Neal, S. & McLaughlin, E. (2009). Researching up: interviews, emotionality and policy making elites. Journal of Social Policy, 38(4), pp. 689-707. doi: 10.1017/S0047279409990018
Nguyen, T.H., Lin, Y. C., Chen, C. T. view all authors, Surre, F., Venugopalan, T., Sun, T. & Grattan, K. T. V. (2009). Fibre optic chloride sensor based on fluorescence quenching of an acridinium dye. Proceedings of SPIE, 7503, doi: 10.1117/12.835607
Nicholls, Kate (2009). Researching relationships: Unpacking the discursive organisation of infidelity and monogamy in personal relationships. (Unpublished Doctoral thesis, City University London)
Nightingale, P, Murray, G, Cowling, M. view all authors, Baden-Fuller, C., Mason, C, Siepel, J, Hopkins, M & Dannreuther, C (2009). From funding gaps to thin markets.UK Government support for early-stage venture capital. NESTA.
Nikolopoulos, D. S. & Pothos, E. M. (2009). Dyslexic participants show intact spontaneous categorization processes. Dyslexia, 15(3), pp. 167-186. doi: 10.1002/dys.375
Nikolopoulos, N., Theodorakakos, A. & Bergeles, G. (2009). Off-centre binary collision of droplets: A numerical investigation. INTERNATIONAL JOURNAL OF HEAT AND MASS TRANSFER, 52(19-20), pp. 4160-4174. doi: 10.1016/j.ijheatmasstransfer.2009.04.011
Noble, H. (2009). Opting not to dialyse: A practitioner research study to explore patient experience. (Unpublished Doctoral thesis, City University London)
O'Leary, C.I (2009). The correction of borderline refractive and heterophoric anomalies. (Unpublished Doctoral thesis, City, University of London)
Olmo, J. (2009). Extreme Value Theory Filtering Techniques for Outlier Detection (09/09). London, UK: Department of Economics, City University London. doi: 09/09
Olmo, J., Pilbeam, K. & Pouliot, W. (2009). Detecting the Presence of Informed Price Trading Via Structural Break Tests (09/10). London, UK: Department of Economics, City University London. doi: 09/10
Ostroff, N (2009). The Influence of Weimar Culture on Pop Music in the 1970s and '80s. (Unpublished Doctoral thesis, City, University of London)
Ous, T. & Arcoumanis, C. (2009). The formation of water droplets in an air-breathing PEMFC. International Journal of Hydrogen Energy, 34(8), pp. 3476-3487. doi: 10.1016/j.ijhydene.2009.02.037
Owen, T. & Meyer, J. ORCID: 0000-0001-5378-2761 (2009). Minimising the Use of 'Restraint' in Care Homes: Challenges, Dilemmas and Positive approaches. London, UK: Social Care Institute for Excellence.
Pace, I. (2009). Coldness and Cruelty as Performance in Deleuze's Proust. In: Bryden, M. & Topping, M. (Eds.), Beckett's Proust/Deleuze's Proust. (pp. 183-198). Palgrave Macmillan.
Pace, I. (2009). Notation, Time and the Performer's Relationship to the Score in Contemporary Music. In: Crispin, D. (Ed.), Unfolding Time. (pp. 151-192). Leuven University Press.
Pace, I. (2009). Performance as Analysis, Analysis as Performance. Paper presented at the From Analysis to Music, 27-05-2009, Orpheus Institute, Ghent, Belgium.
Pace, I. (2009). Verbal Discourse as Aesthetic Arbitrator in Contemporary Music. In: Heile, B. (Ed.), The Modernist Legacy. (pp. 81-99). Farnham: Ashgate Publishing, Ltd..
Pagliari, S. & Helleiner, E. (2009). Crisis and the Reform of International Financial Regulation. In: Helleiner, E, Pagliari, S & Zimmerman, H (Eds.), Global Finance in Crisis: The Politics of International Financial Regulation. . Taylor & Francis.
Paine, J. (2009). Heroin Addiction and Longing to Belong. (Unpublished Doctoral thesis, City University London)
Papakonstantinou, S. & Brujic-Okretic, V. (2009). Framework for context-aware smartphone applications. The Visual Computer, 25(12), pp. 1121-1132. doi: 10.1007/s00371-009-0391-8
Papakonstantinou, S. & Brujic-Okretic, V. (2009). Prototyping a Context-Aware Framework for Pervasive Entertainment Applications. Paper presented at the Games and Virtual Worlds for Serious Applications.
Papanikolaou, V.K. & Kappos, A. J. (2009). Numerical study of confinement effectiveness in solid and hollow reinforced concrete bridge piers: Methodology. Computers & Structures, 87(21-22), pp. 1427-1439. doi: 10.1016/j.compstruc.2009.05.004
Parfitt, Y. & Ayers, S. (2009). The effect of postnatal symptoms of post-traumatic stress and depression on the couple's relationship and parent-baby bond. Journal of Reproductive and Infant Psychology, 27(2), pp. 127-142. doi: 10.1080/02646830802350831
Parker, P. M. (2009). What should we assess in practice?. Journal Of Nursing Management, 17(5), pp. 559-569. doi: 10.1111/j.1365-2834.2009.01025.x
Parmar, D. (2009). Community-based health insurance: improving household economic indicators?. Paper presented at the 2nd Scientific Meeting Centre de Recherche en Sante de Nouna, 03-12-2009 - 05-12-2009, Nouna, Burkina Faso.
Parmar, I. (2009). Foreign policy fusion: Liberal interventionists, conservative nationalists and neoconservatives - The new alliance dominating the US foreign policy establishment. International Politics, 46(2-3), doi: 10.1057/ip.2008.47
Patterson, F., Carr, V., Zibarras, L. D. view all authors, Burr, B., Berkin, L., Plint, S., Irish, B. & Gregory, S. (2009). New machine-marked tests for selection into core medical training: evidence from two validation studies. Clinical Medicine, 9(5), pp. 417-420.
Paul, M., Hennig-Thurau, T., Gremler, D. D. view all authors, Gwinner, K. P. & Wiertz, C. (2009). Toward a theory of repeat purchase drivers for consumer services. Journal of the Academy of Marketing Science, 37(2), pp. 215-237. doi: 10.1007/s11747-008-0118-9
Paulson, Susan Mary (2009). An Exploration of How Various 'Cultures of Dance' Construct Experiences of Health and Growing Older. (Unpublished Doctoral thesis, City University London)
Pearson, J.A. (2009). Portfolio of doctorate in health psychology. (Unpublished Doctoral thesis, City University London)
Perkins, A. M., Ettinger, U., Davis, R. view all authors, Foster, R., Williams, S. C. R. & Corr, P. J. (2009). Effects of lorazepam and citalopram on human defensive reactions: Ethopharmacological differentiation of fear and anxiety. Journal of Neuroscience, 29(40), pp. 12617-12624. doi: 10.1523/JNEUROSCI.2696-09.2009
Petrakova, N., Gudmundsdotter, L., Yermalovich, M. view all authors, Belikov, S., Eriksson, L. E., Pyakurel, P., Johansson, O., Biberfeld, P., Andersson, S. & Isaguliants, M. (2009). Autoimmunogenicity of the helix-loop-helix DNA-binding domain. Molecular Immunology, 46(7), pp. 1467-1480. doi: 10.1016/j.molimm.2008.12.013
Phillips, J. P., George, K., Kyriacou, P. A. view all authors & Langford, R. M. (2009). Investigation of Photoplethysmographic Changes using a Static Compression Model of Spinal Cord Injury. Paper presented at the EMBC 2009. Annual International Conference of the IEEE, 3-6 Sept. 2009, Minneapolis, MN. doi: 10.1109/IEMBS.2009.5334166
Phillips, J. P., Langford, R. M., Chang, S. H. view all authors, Maney, K., Kyriacou, P. A. & Jones, D. P. (2009). Evaluation of a Fiber-optic Esophageal Pulse Oximeter. Paper presented at the EMBC 2009. Annual International Conference of the IEEE, 3-6 Sept. 2009, Minneapolis, MN.
Phillips, J. P., Langford, R. M., Chang, S. H. view all authors, Maney, K., Kyriacou, P. A. & Jones, D. P. (2009). Measurements of Cerebral Arterial Oxygen Saturation using a Fiber-optic Pulse Oximeter. Paper presented at the EMBC 2009. Annual International Conference of the IEEE, 3-6 Sept. 2009, Minneapolis, MN. doi: 10.1109/IEMBS.2009.5334604
Phillips, J. P., Langford, R. M., Chang, S. H. view all authors, Maney, K., Kyriacou, P. A. & Jones, D. P. (2009). An oesophageal pulse oximetry system utilising a fibre-optic probe. Journal of Physics: Conference Series, 178(1), 012021. doi: 10.1088/1742-6596/178/1/012021
Phylaktis, K. & Xia, L. (2009). Equity Market Comovement and Contagion: A Sectoral Perspective. Financial Management, 38(2), pp. 381-409. doi: 10.1111/j.1755-053X.2009.01040.x
Pilling, D., Timmons, J. C., Johnson, R. view all authors & Boeltzig, H. (2009). US and UK Routes to Employment: Strategies to Improve Integrated Service Delivery to People with Disabilities. Washington DC, US: The IBM Center for the Business of Government.
Pitsakis, K. (2009). The diffusion of university spinoffs: Institutional and ecological perspectives. (Unpublished Doctoral thesis, City University London)
Plagnol, A., Rowley, E., Martin, P. view all authors & Livesey, F. (2009). Industry perceptions of barriers to commercialization of regenerative medicine products in the UK. Regenerative Medicine, 4(4), pp. 549-559. doi: 10.2217/RME.09.21
Poon, H. F. I. (2009). Human resource management changes in China: a case study of the banking industry. (Unpublished Doctoral thesis, City University London)
Pothos, E. M. & Bailey, T. M. (2009). Predicting Category Intuitiveness With the Rational Model, the Simplicity Model, and the Generalized Context Model. Journal of Experimental Psychology: Learning Memory and Cognition, 35(4), pp. 1062-1080. doi: 10.1037/a0015903
Pothos, E. M. & Busemeyer, J. R. (2009). A quantum probability explanation for violations of "rational" decision theory. Proceedings of the Royal Society B: Biological Sciences, 276(1665), pp. 2171-2178. doi: 10.1098/rspb.2009.0121
Pothos, E. M., Calitri, R., Tapper, K. view all authors, Brunstrom, J. M. & Rogers, P. J. (2009). Comparing measures of cognitive bias relating to eating behaviour. Applied Cognitive Psychology, 23(7), pp. 936-952. doi: 10.1002/acp.1506
Pothos, E. M., Hahn, U. & Prat-Sala, M. (2009). Similarity chains in the transformational paradigm. European Journal of Cognitive Psychology, 21(7), pp. 1100-1120. doi: 10.1080/09541440802485339
Pothos, E. M., Tapper, K. & Calitri, R. (2009). Cognitive and behavioral correlates of BMI among male and female undergraduate students. Appetite, 52(3), pp. 797-800. doi: 10.1016/j.appet.2009.03.002
Pothos, E. M. & Wood, R. L. (2009). Separate influences in learning: Evidence from artificial grammar learning with traumatic brain injury patients. Brain Research, 1275, pp. 67-72. doi: 10.1016/j.brainres.2009.04.019
Pratt, A.C. (2009). Social and economic drivers of land use change in the British space economy. Land Use Policy, 26(S1), S109-S114. doi: 10.1016/j.landusepol.2009.09.006
Pratt, A.C. (2009). Urban regeneration: from the arts 'feel good' factor to the cultural economy. A case study of Hoxton, London. Urban Studies, 46(5-8), pp. 1041-1061. doi: 10.1177/0042098009103854
Pratt, A.C. (2009). The creative and cultural economy and the recession. Geoforum, 40(4), pp. 495-496. doi: 10.1016/j.geoforum.2009.05.002
Presseau, J., Sniehotta, F. F. & Francis, J. (2009). Multiple goals and time constraints: perceived impact on physicians' performance of evidence-based behaviours. Implementation Science, 4, 77 - ?. doi: 10.1186/1748-5908-4-77
Procter, S., Bickerton, J., Allan, T. view all authors, Davies, H., Abbott, S., Apau, D., Dewan, V., Frazer, A., Lynch, A., Wych, G., Nijjar, A. & Davies, J. (2009). Streaming Emergency Department Patients to Primary Care Services: Developing a Consensus in North East London (9781900804391). London: City University, London. doi: 9781900804391
Rahman, B. M., Kejalakshmy, N., Uthman, M. view all authors, Agrawal, A., Wongcharoen, T. & Grattan, K. T. V. (2009). Mode degeneration in bent photonic crystal fiber study by using the finite element method. Applied Optics, 48(31), G131 - G138. doi: 10.1364/AO.48.00G131
Raine, R., Cartwright, M., Richens, Y. view all authors, Muhamed, Z. & Smith, D. (2009). A qualitative study of women's experiences of communication in antenatal care: Identifying areas for action. Maternal and Child Health Journal, 14(4), pp. 590-599. doi: 10.1007/s10995-009-0489-7
Rajarajan, M., Spackova, B., Piliarik, M. view all authors, Kvasnicka, P., Themistos, C. & Homola, J. (2009). Novel concept of multi-channel fiber optic surface plasmon resonance sensor. Sensors and Actuators B: Chemical, 139(5), pp. 199-203. doi: 10.1016/j.snb.2008.12.020
Randell, R., Mamykina, L., Fitzpatrick, G. view all authors, Tanggaard, C. & Wilson, S. (2009). Evaluating New Interactions in Healthcare: Challenges and Approaches. Paper presented at the CHI2009 Conference on Human Factors in Computing Systems, 03-04-2009 - 09-04-2009, Boston, MA, USA.
Rasulo, D., Mayhew, L. & Rickayzen, B. D. (2009). The decomposition of disease and disability life expectancies in England 1992-2004 (Actuarial Research Paper No. 191). London, UK: Faculty of Actuarial Science & Insurance, City University London. doi: Actuarial Research Paper No. 191
Rauscher, F. G. (2009). Central and Peripheral Visual Function: Effects of Age and Disease. (Unpublished Doctoral thesis, City University London)
Redding, E. (2009). Testing and Training for Physical Fitness in Contemporary Dance: Investigations. (Unpublished Doctoral thesis, City University London)
Reimers, S. (2009). A paycheck half-empty or half-full? Framing, fairness and progressive taxation. Judgment and Decision Making, 4(6), pp. 461-466.
Reyes-Aldasoro, C. C. (2009). Retrospective shading correction algorithm based on signal envelope estimation. Electronics Letters, 45(9), pp. 454-456. doi: 10.1049/el.2009.0320
Reyes-Aldasoro, C. C., Zhao, Y., Coca, D. view all authors, Billings, S. A., Kadirkamanathan, V., Tozer, G. M. & Renshaw, S. A. (2009). Analysis of immune cell function using in vivo cell shape analysis and tracking. Paper presented at the 4th IAPR International Conference on Pattern Recognition in Bioinformatics, 07-09-2009 - 09-09-2009, Sheffield, UK.
Rich, A. (2009). Health Psychology: applied Health Psychology within Health Promotion. (Unpublished Doctoral thesis, City University London)
Ringsnose, J. & Schouenborg, L. ORCID: 0000-0002-2660-3403 (2009). Norden, Europa eller USA? De udenrigspolitiske overvejelser i forbindelse med købet af de danske F-16-fly. Internasjonal Politikk, 67(4), pp. 585-609.
Rubesam, A. (2009). ESSAYS ON EMPIRICAL ASSET PRICING USING BAYESIAN METHODS. (Unpublished Doctoral thesis, City University London)
Ruiz Garcia, V., Burls, A., Cabello Lopez, J. C. L. view all authors, Fry-Smith, A., Munoz, J. G. G., Jobanputra, P. & Saiz Cuenca, E. S. C. (2009). Certolizumab pegol (CDP870) for rheumatoid arthritis in adults. Cochrane Database of Systematic Reviews(1), CD007649. doi: 10.1002/14651858.CD007649.pub2
Russi, L. (2009). Substance or Mere Technique? A Precis on Good Faith Performance in England, France and Germany. Hanse Law Review, 5(1), pp. 21-30.
Russi, L. & Longobardi, F. (2009). A tiny heart beating: Student-edited legal periodicals in good ol' Europe. German Law Journal, 10(7), pp. 1127-1148.
Rybynok, V., Kyriacou, P. A., Binnersley, J. view all authors, Woodcock, A. & Wallace, L. M. (2009). My Care Card development: the patient held electronic health record device. Paper presented at the 2009 9th International Conference on Information Technology and Applications in Biomedicine, 4-7 Nov 2009, Larnaca, Cyprus.
Sambo, C.F. (2009). Crossmodal spatial representations: behavioural and electrophysiological evidence on the effects of vision and posture on somatosensory processing in normal population and in right-brain-damaged patients. (Unpublished Doctoral thesis, City University London)
Sambo, C.F. & Forster, B. (2009). An ERP Investigation on Visuotactile Interactions in Peripersonal and Extrapersonal Space: Evidence for the Spatial Rule. JOURNAL OF COGNITIVE NEUROSCIENCE, 21(8), pp. 1550-1559. doi: 10.1162/jocn.2009.21109
Sambo, C.F., Gillmeister, H. & Forster, B. (2009). Viewing the body modulates neural mechanisms underlying sustained spatial attention in touch. EUROPEAN JOURNAL OF NEUROSCIENCE, 30(1), pp. 143-150. doi: 10.1111/j.1460-9568.2009.06791.x
Sandoval, M. (2009). A critical contribution to the foundations of alternative media studies. Kurgu: Online International Journal of Communication Studies, 1,
Sarno, L., Della Corte, P. & Tsiakas, I. (2009). An Economic Evaluation of Empirical Exchange Rate Models. Review of Financial Studies, 22(9), pp. 3491-3530. doi: 10.1093/rfs/hhn058
Sauleau, P., Eusebio, A., Thevathasan, W. view all authors, Yarrow, K., Pogosyan, A., Zrinzo, L., Ashkan, K., Aziz, T., Vandenberghe, W., Nuttin, B. & Brown, P. (2009). Involvement of the subthalamic nucleus in engagement with behaviourally relevant stimuli. European Journal Of Neuroscience, 29(5), pp. 931-942. doi: 10.1111/j.1460-9568.2009.06635.x
Sawyer, A. & Ayers, S. (2009). Post-traumatic growth in women after childbirth. Psychology and Health, 24(4), pp. 457-471. doi: 10.1080/08870440701864520
Scarbrough, H. & Amaeshi, K. (2009). Knowledge Governance for Open Innovation: Evidence from an EU R&D Collaboration. In: Knowledge Governance: Processes and Perspectives. (pp. 220-246). Oxford University Press. doi: 10.1093/acprof:oso/9780199235926.003.0009
Scarbrough, H. & Swan, J. (2009). Project Work as a Locus of Learning: The Journey Through Practice. In: Community, Economic Creativity, and Organization. (pp. 148-177). UK: Oxford University Press. doi: 10.1093/acprof:oso/9780199545490.003.0007
Schira, M. M., Tyler, C. W., Breakspear, M. view all authors & Spehar, B. (2009). The Foveal Confluence in Human Visual Cortex. Journal of Neuroscience, 29(28), pp. 9050-9058. doi: 10.1523/JNEUROSCI.1760-09.2009
Schmeling, M., Melvin, M. M. & Menkhoff, L. (2009). Exchange Rate Management in Emerging Markets: Intervention via an Electronic Limit Order Book. Journal of International Economics, 79(1), pp. 54-63. doi: 10.1016/j.jinteco.2009.06.008
Schroth, E. & Albuquerque, R. (2009). Quantifying Private Benefits of Control from a Structural Model of Block Trades (202/2008). ECGI. doi: 202/2008
Schulz, A., De Martino, A., Ingenhoven, P. view all authors & Egger, R. (2009). Low-energy theory and RKKY interaction for interacting quantum wires with Rashba spin-orbit coupling. Physical Review B (PRB), 79(20), doi: 10.1103/PhysRevB.79.205432
Scott, J., Nolan, J. & Plagnol, A. (2009). Panel data and open-ended questions: Understanding perceptions of quality of life. Twenty-First Century Society: Journal of the Academy of Social Sciences, 4(2), pp. 123-135. doi: 10.1080/17450140902988891
Seward, L (2009). The Effect of Continuous Flight Auger Pile Installation on the Soil-Pile Interface in the Mercia Mudstone Group. (Unpublished Doctoral thesis, City, University of London)
Seyff, N., Maiden, N., Karlsen, K. view all authors, Lockerbie, J., Gruenbacher, P., Graf, F. & Ncube, C. (2009). Exploring how to use scenarios to discover requirements. Requirements Engineering, 14(2), pp. 91-111. doi: 10.1007/s00766-009-0077-9
Shafique, M., Phillips, J. P. & Kyriacou, P. A. (2009). Design and development of a new non-invasive trans-reflectance photoplethysmographic probe. Paper presented at the Annual National Conference of the Institute of Physics and Engineering in Medicine (IPEM 2009), 14-16 Sep 2009, Liverpool, UK.
Shafique, M., Phillips, J. P. & Kyriacou, P. A. (2009). A novel non-invasive trans-reflectance photoplethysmographic probe for use in cases of low peripheral blood perfusion. Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Conference, doi: 10.1109/IEMBS.2009.5334165
Shafqat, K., Pal, S., Kumari, S. view all authors & Kyriacou, P. A. (2009). Time-Frequency analysis of HRV data from locally anesthetized patients. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2009. EMBC 2009, pp. 1824-1827. doi: 10.1109/IEMBS.2009.5332604
Shafqat, K., Pal, S. K., Kumari, S. view all authors & Kyriacou, P. A. (2009). Empirical Mode Decomposition (EMD) analysis of HRV data from locally anesthetized patients. Paper presented at the EMBC 2009. Annual International Conference of the IEEE, 3-6 Sept. 2009, Minneapolis, MN. doi: 10.1109/IEMBS.2009.5335000
Shah, R., Edgar, D. F, Harle, D. E. view all authors, Weddell, L., Austen, D. P., Burghardt, D. & Evans, B. J. W. (2009). The content of optometric eye examinations for a presbyopic patient presenting with symptoms of flashing lights. Ophthalmic And Physiological Optics, 29(2), pp. 105-126. doi: 10.1111/j.1475-1313.2008.00613.x
Shah, R., Edgar, D. F, Rabbetts, R. view all authors, Harle, D. E. & Evans, B. J. W. (2009). Standardized Patient Methodology to Assess Refractive Error Reproducibility. Optometry and Vision Science, 86(5), pp. 517-528. doi: 10.1097/OPX.0b013e31819fa590
Shah, R., Edgar, D. F, Spry, P. G. view all authors, Harper, R. A., Kotecha, A., Rughani, S. & Evans, B. J. W. (2009). Glaucoma detection: the content of optometric eye examinations for a presbyopic patient of African racial descent. British Journal of Ophthalmology, 93(4), pp. 492-496. doi: 10.1136/bjo.2008.145623
Shah, Rakhee (2009). An Evidence-Based Investigation of the Content of Optometric Eye Examinations in the UK. (Unpublished Doctoral thesis, City University London)
Sigurjónsson, N. (2009). Variations on the act of listening: Twenty-one orchestra audience development events in light of John Dewwy's 'art as experience' metaphor. (Unpublished Doctoral thesis, City University London)
Silvers, L. J., Bushby, P. J. & Proctor, M. R. E. (2009). Interactions between magnetohydrodynamic shear instabilities and convective flows in the solar interior. Monthly Notices Of The Royal Astronomical Society, 400(1), pp. 337-345. doi: 10.1111/j.1365-2966.2009.15455.x
Silvers, L. J., Vasil, G. M., Brummell, N. H. view all authors & Proctor, M. R. E. (2009). Double-diffusive instabilities of a shear-generated magnetic layer. The Astrophysical Journal Letters, 702(1), doi: 10.1088/0004-637X/702/1/L14
Silvestri, S. (2009). Unveiled Issues: Reflections from a Comparative Pilot Study on Europe's Muslim Women (CUTP/005). London, UK: Department of International Politics, City University London, ISSN 2052-1898. doi: CUTP/005
Simpson, A. & Brennan, G. (2009). Working in Partnership. In: Callaghan, P., Playle, P. & Cooper, L. (Eds.), Mental Health Nursing Skills. (pp. 74-84). UK: Oxford University Press.
Singer, J. (2009). Convergence and divergence. Journalism, 10(3), pp. 375-377. doi: 10.1177/1464884909102579
Singer, J. (2009). Ethnography. Journalism and Mass Communication Quarterly, 86(1), pp. 191-198. doi: 10.1177/107769900908600112
Singer, J. (2009). Implications of Technological Change for Journalists' Tasks and Skills. Journal of Media Business Studies, 6(1), pp. 61-85.
Singer, J. (2009). Journalism in the Network. In: The Routledge Companion to News and Journalism Studies. (pp. 277-286). New York: Routledge.
Singer, J. (2009). Role call: 2008 Campaign and election coverage on the web sites of leading U.S. newspapers. Journalism and Mass Communication Quarterly, 86(4), pp. 827-843. doi: 10.1177/107769900908600407
Singer, J. (2009). Separate spaces: Discourse about the 2007 Scottish elections on a national newspaper Web site. International Journal of Press/Politics, 14(4), pp. 477-496. doi: 10.1177/1940161209336659
Singer, J. & Ashman, I. (2009). 'Comment Is Free, but Facts Are Sacred': User-generated Content and Ethical Constructs at the Guardian. Journal of Mass Media Ethics, 24(1), pp. 3-21. doi: 10.1080/08900520802644345
Singer, J. & Ashman, I. (2009). User-Generated Content and Journalistic Values. In: Allan, S & Thorsen, E. (Eds.), Citizen Journalism: Global Perspectives. Global Crises and the Media (1). (pp. 233-242). New York, USA: Peter Lang.
Singer, J. & Quandt, T. (2009). Convergence and Cross-Platform Content Production. In: Wahl-Jorgensen, K. & Hanitszch,, T. (Eds.), The Handbook of Journalism Studies. (pp. 130-144). New York: Routledge.
Slabaugh, G. G., Unal, G. B., Wels, M. view all authors, Fang, T. & Rao, B. (2009). Statistical Region-Based Segmentation of Ultrasound Images. Ultrasound in Medicine and Biology, 35(5), pp. 781-795. doi: 10.1016/j.ultrasmedbio.2008.10.014
Slingsby, A., Dykes, J. & Wood, J. (2009). Configuring Hierarchical Layouts to Address Research Questions. IEEE Transactions on Visualization and Computer Graphics, 15(6), pp. 977-984. doi: 10.1109/TVCG.2009.128
Slingsby, A., Lowe, R., Dykes, J. view all authors, Stephenson, D., Wood, J. & Jupp, T. (2009). A pilot study for the collaborative development of new ways of visualising seasonal climate forecasts. Paper presented at the GIS Research UK, 17th Annual Conference, 1 - 3 Apr 2009, University of Durham, Durham, UK.
Smith, P. J. (2009). 'Contention' in multiple myeloma: the impact on life and supportive care needs. (Unpublished Doctoral thesis, City University London)
Solomon, J. A. (2009). The history of dipper functions. Attention, Perception, & Psychophysics, 71(3), pp. 435-443. doi: 10.3758/APP.71.3.435
Spanos, P. D., Giaralis, A. & Li, J. (2009). Synthesis of accelerograms compatible with the Chinese GB 50011-2001 design spectrum via harmonic wavelets: artificial and historic records. Earthquake engineering and engineering vibration, 8(2), pp. 189-206. doi: 10.1007/s11803-009-9017-4
Spanoudakis, G. & Comuzzi, M. (2009). Describing and Verifying Monitoring Capabilities for Service Based Systems. Paper presented at the CAiSE 2009 Forum, 8-12 Jun 2009, Amsterdam, The Netherlands.
Spanoudakis, G. & LoPresti, S. (2009). Web Service Trust: Towards A Dynamic Assessment Framework. Paper presented at the International Conference on Availability, Reliability and Security, 2009. ARES '09, 16 - 19 Mar 2009, Fukuoka Institute of Technology, Fukuoka, Japan. doi: 10.1109/ARES.2009.149
Sparrow, H. (2009). Nothings ever enough: the counselling psychology of compulsive buying, perfectionism and hedonic adapations. (Unpublished Doctoral thesis, City University London)
Spee, P. & Jarzabkowski, P. (2009). Strategy tools as boundary objects. Strategic Organization, 7(2), pp. 223-232. doi: 10.1177/1476127009102674
Spicer, A., Alvesson, M. & Kärreman, D. (2009). Critical Performativity: The unfinished business of critical management studies. Human Relations, 62(4), pp. 537-560. doi: 10.1177/0018726708101984
Spreeuw, J. & Karlsson, M. (2009). Time Deductibles as Screening Devices: Competitive Markets. Journal Of Risk And Insurance, 76(2), pp. 261-278. doi: 10.1111/j.1539-6975.2009.01298.x
Stankovic, V., Bessani, A. N., Daidone, A. view all authors, Gashi, I., Obelheiro, R. R. & Sousa, P. (2009). Enhancing Fault / Intrusion Tolerance through Design and Configuration Diversity. Paper presented at the 3rd Workshop on Recent Advances on Intrusion-Tolerant Systems (WRAITS 2009), Jun 2009, Estoril, Lisbon, Portugal.
Stankovic, V. & Strigini, L. (2009). A survey on online monitoring approaches of computer-based systems. London, UK: Centre for Software Reliability, City University London.
Stares, S. (2009). Using latent class models to explore cross-national typologies of Public engagement with Science and technology in Europe. Science, Technology and Society, 14(2), 289 329. doi: 10.1177/097172180901400205
Stefanski, B. (2009). Green-Schwarz action for Type IIA strings on $AdS_4\times CP^3$. Nuclear Physics B, 808(1-2), pp. 80-87. doi: 10.1016/j.nuclphysb.2008.09.015
Steggall, M.J. (2009). 'A loose nerve': Culture(s), Time and Governmentality in the biomedical treatment of Premature Ejaculation in Bangladeshi Muslim men. (Unpublished Doctoral thesis, City University London)
Stewart, C. E., Wilson, C. M. & Fielder, A. R. (2009). A 3 1/2 year old girl presenting with strabismus. British Medical Journal (BMJ), 338(b68), p. 243. doi: 10.1136/bmj.b68
Stewart, D., Bowers, L., Simpson, A. view all authors, Ryan, C. & Tziggili, M. (2009). Manual restraint of adult psychiatric inpatients: a literature review. Conflict and Containment Reduction Research Programme.
Stewart, D., Bowers, L., Simpson, A. view all authors, Ryan, C. & Tziggili, M. (2009). Manual restraint of adult psychiatric inpatients: a literature review. JOURNAL OF PSYCHIATRIC AND MENTAL HEALTH NURSING, 16(8), pp. 749-757. doi: 10.1111/j.1365-2850.2009.01475.x
Stosic, N., Smith, I. K. & Kovacevic, A. (2009). Steam as the Working Fluid for Power Recovery from Exhaust Gases by Means of Screw Expanders. Paper presented at the International Conference on Compressors and Their Systems, 07-09-2009 - 09-09-2009, London, England.
Stuart, Alison (2009). The inner world of dance: An exploration into the psychological support needs of professional dancers. (Unpublished Doctoral thesis, City University London)
Stumpf, S., Rajaram, V., Li, L. view all authors, Wong, W-K., Burnett, M., Dietterich, T. G., Sullivan, E. & Herlocker, J. (2009). Interacting meaningfully with machine learning systems: Three experiments. International Journal of Human Computer Studies, 67(8), pp. 639-662. doi: 10.1016/j.ijhcs.2009.03.004
Stychin, C. (2009). Closet Cases: 'Conscientious Objection' to Lesbian and Gay Legal Equality. Griffith Law Review, 18, pp. 17-40.
Stychin, C. (2009). Faith in the future: Sexuality, religion and the public sphere. Oxford Journal of Legal Studies, 29(4), pp. 729-755. doi: 10.1093/ojls/gqp016
Suarez-Tangil, G., Palomar, E., De Fuentes, J. M. view all authors, Blasco, J. & Ribagorda, A. (2009). Automatic rule generation based on genetic programming for event correlation. Advances in Intelligent and Soft Computing, 63, pp. 127-134. doi: 10.1007/978-3-642-04091-7_16
Susen, S. (2009). Between Emancipation and Domination: Habermasian Reflections on the Empowerment and Disempowerment of the Human Subject. Pli: The Warwick Journal of Philosophy, 20, pp. 80-110.
Susen, S. (2009). The Philosophical Significance of Binary Categories in Habermas's Discourse Ethics. Sociological Analysis, 3(2), pp. 97-125.
Tapper, K., Shaw, C., Ilsley, J. view all authors, Hill, A. J., Bond, F. W. & Moore, L. (2009). Exploratory randomised controlled trial of a mindfulness-based weight loss intervention for women. Appetite, 52(2), pp. 396-404. doi: 10.1016/j.appet.2008.11.012
Taylor, Abigail (2009). Integrating the mind and the body: examining the role of counselling psychology for individuals with physical health problems. (Unpublished Doctoral thesis, City University London)
Tedeschi, G., Iori, G. & Gallegati, M. (2009). The role of communication and imitation in limit order markets. European Physical Journal B (The), 71(4), pp. 489-497. doi: 10.1140/epjb/e2009-00337-6
Terjesen, S., Sealy, R. & Singh, V. (2009). Women directors on corporate boards: A review and research agenda. Corporate Governance, 17(3), pp. 320-337. doi: 10.1111/j.1467-8683.2009.00742.x
Themistos, C., Rajarajan, M., Rahman, B. M. view all authors & Grattan, K. T. V. (2009). Characterization of Silica Nanowires for Optical Sensing. Journal of Lightwave Technology, 27(24), pp. 5537-5542.
Thurman, N. & Myllylahti, M. (2009). Taking the paper out of news: A case study of Taloussanomat, Europe's first online-only newspaper. Journalism Studies, 10(5), pp. 691-708. doi: 10.1080/14616700902812959
Tibber, M.S., Anderson, E. J., Melmoth, D. view all authors, Rees, G. & Morgan, M. J. (2009). Common Cortical Loci Are Activated during Visuospatial Interpolation and Orientation Discrimination Judgements. PLoS One, 4(2), e4585. doi: 10.1371/journal.pone.0004585
Tidhar, D., Fazekas, G., Kolozali, S. view all authors & Sandler, M. (2009). Publishing Music Similarity Features on the Semantic Web.. Paper presented at the 10th International Society for Music Information Retrieval Conference, ISMIR 2009, 26 - 30 Oct 2009, Kobe, Japan.
Trapani, L. & Urga, G. (2009). Optimal forecasting with heterogeneous panels: A Monte Carlo study. International Journal of Forecasting, 25(3), pp. 567-586. doi: 10.1016/j.ijforecast.2009.02.001
Tsanakas, A. (2009). To split or not to split: capital allocation with convex risk measures. Insurance: Mathematics and Economics, 44(2), pp. 268-277. doi: 10.1016/j.insmatheco.2008.03.007
Tsavdaridis, K. D. & D'Mello, C. (2009). FE Investigation of Perforated Sections with Standard and Non-Standard Web Opening Configurations and Sizes. Paper presented at the 6th International Conference on Advances is Steel Structures, 16-12-2009 - 18-12-2009, Hong Kong, China.
Tsavdaridis, K. D., D'Mello, C. & Hawes, M. (2009). Experimental Study of Ultra Shallow Floor Beams (USFB) with Perforated Steel Sections. Paper presented at the Nordic Steel Construction Conference 2009 - NSCC2009, 02-09-2009 - 04-09-2009, Malmö, Sweden.
Tsavdaridis, K. D., D'Mello, C. & Huo, B. Y. (2009). Shear Capacity of Perforated Concrete-Steel Ultra Shallow Floor Beams (USFB). Paper presented at the 6th National Concrete Conference, TEE, ETEK, 21-10-2009 - 23-10-2009, Paphos, Cyprus.
Tselikis, C., Mitropoulos, S., Douligeris, C. view all authors, Ladis, E., Georgouleas, K., Vangelatos, C. & Komninos, N. (2009). Empirical study of clustering algorithms for wireless ad hoc networks. Paper presented at the 16th International Conference on Systems, Signals and Image Processing (IWSSIP 2009), 18 - 20 June 2009, Chalkida, Greece.
Tsigkritis, T., Spanoudakis, G., Kloukinas, C. view all authors & Lorenzoli, D. (2009). Diagnosis and Threat Detection Capabilities of the SERENITY Monitoring Framework. In: Spanoudakis, G., Gomez, A. & Kokolakis, S. (Eds.), Security and Dependability for Ambient Intelligence. Advances in Information Security, 45. (pp. 239-271). USA: Springer. doi: 10.1007/978-0-387-88775-3_14
Tyler, C. W. (2009). Straightness and the sphere of vision. PERCEPTION, 38(10), doi: 10.1068/p3810ed
Tyler (formerly Curtis), K. (2009). Levers and barriers to patient-centred care with school-age children living with long-term illness in multi-cultural settings: Type 1 diabetes as a case study. (Unpublished Doctoral thesis, City University London)
Tzavaras, A. (2009). Intelligent Decision Support Systems in Ventilation Management. (Unpublished Doctoral thesis, City University London)
Upile, T., Jerjes, W., Kafas, P. view all authors, Hirani, S. P., Singh, S. U., Guyer, M., Bentley, M., Sudhoff, H. & Hopper, C. (2009). Salivary VEGF: a non-invasive angiogenic and lymphangiogenic proxy in head and neck cancer prognostication. International Archives of Medicine, 2(1), p. 12. doi: 10.1186/1755-7682-2-12
Visser, I., Raijmakers, M. E. J. & Pothos, E. M. (2009). Individual strategies in artificial grammar learning. American Journal of Psychology, 122(3), pp. 293-307.
Vogiatzaki, K., Cleary, M. J., Kronenburg, A. view all authors & Kent, J. H. (2009). Modeling of scalar mixing in turbulent jet flames by multiple mapping conditioning. Physics of Fluids, 21(2), 025105. doi: 10.1063/1.3081553
Vogiatzaki, K., Kronenburg, A., Cleary, M. J. view all authors & Kent, J. H. (2009). Multiple mapping conditioning of turbulent jet diffusion flames. Proceedings of the Combustion Institute, 32(2), pp. 1679-1685. doi: 10.1016/j.proci.2008.06.164
van der Merwe, M., Bowers, L., Jones, J. view all authors, Simpson, A. & Haglund, K. (2009). Locked doors in acute inpatient psychiatry: a literature review. Journal of Psychiatric and Mental Health Nursing, 16(3), pp. 293-299. doi: 10.1111/j.1365-2850.2008.01378.x
Walby, S. (2009). Globalization and Inequalities: Complexity and Contested Modernities. Los Angeles: Sage.
Walby, S. (2009). The cost of domestic violence: Up-date 2009. Lancaster: Lancaster University.
Walsh, E. (2009). Predicting the Impact of Health States on Well-being: Explanations and Remedies for Biased Judgments. (Unpublished Doctoral thesis, City, University of London)
Walsh, E. & Ayton, P. (2009). My Imagination Versus Your Feelings: Can Personal Affective Forecasts Be Improved by Knowing Other Peoples' Emotions?. Journal of Experimental Psychology: Applied, 15(4), pp. 351-360. doi: 10.1037/a0017984
Walsh, G., Mitchell, V-W., Jackson, P. R. view all authors & Beatty, S. E. (2009). Examining the Antecedents and Consequences of Corporate Reputation: A Customer Perspective. British Journal of Management, 20(2), pp. 187-203. doi: 10.1111/j.1467-8551.2007.00557.x
Walsh, Y,S. (2009). A Qualitative Exploration of Cultic Experience in Relation to Mental Health Difficulties. (Unpublished Doctoral thesis, City, University of London)
Webster, F. (2009). Capitalism, Information and Democracy. Communications and Convergence Review, 1(1), pp. 15-31.
Wei, Q. (2009). A study of pay for performance in China's non-public sector knowledge-intensive industries. (Unpublished Doctoral thesis, City University London)
Wheelwright, J. (2009). Disappeared Review. The Independent,
Whited, B., Rossignac, J., Slabaugh, G. G. view all authors, Fang, T. & Unal, G. B. (2009). Pearling: Stroke segmentation with crusted pearl strings. Pattern Recognition and Image Analysis, 19(2), pp. 277-283. doi: 10.1134/S1054661809020102
Whittington, R., Bowers, L., Nolan, P. view all authors, Simpson, A. & Neil, L. (2009). Approval Ratings of Inpatient Coercive Interventions in a National Sample of Mental Health Service Users and Staff in England. Psychiatric Services, 60(6), pp. 792-798. doi: 10.1176/ps.2009.60.6.792
Willig, C. (2009). 'Unlike a Rock, a Tree, a Horse or an Angel ...' Reflections on the Struggle for Meaning through Writing during the Process of Cancer Diagnosis. Journal of Health Psychology, 14(2), pp. 181-189. doi: 10.1177/1359105308100202
Wilson, P.J. (2009). The impact of social influences on a woman's sense of self. (Unpublished Doctoral thesis, City University London)
Wilson, S., Randell, R., Galliers, J. R. view all authors & Woodward, P. (2009). Reconceptualising clinical handover: Information sharing for situation awareness. ECCE 2009 - EUROPEAN CONFERENCE ON COGNITIVE ERGONOMICS, 258, pp. 315-322. ISSN 0357-9387
Wimmer, M. C. & Howe, M. L. (2009). The development of automatic associative processes and children's false memories. Journal of Experimental Child Psychology, 104(4), pp. 447-465. doi: 10.1016/j.jecp.2009.07.006
Wolman, A. (2009). Protecting Victim Rights: The Role of the National Human Rights Commission of Korea. Journal of East Asia and International Law, 2(3), pp. 457-479. doi: 10.14330/jeail.2009.2.2.07
Wood, J., Dykes, J., Slingsby, A. view all authors & Radburn, R. (2009). Flow trees for exploring spatial trajectories. Paper presented at the GIS Research UK, 17th Annual Conference, 1 - 3 Apr 2009, University of Durham, Durham, UK.
Wood, J., Slingsby, A., Khalili-Shavarini, N. view all authors, Dykes, J. & Mountain, D. (2009). Visualization of uncertainty and analysis of geographical data. Paper presented at the Visual Analytics Science and Technology, 2009. VAST 2009. IEEE Symposium on, 12 - 13 Oct 2009, Atlantic City, NJ, USA. doi: 10.1109/VAST.2009.5333965
Xie, L. (2009). China's Environmental Activism in the Age of Globalisation (CUTP/006). London, UK: Department of International Politics, City University London, ISSN 2052-1898. doi: CUTP/006
Xu, F. (2009). Essays on Aggregate Liquidity and Corporate Events. (Unpublished Doctoral thesis, City University London)
Yan, S. & Ma, Q. (2009). Numerical simulation of interaction between wind and 2D freak waves. European Journal of Mechanics - B/Fluids, 29(1), pp. 18-31. doi: 10.1016/j.euromechflu.2009.08.001
Yarrow, K., Brown, P. & Krakauer, J. W. (2009). Inside the brain of an elite athlete: The neural processes that support high achievement in sports. Nature Reviews Neuroscience, 10(8), pp. 585-596. doi: 10.1038/nrn2672
Ybema, S., Keenoy, T., Oswick, C. view all authors, Beverungen, A., Ellis, N. & Sabelis, I. (2009). Articulating identities. Human Relations, 62(3), pp. 299-322. doi: 10.1177/0018726708101904
Ye, X., Lin, X., Dehmeshki, J. view all authors, Slabaugh, G. G. & Beddoe, G. (2009). Shape-Based Computer-Aided Detection of Lung Nodules in Thoracic CT Images. IEEE Transactions on Biomedical Engineering, 56(7), pp. 1810-1820. doi: 10.1109/TBME.2009.2017027
Ye, X., Siddique, M., Douiri, A. view all authors, Beddoe, G. & Slabaugh, G. G. (2009). Image segmentation using joint spatial-intensity-shape features: Application to CT lung nodule segmentation. Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 7259, 72594V. doi: 10.1117/12.811151
Yim, A. (2009). Efficient Committed Budget for Implementing Target Audit Probability for Many Inspectees. Management Science, 55(12), pp. 2000-2018. doi: 10.1287/mnsc.1090.1083
Zhang, Y. (2009). Optimal Plan Design and Dynamic Asset Allocation of Defined Contribution Pension Plans: Lessons from Behavioural Finance and Non-expected Utility Theories. (Unpublished Doctoral thesis, City University London)
Zhong, J. Y. (2009). Investigating the relationship between subjective well-being and consumption in the United Kingdom. (Unpublished Doctoral thesis, City University London)
Černý, A. (2009). Characterization of the oblique projector U(VU)V-dagger with application to constrained least squares. Linear Algebra and its Applications, 431(9), pp. 1564-1570. doi: 10.1016/j.laa.2009.05.025
Černý, A. & Kallsen, J. (2009). Hedging by sequential regressions revisited. Mathematical Finance, 19(4), pp. 591-617. doi: 10.1111/j.1467-9965.2009.00381.x
|
CommonCrawl
|
Suspension of the billiard maps in the Lazutkin's coordinate
DCDS Home
Global attractor for a strongly damped wave equation with fully supercritical nonlinearities
April 2017, 37(4): 2207-2226. doi: 10.3934/dcds.2017095
Multiple solutions with constant sign of a Dirichlet problem for a class of elliptic systems with variable exponent growth
Li Yin 1, , Jinghua Yao 2,, , Qihu Zhang 3, and Chunshan Zhao 4,
College of Information and Management Science, Henan Agricultural University, Zhengzhou, Henan 450002, China
Department of Mathematics, Indiana University, Bloomington, IN 47408, USA
Department of Mathematics and Information Science, Zhengzhou University of Light Industry, Zhengzhou, Henan 450002, China
Department of Mathematical Science, Georgia Southern University, Statesboro, GA 30460, USA
*Corresponding author: Jinghua Yao
Received July 2016 Revised September 2016 Published December 2016
Fund Project: This research is partly supported by the key projects in Science and Technology Research of the Henan Education Department (14A110011).
We investigate the followingDirichlet problem with variable exponents:
$\left\{ \begin{align} &-{{\vartriangle }_{p(x)}}u=\lambda \alpha (x)|u{{\text{ }\!\!|\!\!\text{ }}^{\alpha (x)-2}}u|v{{\text{ }\!\!|\!\!\text{ }}^{\beta (x)}}+{{F}_{u}}(x,u,v),\text{ in }\Omega , \\ &-{{\vartriangle }_{q(x)}}v=\lambda \beta (x)\text{ }\!\!|\!\!\text{ }u{{|}^{\alpha \left( x \right)}}|v{{|}^{\beta (x)\text{-2}}}v+{{F}_{v}}(x,u,v),\text{ in}\ \Omega , \\ &u=0=v,\text{ on }\partial \Omega \text{.} \\ \end{align} \right.$
We present here, in the system setting, a new set of growth conditions under which we manage to use a novel method to verify the Cerami compactness condition. By localization argument, decomposition technique and variational methods, we are able to show the existence of multiple solutions with constant sign for the problem without the well-knownAmbrosetti-Rabinowitz type growth condition. More precisely, we manage to show that the problem admitsfour, six and infinitely many solutions respectively.
Keywords: $ p(x) $-Laplacian, Dirichlet problem, solutions with constant sign, Ambrosetti-Rabinowitz condition, Cerami condition, critical point.
Mathematics Subject Classification: Primary:35J20, 35J25;Secondary:35J60.
Citation: Li Yin, Jinghua Yao, Qihu Zhang, Chunshan Zhao. Multiple solutions with constant sign of a Dirichlet problem for a class of elliptic systems with variable exponent growth. Discrete & Continuous Dynamical Systems - A, 2017, 37 (4) : 2207-2226. doi: 10.3934/dcds.2017095
E. Acerbi and G. Mingione, Regularity results for a class of functionals with nonstandard growth, Arch. Ration. Mech. Anal., 156 (2001), 121-140. doi: 10.1007/s002050100117. Google Scholar
C. Alves and S. Liu, On superlinear $ p(x) $-Laplacian equations in $ R^{N} $, Nonlinear Analysis, 73 (2010), 2566-2579. doi: 10.1016/j.na.2010.06.033. Google Scholar
[3] K. C. Chang, Critical Point Theory and Applications, Shanghai Scientific and Technology Press, Shanghai, 1986. Google Scholar
Y. Chen, S. Levine and M. Rao, Variable exponent, linear growth functionals in image restoration, SIAM J. Appl. Math., 66 (2006), 1383-1406. doi: 10.1137/050624522. Google Scholar
L. Diening, P. Harjulehto, P. Hästö and M. Růžička, Lebesgue and Sobolev Spaces with Variable Exponents, Lecture Notes in Mathematics, vol. 2017, Springer-Verlag, Berlin, 2011. doi: 10.1007/978-3-642-18363-8. Google Scholar
X. Fan and D. Zhao, On the spaces $ {{L}^{p(x)}}(\Omega \text{)} $ and $ {{W}^{m,p(x)}}\left( \Omega \right) $, J. Math. Anal. Appl., 263 (2001), 424-446. doi: 10.1006/jmaa.2000.7617. Google Scholar
X. Fan and Q. Zhang, Existence of solutions for $ p(x) $-Laplacian Dirichlet problem, Nonlinear Anal., 52 (2003), 1843-1852. doi: 10.1016/S0362-546X(02)00150-5. Google Scholar
X. Fan, Q. Zhang and D. Zhao, Eigenvalues of $ p(x) $-Laplacian Dirichlet problem, J. Math. Anal. Appl., 302 (2005), 306-317. doi: 10.1016/j.jmaa.2003.11.020. Google Scholar
L. Gasiński and N. Papageorgiou, A pair of positive solutions for the Dirichlet $ p(z) $-Laplacian with concave and convex nonlinearities, J. Glob. Optim., 56 (2013), 1347-1360. doi: 10.1007/s10898-011-9841-8. Google Scholar
B. Ge, Q. Zhou and L. Zu, Positive solutions for nonlinear elliptic problems of $ p $-Laplacian type on $ \mathbb{R}^{N} $ without (AR) condition, Nonlinear Anal Real World Appl., 21 (2015), 99-109. doi: 10.1016/j.nonrwa.2014.07.002. Google Scholar
O. Kováčik and J. Rákosník, On spaces $ {{L}^{p(x)}}\left( \Omega \right) $ and $ {{W}^{k,p(x)}}\left( \Omega \right) $, Czechoslovak Math. J., 41 (1991), 592-618. Google Scholar
N. Lam and G. Lu, Elliptic equations and systems with subcritical and critical exponential growth without the Ambrosetti-Rabinowitz condition, J. Geom. Anal., 24 (2014), 118-143. doi: 10.1007/s12220-012-9330-4. Google Scholar
M. Mihăilescu and V. Rădulescu, On a nonhomogeneous quasilinear eigenvalue problem in Sobolev spaces with variable exponent, Proc. Amer. Math. Soc., 135 (2007), 2929-2937. doi: 10.1090/S0002-9939-07-08815-6. Google Scholar
O. Miyagaki and M. Souto, Superlinear problems without Ambrosetti and Rabinowitz growth condition, J. Differential Equations, 245 (2008), 3628-3638. doi: 10.1016/j.jde.2008.02.035. Google Scholar
M. Růžička, Electrorheological Fluids: Modeling and Mathematical Theory, Lecture Notes in Math 1748, Springer-Verlag, Berlin, 2000. doi: 10.1007/BFb0104029. Google Scholar
V. Radulescu and D. Repovs, Partial Differential Equations with Variable Exponents: Variational Methods and Qualitative Analysis, Chapman and Hall/CRC, Monographs and Research Notes in Mathematics, 2015. doi: 10.1201/b18601. Google Scholar
X. Wang, J. Yao and D. Liu, High energy solutions to $ p(x) $-Laplace equations of Schrödinger type, Electron. J. Diff. Equ., 136 (2015), 1-17. Google Scholar
X. Wang and J. Yao, Compact embeddings between variable exponent spaces with unbounded underlying domain, Nonlinear Analysis: TMA, 70 (2009), 3472-3482. doi: 10.1016/j.na.2008.07.005. Google Scholar
M. Willem and W. Zou, On a Schröinger equation with periodic potential and spectrum point zero, Indiana Univ. Math. J., 52 (2003), 109-132. doi: 10.1512/iumj.2003.52.2273. Google Scholar
J. Yao and X. Wang, On an open problem involving the $ p(x) $-Laplacian, Nonlinear Analysis: TMA, 69 (2008), 1445-1453. doi: 10.1016/j.na.2007.06.044. Google Scholar
J. Yao, Solutions for Neumann boundary value problems involving $ p(x) $-Laplace operators, Nonlinear Analysis: TMA, 68 (2008), 1271-1283. doi: 10.1016/j.na.2006.12.020. Google Scholar
L. Yin, J. Yao, Q. Zhang and C. Zhao, Multiplicity of strong solutions for a class of elliptic problems without the Ambrosetti-Rabinowitz condition in $ \mathbb{R}^{N} $, arXiv: 1607.00581. Google Scholar
A. Zang, $ p(x) $-Laplacian equations satisfying Cerami condition, J. Math. Anal. Appl., 337 (2008), 547-555. doi: 10.1016/j.jmaa.2007.04.007. Google Scholar
Q. Zhang and C. Zhao, Existence of strong solutions of a $ p(x) $-Laplacian Dirichlet problem without the Ambrosetti-Rabinowitz condition, Comput. Math. Appl., 69 (2015), 1-12. doi: 10.1016/j.camwa.2014.10.022. Google Scholar
[25] J. Zhao, Structure Theory of Banach Spaces, Wuhan University Press, Wuhan, 1991. Google Scholar
V. Zhikov, Averaging of functionals of the calculus of variations and elasticity theory, (Russian) Izv. Akad. Nauk SSSR Ser. Mat., 50 (1986), 675-710. Google Scholar
[27] C. Zhong, X. Fan and W. Chen, Introduction to Nonlinear Functional Analysis, Lanzhou University Press, Lanzhou, 1998. Google Scholar
Zaizheng Li, Qidi Zhang. Sub-solutions and a point-wise Hopf's lemma for fractional $ p $-Laplacian. Communications on Pure & Applied Analysis, , () : -. doi: 10.3934/cpaa.2020293
Raffaele Folino, Ramón G. Plaza, Marta Strani. Long time dynamics of solutions to $ p $-Laplacian diffusion problems with bistable reaction terms. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020403
Shengbing Deng, Tingxi Hu, Chun-Lei Tang. $ N- $Laplacian problems with critical double exponential nonlinearities. Discrete & Continuous Dynamical Systems - A, 2021, 41 (2) : 987-1003. doi: 10.3934/dcds.2020306
Lihong Zhang, Wenwen Hou, Bashir Ahmad, Guotao Wang. Radial symmetry for logarithmic Choquard equation involving a generalized tempered fractional $ p $-Laplacian. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020445
Mokhtar Bouloudene, Manar A. Alqudah, Fahd Jarad, Yassine Adjabi, Thabet Abdeljawad. Nonlinear singular $ p $ -Laplacian boundary value problems in the frame of conformable derivative. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020442
Fuensanta Andrés, Julio Muñoz, Jesús Rosado. Optimal design problems governed by the nonlocal $ p $-Laplacian equation. Mathematical Control & Related Fields, 2021, 11 (1) : 119-141. doi: 10.3934/mcrf.2020030
Wenqiang Zhao, Yijin Zhang. High-order Wong-Zakai approximations for non-autonomous stochastic $ p $-Laplacian equations on $ \mathbb{R}^N $. Communications on Pure & Applied Analysis, 2021, 20 (1) : 243-280. doi: 10.3934/cpaa.2020265
Hai Q. Dinh, Bac T. Nguyen, Paravee Maneejuk. Constacyclic codes of length $ 8p^s $ over $ \mathbb F_{p^m} + u\mathbb F_{p^m} $. Advances in Mathematics of Communications, 2020 doi: 10.3934/amc.2020123
Mathew Gluck. Classification of solutions to a system of $ n^{\rm th} $ order equations on $ \mathbb R^n $. Communications on Pure & Applied Analysis, 2020, 19 (12) : 5413-5436. doi: 10.3934/cpaa.2020246
Luca Battaglia, Francesca Gladiali, Massimo Grossi. Asymptotic behavior of minimal solutions of $ -\Delta u = \lambda f(u) $ as $ \lambda\to-\infty $. Discrete & Continuous Dynamical Systems - A, 2021, 41 (2) : 681-700. doi: 10.3934/dcds.2020293
Sebastian J. Schreiber. The $ P^* $ rule in the stochastic Holt-Lawton model of apparent competition. Discrete & Continuous Dynamical Systems - B, 2021, 26 (1) : 633-644. doi: 10.3934/dcdsb.2020374
Hongming Ru, Chunming Tang, Yanfeng Qi, Yuxiao Deng. A construction of $ p $-ary linear codes with two or three weights. Advances in Mathematics of Communications, 2021, 15 (1) : 9-22. doi: 10.3934/amc.2020039
Chunming Tang, Maozhi Xu, Yanfeng Qi, Mingshuo Zhou. A new class of $ p $-ary regular bent functions. Advances in Mathematics of Communications, 2021, 15 (1) : 55-64. doi: 10.3934/amc.2020042
Lei Liu, Li Wu. Multiplicity of closed characteristics on $ P $-symmetric compact convex hypersurfaces in $ \mathbb{R}^{2n} $. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020378
Michal Fečkan, Kui Liu, JinRong Wang. $ (\omega,\mathbb{T}) $-periodic solutions of impulsive evolution equations. Evolution Equations & Control Theory, 2021 doi: 10.3934/eect.2021006
Federico Rodriguez Hertz, Zhiren Wang. On $ \epsilon $-escaping trajectories in homogeneous spaces. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 329-357. doi: 10.3934/dcds.2020365
Martin Heida, Stefan Neukamm, Mario Varga. Stochastic homogenization of $ \Lambda $-convex gradient flows. Discrete & Continuous Dynamical Systems - S, 2021, 14 (1) : 427-453. doi: 10.3934/dcdss.2020328
Jiahao Qiu, Jianjie Zhao. Maximal factors of order $ d $ of dynamical cubespaces. Discrete & Continuous Dynamical Systems - A, 2021, 41 (2) : 601-620. doi: 10.3934/dcds.2020278
HTML views (57)
Li Yin Jinghua Yao Qihu Zhang Chunshan Zhao
|
CommonCrawl
|
Bioresources and Bioprocessing
Vitamin combination promotes ex vivo expansion of NK-92 cells by reprogramming glucose metabolism
Yan Fu1,
Yuying Chen1,
Zhepei Xie1,
Huimin Huang1,
Wen-Song Tan1 &
Haibo Cai ORCID: orcid.org/0000-0001-6449-86431
Bioresources and Bioprocessing volume 9, Article number: 87 (2022) Cite this article
Robust ex vivo expansion of NK-92 cells is essential for clinical immunotherapy. The vitamin B group is critical for the expansion and function of immune cells. This study optimized a vitamin combination by response surface methodology based on an in-house designed chemically defined serum-free medium EM. The serum-free medium EM-V4 with an optimal vitamin combination favoured ex vivo expansion of NK-92 cells. The characteristics of glucose metabolism of NK-92 cells in EM-V4 and the relationships between cell expansion and metabolism were investigated. NK-92 cells in EM-V4 underwent metabolic reprogramming. An elevated ratio of glucose-6-phosphate dehydrogenase/phosphofructokinase (G6PDH/PFK) indicated that NK-92 cells shifted towards the pentose phosphate pathway (PPP). An increase in the ratio of pyruvate dehydrogenase/lactate dehydrogenase (PDH/LDH) suggested that the cells shifted towards the Krebs (TCA) cycle, i.e., from glycolysis to aerobic metabolism. The enhanced ratio of oxygen consumption rate/extracellular acidification rate (OCR/ECAR) indicated that NK-92 cells were more reliant on mitochondrial respiration than on glycolysis. This shift provided more intermediate metabolites and energy for biosynthesis. Thus, EM-V4 accelerated biomass accumulation and energy production to promote NK-92 cell expansion by regulating the metabolic distribution. Our results provide valuable insight for the large-scale ex vivo expansion of clinically available NK-92 cells.
Natural killer (NK) cells are effector lymphocytes important for antitumour and antiviral immune responses and act as potentially adaptive immune effectors. In addition, unlike T cells, NK cells do not cause graft-versus-host disease (Vivier et al. 2008; Waldhauer and Steinle 2008). However, only limited number of NK cells can be obtained initially, since these cells account for only 10% of lymphocytes. NK cells derived from the patients are often dysfunctional, and the cells derived from healthy donors usually require depletion of allogeneic T cells. In addition, clinically infused NK cells are always variable because of individual differences between the donors or patients (Cerwenka and Lanier 2016; Guillerey et al. 2016; Chiossone et al. 2018).
The interleukin (IL) 2-dependent NK-92 cell line (CD56+/CD3−) was isolated from non-Hodgkin's lymphoma cells derived from a male patient (Gong et al. 1994; Hodge et al. 2002). NK-92 cells can be used to produce a large number of well-defined effector cell populations. NK-92 cells are "off the shelf therapeutic" for adoptive cancer immunotherapy based on natural killer cells and are readily available from a current (c)–GMP-compliant master cell bank; NK-92 cells have relatively stable growth characteristics and predictably higher cytotoxic activity than NK cells (Klingemann et al. 2016; Romanski et al. 2016; Suck et al. 2016). The established cell line expresses higher levels of perforin and granzyme B activity against a broad range of tumour cells. Accumulating experimental data have shown the clinical benefits with minimal side effects of NK-92 cells infused in patients with advanced cancer, such as haematological malignancies and lung cancer (Williams et al. 2015; Pockley et al. 2020). Tang et al. demonstrated that specific targeting by CAR–NK92 in acute myeloid leukaemia was characterized by good safety and high effectiveness (Tang et al. 2018). Although these cells have potential in clinical trials, specific protocols of NK-92 cell proliferation for clinical application have not been established (Tam et al. 2003; Chrobok et al. 2019). Large-scale ex vivo expansion of NK-92 cells under the conditions suitable for clinical practice remains challenging.
Major advances in immunology indicated that energy metabolism is important for the control of the function and growth of immune cells. Glucose is a substrate critical for the maintenance of cellular bioenergy, which is essential for cell proliferation (MacIver et al. 2008; Zhang et al. 2018; Jung et al. 2019). During glycolysis, glucose is broken down into two molecules of pyruvate that are transformed to lactate to quickly provide adenosine triphosphate (ATP) for cell expansion, which does not require oxygen. Oxidative phosphorylation (OXPHOS) is an oxygen-dependent process that can maximize the yield of ATP from glucose to maintain rapid biological processes. PPP (Pentose Phosphate Pathway) is an important nonoxidative arm of metabolism that generates key intermediates for nucleotide and fatty acid biosynthesis (Greiner et al. 1994; Han et al. 2016). Glycolysis and OXPHOS have been demonstrated to be essential for the development of NK cells. The regulation of metabolic distribution may be used to reversibly switch between quiescent and rapidly proliferating NK cells (Assmann et al. 2017; Poznanski et al. 2018). Lymphocyte metabolism can be regulated by the changes in the expression of metabolic genes and cell surface receptors and by feedback inhibition and other forms of allosteric regulation. In particular, enzyme regulation plays an especially prominent role in the regulation of metabolism by influencing the metabolic rates and fluxes to maintain metabolic homeostasis (Zhou et al. 2011; Pearce et al. 2013).
B vitamins are water-soluble vitamins involved in energy metabolism, methylation, DNA repair and immune regulation of the cells. This vitamin group often acts as coenzymes of the metabolic enzymes critical for mitochondrial and cellular functions (Depeint et al. 2006). Riboflavin is a precursor of flavin adenine dinucleotide and flavin mononucleotide (FMN), which are required for the flavoenzymes of the respiratory chain. Nicotinamide is central to the energy metabolism and is converted into nicotinamide adenine dinucleotide (NAD) and nicotinamide adenine dinucleotide phosphate (NADP), which participate in redox reactions in vivo (Schnellbaecher et al. 2019). In addition, inositol plays certain roles in cell expansion, acting as a precursor of phosphoinositides, membrane components, anchor molecules and signalling molecules in eukaryotic cells (Colazingari et al. 2014). These metabolites often form a tight network of interactions in the regulation of cellular metabolism. Moreover, the vitamin B group is important for cell proliferation and immune function. Riboflavin promotes ex vivo expansion of macrophages, and riboflavin deprivation has negative effects on macrophage activity and the immune response. Riboflavin-deficient mice had a higher risk of tumour development (Mazur-Bialy et al. 2015). Moreover, clinical data showed that riboflavin can be used as a therapeutic agent in disorders affecting mitochondrial energy metabolism (Bizukojc et al. 2007; Suwannasom et al. 2020). Nicotinamide was extensively studied. Clinical reports demonstrated that nicotinamide-dependent NK cells have higher toxicity and better expansion in patients with advanced multiple myeloma (MM) subjected to adoptive NK therapy. Nicotinamide supplementation can increase the proliferative capacity of mesenchymal stromal cells and umbilical cord blood-derived haematopoietic stem cells (Horwitz et al. 2014; Bachanova et al. 2019; Khorraminejad-Shirazi et al. 2020). Inositol was essential for blastocyst growth, which was crucial to the culture of preimplantation rabbit embryos in vitro (Inoue et al. 2002).
This study demonstrated that nicotinamide, riboflavin, and inositol were associated with amplification of NK-92 cells. The concentration of three substances was optimized by response surface methodology–central composite design (RSM–CCD) to promote ex vivo expansion of NK-92 cells. The glucose consumption rate, activities of key dehydrogenases and ATP production were investigated to better understand the effects of vitamin combinations on cell expansion. In general, this study can be used as a reference to achieve an extensive ex vivo expansion of NK-92 cells and therapeutic application.
Ex vivo Expansion of NK-92 Cells
NK-92 cells were donated by the Bioengine Co., China. The cells were seeded at 1 × 105 cells/ml in serum-free medium in the presence of 1,000 U/ml human recombinant IL-2 (Peprotech, USA). Cell cultures were maintained at 37 °C in a humidified incubator with 21% O2 and 5% CO2 in an atmosphere of nitrogen. NK-92 cells were counted every 2 days during the culture process, and the culture medium was sufficiently mixed before sampling. Then, fresh medium and IL-2 were added to maintain the cell density at 1 × 105 cells/ml. The kinetics of NK-92 cell growth was calculated according to the following equations:
Expansion fold of total cells:
$$Y=\frac{{N}_{t}}{{N}_{0}}$$
where Y is the expansion fold of the cells, Nt is the number of the cells at indicated time t and N0 is the number of cells at time t0.
The specific growth rate:
$$\mu =\frac{ln{N}_{2}-ln{N}_{1}}{{t}_{2}-{t}_{1}}$$
where μ is the specific growth rate of NK-92 cells, N1 is the number of the cells at time t1 and N2 is the number of the cells at time t2.
Experimental design and data analysis of optimization of vitamin concentrations
The concentrations of niacinamide (A) (Sigma-Aldrich, Germany, Catalog # N0636), riboflavin (B) (Sigma-Aldrich, Germany, Catalog # R9504) and inositol (C) (Sigma-Aldrich, Germany, Catalog # I7508) in the serum-free medium designed in the present study based on chemically defined EM were optimized for best cell expansion through 3-factor and 5-level CCD using Design Expert 8.0 software. Cell expansion after 8 days was selected as the response variable (Y). The concentration range of the factors was as follows: 16.50–40.00 µM niacinamide, 0.26–1.31 µM riboflavin and 26.97–238.75 µM inositol. An experimental design of 20 runs with 3 factors varying over 5 levels is summarized in Table 1. Optimum values of three variables were obtained after the response surface analysis.
Table 1 Central composite design and experimental data used for the response surface analysis
Immunophenotype analysis of expanded NK-92 cells
Approximately 7 × 105 NK-92 cells were collected by centrifugation. After rinsing twice with antibody diluent, the cells were stained with PE-conjugated anti-human CD56 antibody (BD, USA, Catalog # 555,516) and FITC-conjugated anti-human CD3 antibody (BD, USA, Catalog # 555,332) for 30 min at 4 °C in the dark. Then, the cells were washed twice with antibody diluent and resuspended in 500 µl of antibody protection solution. Sample phenotypes were analysed by flow cytometry (BD, USA).
Cytotoxicity assays of expanded NK-92 cells
Cytotoxicity of ex vivo expanded NK-92 cells was assessed by killing of K562 cells measured with cell counting kit-8 (CCK8) (Dojindo, Japan). NK-92 effector cells (E) target tumour K562 cells (T). Briefly, three experimental groups included a target cell group with (T), an effector cell group with (E) and an experimental group with cells at an E:T ratio of 5:1. The cells were suspended in 100 µl of the medium and seeded into 96-well microplates. After incubation for 4 h at 37 °C in an incubator, 10 µl CCK8 solution was added into each well, and the cells were incubated for another 2 h before detection of absorbance at 450 nm using a microplate reader. The cytotoxicity of NK-92 cells against K562 cells was calculated as follows:
$$cytotoxicity\, \%=\frac{{N}_{1}-({N}_{3}-{N}_{2})}{{N}_{1}}\times 100 \%$$
where N1, N2 and N3 represent the absorbance of the target cell group, effector cell group and experimental group, respectively.
The CD107a expression of cultured cells was assayed to evaluate cell degranulation. NK-92 cells were co-cultured with K562 cells at an E: T ratio of 5:1 for 4 h and then stained with mouse anti-human CD107a PE-Cy7-conjugated antibodies (BD, USA, Catalog # 561,348) for 30 min at 4 °C in dark. Besides, to carry out the intracellular granzyme B and perforin of NK-92 cells in cultures, cells were fixed, permeabilized and stained with antibodies of V450-granzyme B (BD, USA, Catalog # 563,389) and V450-perforin (BD, USA, Catalog # 563,393). The expression of CD107a, granzyme B and perforin were measured by flow cytometry.
Detection and calculation of kinetic parameters
The culture supernatant was collected at specific timepoints by centrifugation to analyse the concentrations of glucose and lactic acid using a glucose assay kit and a lactate assay kit (Jiancheng Bioengineering Institute, China). The absorbance values were detected by a microplate reader. Relevant kinetic parameters were determined as follows:
Specific glucose consumption rate:
$${Q}_{gluc}=\frac{{C}_{1}-{C}_{2}}{{\int }_{{t}_{1}}^{{t}_{2}}Af(t)dt}$$
where C1 represents the concentration of glucose at time t1, C2 represents the concentration of glucose at time t2 and \({\int }_{{t}_{1}}^{{t}_{2}}Af\left(t\right)dt\) is the integral of the number of NK92 cells A from t1 to t2.
Specific lactic acid production rate:
$${q}_{lac}=\frac{{K}_{2}-{K}_{1}}{{\int }_{{t}_{1}}^{{t}_{2}}Af(t)dt}$$
where K1 represents the concentration of lactic acid at time t1, K2 represents the concentration of lactic acid at time t2 and \({\int }_{{t}_{1}}^{{t}_{2}}Af\left(t\right)dt\) is the integral of the number of NK-92 cells (A) from t1 to t2.
The yield coefficient of lactate to glucose:
$${Y}_{lac/gluc}=\frac{{q}_{lac}}{{Q}_{gluc}}$$
Determination of activities of enzymes of glucose metabolism
The enzyme activities of phosphofructokinase (PFK), glucose-6-phosphate dehydrogenase (G6PDH), pyruvate dehydrogenase (PDH) and lactate dehydrogenase (LDH) in NK-92 cells were detected with PFK, G6PDH, PDH and LDH assay kits, respectively, following the manufacturer's instructions (Comin Biotechnology, China).
Extracellular flux assays
A Seahorse XFe 96 analyzer (Agilent Technologies, USA) was used to measure the OCR and ECAR of NK-92 cells cultured in various media. NK-92 cells (4 × 104 per well) were seeded in Seahorse XF96 cell culture microplates, which were coated with polylysine overnight and washed twice using sterile water before plating the cells for the assay. The cells were cultured in the assay medium and incubated for an hour at 37 ℃ in an incubator without CO2. The glycolysis assay medium was glucose-free, and the mitochondrial assay medium contained 2 mM glutamine, 1 mM pyruvate and 10 mM glucose. For glycolytic stress tests, 10 mM glucose, 2 µM oligomycin and 30 mM 2-deoxyglucose were injected during the measurements. For the mitochondrial stress tests, 2 µM oligomycin, 0.5 µM FCCP and 2 µM rotenone/antimycin were sequentially added to the wells.
Detection of intracellular metabolites
NK-92 cells were collected by centrifugation, and the intracellular metabolites ATP, NADP(H) and GSH were assayed using the corresponding assay kits, including an ATP assay kit, a NADP(H) assay kit and a GSH assay kit, respectively (Beyotime, China).
The values are presented as the mean ± standard error. The significant differences were assessed by Student's t test (two samples, one-tailed). P < 0.05 was considered statistically significant.
Optimisation of concentrations of nicotinamide, riboflavin and inositol using CCD
RSM–CCD is an efficient mathematical and statistical method often used to optimize the experimental factors. The optimal concentrations of nicotinamide, riboflavin and inositol for the maximum fold expansion of NK-92 cells were determined using the 3-factor-5-level CCD method. The experimental design matrix and results are shown in Table 1. The corresponding second-order polynomial equation was as follows:
$${\text{Y}}\, = \,{88}.{85}\, + \,{2}.{\text{35A}} - {7}.{\text{43B}} - {1}.{\text{28C}} - {3}.{\text{24AB}}\, + \,{2}.{\text{57AC}}\, + \,{6}.{\text{27BC}} - {7}.{\text{88A}}^{{2}} - {1}0.{\text{12B}}^{{2}} - {8}.{\text{87C}}^{{2}} ,$$
where A, B and C are the concentrations of nicotinamide, riboflavin and inositol, respectively, and Y is the predicted total fold change in NK-92 cell expansion. Analysis of variance (ANOVA) indicated that the model P value is 0.0065, implying that the model is significant. The "lack of fit P value" of 0.1641 indicated that the effective mathematical equation is credible and lack of nonsignificant fitting is sufficient; thus, the equation can adequately reflect the actual relationships between the response variable and factors. In addition, the coefficient of determination (R2) was 0.86, indicating that the predicted and actual values were adequately fitted (Additional file 1: Table S1). All data indicated that the model prediction is credible.
Intuitively, the 2D contour plots indicated the presence of a concentration point for each substance within the experimental concentration range optimally favourable for the best cell expansion (Fig. 1). The highest predicted value was obtained in the smallest ellipse in the contour diagram. Model prediction revealed that the maximum fold of cell expansion can be obtained when the optimal concentrations of A, B and C were 30.00 µM, 0.70 µM and 120.00 µM, respectively, and the predicted maximum response value was 91.00. Subsequently, the optimized EM-V4 medium was prepared to include this vitamin combination.
Contour plots for concentration optimization of nicotinamide, riboflavin and inositol: the effect of the mutual interactions of A nicotinamide and riboflavin, B nicotinamide and inositol, and C inositol and riboflavin on NK-92 cell expansion. (n = 3)
Ex vivo expansion of NK-92 cells in EM-V4
Cell expansion, phenotype and cytotoxicity were determined to verify the effect of EM-V4 on the growth of NK-92 cells. During the 4-day culture period, NK-92 cells in EM-V4 had higher viability than those in EM; correspondingly, the calculated indexes, μ and the fold expansion of total cells, were also higher (Fig. 2A–C). Meanwhile, cell expansion in longer period (8 days) was further analysed and the results also showed that cells in EM-V4 maintained higher viability, specific growth rate and expansion fold (Additional file 1: Fig. S1). Specifically, the fold expansion of total cells in EM-V4 was 110.9 ± 11.7 on day 8, which was significantly higher than the 42.2 ± 7.6 in EM. Obviously, EM-V4 improved the amplification rate of NK-92 cells. Moreover, the cell phenotype was analysed on day 4 by flow cytometry, and the percentages of CD3−CD56+ cells were comparable in both cultures (Fig. 2D). In addition, the data of Fig. 2E show that EM-V4-cultured NK-92 cells on day 4 had more robust cytotoxic ability based on an increase in the ability to kill K562 cells. In addition, higher percentage of CD107a+ cells, granzyme B+ cells and perforin+ cells in EM-V4 were shown in Fig. 2F–I, which could characterize the enhanced cytotoxicity well. Representative images of expanded NK-92 cells are shown in Fig. 2J, K. Brighter cells and higher number of the cell clusters were observed in EM-V4-cultured NK-92 cells, indicating a healthier cell proliferation status. Overall, these results demonstrated that EM-V4 favoured ex vivo expansion and stronger cytotoxicity of NK-92 cells without impairing the cell phenotype.
Ex vivo expansion characteristics of NK-92 cells. A Cell viability. B Specific growth rate of NK-92 cells. C Expansion fold of the cells. D Percentage of CD3−CD56+ cells. E Cytotoxicity of expanded NK-92 cells on day 4 at an E:T ratio of 5:1. F Representative flow cytometric analysis of CD107a+, granzyme B+ and perforin+ cells which grew in EM-V4; G Percentages of CD107a+ cells gated on CD3−CD56+ cells; H percentages of granzyme B+ cells gated on CD3−CD56+ cells; I percentages of perforin+ cells gated on CD3−CD56+ cells. J Light microscopic image of NK-92 cell morphology at 40 times magnification on day 4. K Light microscopic image of NK-92 cell morphology at 100 times magnification on day 4. (*P < 0.05, n = 3)
Metabolic properties of NK-92 cells expanded in EM-V4
Glucose metabolism was investigated to determine how EM-V4 influenced the ex vivo proliferation of NK-92 cells. Specifically, kinetic parameters in both cultures were analysed. The glucose and lactate concentrations in both culture supernatants were measured every 2 days, and the results are shown in Additional file 1: Fig. S2, which revealed that NK-92 cells in EM-V4 had higher glucose consumption and lactic acid production. Subsequently, Qgluc was calculated using Eq. (4) to analyse the rate of glucose consumption. As shown in Fig. 3A, the Qgluc of the cells in the two cultures exhibited a similar trend of an increase on day 2 compared to that on day 4, indicating that the glucose consumption rate of the cells in the two cultures initially increased and subsequently decreased. Furthermore, higher Qgluc of NK-92 cells indicated that the cells cultured in EM-V4 had a higher glucose metabolism rate. The qlac of the cells displayed a trend similar to that of Qgluc. However, the differences in a decrease in qlac were detected on day 4 in the two cultures (Fig. 3B). Considering the lower Ylac/gluc of the cells in EM-V4, EM-V4 reduced the coefficient of lactate to glucose conversion of the cells and improved the TCA cycle distribution ratio between glycolysis and aerobic metabolism (Fig. 3C).
Characteristics of glucose metabolism of NK92 cells. A Specific glucose consumption rates of the cells (Qgluc). B Specific lactate production rates of the cells (qlac). C Yield coefficient of lactate to glucose conversion (Ylac/gluc). D PFK activity. E G6PDH activity. F Relative G6PDH/PFK activity. G LDH activity. H PDH activity. I Relative PDH/LDH activity. (*P < 0.05, n = 3)
Furthermore, the key enzymes of the glucose metabolism were assayed to determine the mechanisms by which EM-V4 regulates glucose metabolism. Glucose enters the cells and is directly involved in glycolysis and PPP, which are regulated by the rate-limiting enzymes PFK and G6PDH, respectively. Then, pyruvate is generated in the cytoplasm. On the one hand, pyruvate can be used in the PDH-catalysed reaction to produce acetyl-CoA that enters the TCA cycle; on the other hand, pyruvate can be used in the LDH-catalysed reaction to produce lactic acid. Two metabolic distribution ratios were considered in the present study. In the case of glycolysis and PPP, NK-92 cells in both cultures displayed comparable PFK activities; however, EM-V4-cultured cells had significantly higher G6PDH activity than that in EM-cultured cells (Fig. 3D, E), indicating that EM-V4 improved glucose consumption of the cells by increasing the PPP metabolic flux. A higher G6PDH/PFK value suggested that the cells shifted to PPP (Fig. 3F). Moreover, LDH and PDH activities of NK-92 cells were investigated. The results showed that EM-V4 upregulated cellular LDH and PDH activities, especially the PDH activity (Fig. 3G, H). The improved PDH/LDH value of EM-V4-cultured NK-92 cells indicated that the TCA distribution ratio was increased (Fig. 3I). Thus, EM-V4 increased the rate of glucose metabolism and governed metabolic reprogramming by regulating the activities of key dehydrogenases of NK-92 cells. Improved PPP metabolism ratio and TCA distribution percentage may enhance the generation of the substrates and energy for cell proliferation.
Similarly, the ECAR (an indicator of glycolysis) and OCR (an indicator of OXPHOS) values were determined using a Seahorse analyser. Figure 4A–D shows that EM-V4-activated NK-92 cells had increased ECAR and OCR, indicating that EM-V4 improved the energy metabolism of the cells. The higher OCR/ECAR ratio suggested that EM-V4 promoted OXPHOS more than glycolysis in NK-92 cells (Fig. 4E). Furthermore, the increased ATP production capacity in the mitochondria was detected (Fig. 4F). ATP is the key energy donor crucial for cellular processes in lymphocytes, including both cell proliferation and immune response. The intracellular ATP content of NK-92 cells was measured. Interestingly, slightly lower ATP levels were observed in EM-V4-stimulated NK-92 cells on day 4 (Fig. 4G). Considering that intracellular ATP levels are a result of the consumption and production, large amounts of ATP may be expended for biosynthesis. Hence, these results demonstrated that NK-92 cells in EM-V4 had enhanced ATP synthesis and relied more on OXPHOS to produce ATP for faster cell replication.
Extracellular acidification rate (ECAR) and oxygen consumption rate (OCR) of NK-92 cells. A ECAR of the glycolytic rate. B OCR of mitochondrial oxygen consumption rate. C Basal ECAR, glycolytic capacity and glycolytic reserve of the cells. D Basal OCR, maximal respiration and spare respiration capacity of the cells. E OCR/ECAR ratio. F ATP turnover. G Intracellular ATP level. (*P < 0.05, n = 3)
Moreover, intracellular NADP(H) and GSH were determined to investigate the physiological state of the cells. No significant differences were observed in the total NADP(H) levels in the two cultures (Fig. 5A); however, EM-V4-cultured NK-92 cells had higher NADPH levels and NADPH/NADP+ ratio (Fig. 5B, C). In addition, elevated GSH content was detected in EM-V4-cultured NK-92 cells (Fig. 5D). Thus, increased NADPH and GSH levels maintained a higher cellular redox state, which favoured cell expansion in vitro.
Content of intracellular metabolites is related to the physiological state of NK-92 cells. A Total NADP(H). B NADPH. C NADPH/NADP.+ ratio. D Intracellular GSH level. (*P < 0.05, n = 3)
Adoptive NK cell therapy is a promising strategy against cancer. Although comprehensive characterisation of metabolic pathways during NK cell development has not been reported, immature NK cells are highly reliant on glucose metabolism than other lymphocytes. NK cell activation is mainly fuelled by glucose. In addition, activated NK cells undergo metabolic reprogramming, which is significant for cell proliferation and functional response (Gardiner 2019). The vitamin B family is involved in glucose metabolism, influencing enzyme activity, and has a profound impact on the regulation of metabolic pathways. Some vitamins can effectively promote the growth and function of various lymphocytes (Grudzien & Rapak 2018). Thus, niacinamide, riboflavin and inositol were considered in the present study, and an optimized vitamin combination was obtained by CCD (Fig. 1). Our data demonstrated that the optimal EM-V4 medium, including the optimized vitamin combination, improved the expansion and cytotoxicity of NK-92 cells (Fig. 2).
Efficient glucose metabolism has been demonstrated to be critically important for NK cell responses. Hence, kinetic parameters during the culture process and key enzyme activities of glucose metabolism were analysed. The results showed that the optimal vitamin combination improved the glucose metabolism rate and regulated metabolic flux by regulating the key metabolic enzymes (Fig. 3). Specifically, the vitamin combination upregulated the G6PDH, LDH and PDH activities and maintained the PFK activity. We hypothesised that nicotinamide and riboflavin may act as coenzymes regulating dehydrogenase activity through their active forms. Nicotinamide can transform into NAD+ and elevate deacetylase activity to improve glucose metabolism and extend the longevity of worms (Mouchiroud et al. 2013). The optimal vitamin combination apparently upregulated glucose metabolism and induced changes in the distribution of metabolites by regulating the enzyme activities to promote cell expansion.
Accelerated glucose consumption of the cells may result in higher energy production. Our results revealed that NK-92 cells in EM-V4 produced higher amount of energy (Fig. 4). The higher ECAR and OCR values of the cells indicated that EM-V4 improved ATP synthesis by enhancing both glycolysis and OXPHOS. In addition, the higher OCR/ECAR ratio indicated that EM-V4 had a greater impact on OXPHOS of the cells than that on glycolysis, which was consistent with an increase in the PDH/LDH ratio and a decrease in Ylac/gluc. Demonstrated shift from glycolysis to mitochondrial metabolism of NK-92 cells may favour the maximal level of ATP derived from glucose for faster biomass accumulation (Zhang et al. 2019). Activated NK cells are characterized by a shift in relative ATP reliance from OXPHOS to glycolysis, which may be a reaction to increased energy demands within a short time period; however, mature NK cells still mainly rely on OXPHOS to support a change in energy needs (O'Brien and Finlay 2019). Moreover, nicotinamide can promote the proliferation of human primary keratinocytes by elevating OXPHOS (Tan et al. 2019). Theoretically, nicotinamide and riboflavin may be used as the substrates and electron acceptors to participate in the respiratory chain to promote OXPHOS. In addition, inositol included in EM-V4 may promote fatty acid metabolism together with other B vitamins to regulate mitochondrial respiration (Burton and Wells 1976). Therefore, the vitamin combination may induce metabolic reprogramming and enhance cellular ATP synthesis to improve cell proliferation.
Healthy cell physiology provides better support of cell growth. It has been demonstrated that nicotinamide can upregulate the expression of antioxidant reductase and delay ageing in mice (Zhang et al. 2016). In addition, riboflavin can elevate the expression of antioxidant proteins in HepG2 cells (Xin et al. 2017). Higher intracellular NADPH levels and NADPH/NADP+ values of EM-V4-cultured cells detected in the present study were consistent with an increase in the metabolic flux of PPP. In addition, NADPH can induce the synthesis of GSH. Therefore, an increase in the cellular GSH content may result from an increase in the NADPH level (Fig. 5). An increase in the contents of these antioxidants may maintain a more active state of the cells.
Thus, the vitamin combination upregulated the activities of the key dehydrogenases of NK-92 cells to accelerate the glucose consumption rate and promote metabolic migration. Enhanced PPP flux provided the higher amount of the biomaterials for biomass accumulation, and migration from glycolysis to aerobic metabolism supplied sufficient energy for better ex vivo expansion of NK-92 cells (Fig. 6).
Overview of glucose metabolism distribution in EM-V4-cultured NK92 cells. Upregulated metabolic pathways, enzyme activities and intracellular metabolites are indicated with red arrows
In this study, we optimized a vitamin combination, which could favor ex vivo expansion of NK-92 cells. In addition, the effect of vitamin combination on cell expansion was investigated. Results demonstrated that the optimal vitamin combination could induce glucose metabolism reprogramming and enhance energy metabolism by upregulating key dehydrogenase activities of NK-92 cells. Besides, it maintained a better cellular redox state. That will lay a solid foundation for scale-up ex vivo expansion of NK-92 cells and serum-free medium development to provide a valuable guidance for therapeutic application.Additional file: As per journal requirements, every additional file must have a corresponding caption. In this regard, please be informed that the caption of Additional file [1] was taken from the additional e-file itself. Please advise if action taken appropriate and amend if necessary.We regarded it as appropriate action.
All data generated or analyzed during this study are included in this article and its supplementary information files.
NK:
Natural Killer
G6PDH:
Glucose-6-phosphate dehydrogenase
PFK:
Phosphofructokinase
PPP:
PDH:
Pyruvate dehydrogenase
LDH:
TCA:
Tricarboxylic acid
OCR:
Oxygen consumption rate
ECAR:
Extracellular acidification rate
IL:
GMP:
CAR:
Chimeric antigen receptor
ATP:
Adenosine triphosphate
OXPHOS:
FMN:
Flavin mononucleotide
NAD:
NADP:
Nicotinamide adenine dinucleotide phosphate
GSH:
RSM–CCD:
Response surface methodology–central composite design
CCK8:
Cell counting kit-8
2-Dimensional
CoA:
Coenzyme A
Assmann N, O'Brien KL, Donnelly RP, Dyck L, Zaiatz-Bittencourt V, Loftus RM, Heinrich P, Oefner PJ, Lynch L, Gardiner CM (2017) Srebp-controlled glucose metabolism is essential for NK cell functional responses. Nat Immunol 18:1197–1206. https://doi.org/10.1038/ni.3838
Bachanova V, McKenna DH, Luo X, Defor TE, Cooley S, Warlick E, Weisdorf DJ, Brachya G, Peled T, Miller JS (2019) First-in-human phase I study of nicotinamide-expanded related donor natural killer cells for the treatment of relapsed/refractory non-hodgkin lymphoma and multiple myeloma. Biol Blood Marrow Transplant 25:S175–S176. https://doi.org/10.1016/j.bbmt.2018.12.317
Bizukojc M, Pawlowska B, Ledakowicz S (2007) Supplementation of the cultivation media with B-group vitamins enhances lovastatin biosynthesis by Aspergillus terreus. J Biotechnol 127:258–268. https://doi.org/10.1016/j.jbiotec.2006.06.017
Burton LE, Wells WW (1976) Myo-inositol metabolism during lactation and development in the rat. The prevention of lactation-induced fatty liver by dietary myo-inositol. J Nutr 106:1617–1628. https://doi.org/10.1093/jn/106.11.1617
Cerwenka A, Lanier LL (2016) Natural killer cell memory in infection, inflammation and cancer. Nat Rev Immunol 16:112–123. https://doi.org/10.1038/nri.2015.9
Chiossone L, Dumas P-Y, Vienne M, Vivier E (2018) Natural killer cells and other innate lymphoid cells in cancer. Nat Rev Immunol 18:671–688. https://doi.org/10.1038/s41577-018-0061-z
Chrobok M, Dahlberg CI, Sayitoglu EC, Beljanski V, Nahi H, Gilljam M, Stellan B, Sutlu T, Duru AD, Alici E (2019) Functional assessment for clinical use of serum-free adapted NK-92 cells. Cancers 11:69. https://doi.org/10.3390/cancers11010069
Article CAS PubMed Central Google Scholar
Colazingari S, Fiorenza MT, Carlomagno G, Najjar R, Bevilacqua A (2014) Improvement of mouse embryo quality by myo-inositol supplementation of IVF media. J Assist Reprod Genet 31:463–469. https://doi.org/10.1007/s10815-014-0188-1
Depeint F, Bruce WR, Shangari N, Mehta R, O'Brien PJ (2006) Mitochondrial function and toxicity: role of the B vitamin family on mitochondrial energy metabolism. Chem Biol Interact 163:94–112. https://doi.org/10.1016/j.cbi.2006.04.014
Gardiner CM (2019) NK cell metabolism. J Leukoc Biol 105:1235–1242. https://doi.org/10.1002/JLB.MR0718-260R
Gong J-H, Maki G, Klingemann HG (1994) Characterization of a human cell line (NK-92) with phenotypical and functional characteristics of activated natural killer cells. Leukemia 8:652–658
Greiner EF, Guppy M, Brand K (1994) Glucose is essential for proliferation and the glycolytic enzyme induction that provokes a transition to glycolytic energy production. J Biol Chem 269:31484–31490. https://doi.org/10.1016/S0021-9258(18)31720-4
Grudzien M, Rapak A (2018) Effect of natural compounds on NK cell activation. J Immunol Res. https://doi.org/10.1155/2018/4868417
Guillerey C, Huntington ND, Smyth MJ (2016) Targeting natural killer cells in cancer immunotherapy. Nat Immunol 17:1025–1036. https://doi.org/10.1038/ni.3518
Han H-S, Kang G, Kim JS, Choi BH, Koo S-H (2016) Regulation of glucose metabolism from a liver-centric perspective. Exp Mol Med 48:e218–e218. https://doi.org/10.1038/emm.2015.122
Hodge DL, Schill WB, Wang JM, Blanca I, Reynolds DA, Ortaldo JR, Young HA (2002) IL-2 and IL-12 alter NK cell responsiveness to IFN-γ-inducible protein 10 by down-regulating CXCR3 expression. J Immunol 168:6090–6098. https://doi.org/10.4049/jimmunol.168.12.6090
Horwitz ME, Chao NJ, Rizzieri DA, Long GD, Sullivan KM, Gasparetto C, Chute JP, Morris A, McDonald C, Waters-Pick B (2014) Umbilical cord blood expansion with nicotinamide provides long-term multilineage engraftment. J Clin Investig 124:3121–3128. https://doi.org/10.1172/JCI74556
Inoue K, Ogonuki N, Yamamoto Y, Noguchi Y, Takeiri S, Nakata K, Miki H, Kurome M, Nagashima H, Ogura A (2002) Improved postimplantation development of rabbit nuclear transfer embryos by activation with inositol 1, 4, 5-trisphosphate. Cloning Stem Cells 4:311–317. https://doi.org/10.1089/153623002321024989
Jung J, Zeng H, Horng T (2019) Metabolism as a guiding force for immunity. Nat Cell Biol 21:85–93. https://doi.org/10.1038/s41556-018-0217-x
Khorraminejad-Shirazi M, Sani M, Talaei-Khozani T, Dorvash M, Mirzaei M, Faghihi MA, Monabati A, Attar A (2020) AICAR and nicotinamide treatment synergistically augment the proliferation and attenuate senescence-associated changes in mesenchymal stromal cells. Stem Cell Res Ther 11:1–17. https://doi.org/10.1186/s13287-020-1565-6
Klingemann H, Boissel L, Toneguzzo F (2016) Natural killer cells for immunotherapy–advantages of the NK-92 cell line over blood NK cells. Front Immunol. https://doi.org/10.3389/fimmu.2016.00091
MacIver NJ, Jacobs SR, Wieman HL, Wofford JA, Coloff JL, Rathmell JC (2008) Glucose metabolism in lymphocytes is a regulated process with significant effects on immune cell function and survival. J Leukoc Biol 84:949–957. https://doi.org/10.1189/jlb.0108024
Mazur-Bialy A, Pochec E, Plytycz B (2015) Immunomodulatory effect of riboflavin deficiency and enrichment-reversible pathological response versus silencing of inflammatory activation. J Physiol Pharmacol 66:793–802
Mouchiroud L, Houtkooper RH, Moullan N, Katsyuba E, Ryu D, Cantó C, Mottis A, Jo Y-S, Viswanathan M, Schoonjans K (2013) The NAD+/sirtuin pathway modulates longevity through activation of mitochondrial UPR and FOXO signaling. Cell 154:430–441. https://doi.org/10.1016/j.cell.2013.06.016
O'Brien KL, Finlay DK (2019) Immunometabolism and natural killer cell responses. Nat Rev Immunol 19:282–290. https://doi.org/10.1038/s41577-019-0139-2
Pearce EL, Poffenberger MC, Chang C-H, Jones RG (2013) Fueling immunity: insights into metabolism and lymphocyte function. Science 342:1242454. https://doi.org/10.1126/science.1242454
Pockley AG, Vaupel P, Multhoff G (2020) NK cell-based therapeutics for lung cancer. Expert Opin Biol Ther 20:23–33. https://doi.org/10.1080/14712598.2020.1688298
Poznanski SM, Barra NG, Ashkar AA, Schertzer JD (2018) Immunometabolism of T cells and NK cells: metabolic control of effector and regulatory function. Inflamm Res 67:813–828. https://doi.org/10.1007/s00011-018-1174-3
Romanski A, Uherek C, Bug G, Seifried E, Klingemann H, Wels WS, Ottmann OG, Tonn T (2016) CD 19-CAR engineered NK-92 cells are sufficient to overcome NK cell resistance in B-cell malignancies. J Cell Mol Med 20:1287–1294. https://doi.org/10.1111/jcmm.12810
Schnellbaecher A, Binder D, Bellmaine S, Zimmer A (2019) Vitamins in cell culture media: Stability and stabilization strategies. Biotechnol Bioeng 116:1537–1555. https://doi.org/10.1002/bit.26942
Suck G, Odendahl M, Nowakowska P, Seidl C, Wels WS, Klingemann HG, Tonn T (2016) NK-92: an 'off-the-shelf therapeutic'for adoptive natural killer cell-based cancer immunotherapy. Cancer Immunol Immunother 65:485–492. https://doi.org/10.1007/s00262-015-1761-x
Suwannasom N, Kao I, Pruß A, Georgieva R, Bäumler H (2020) Riboflavin: The health benefits of a forgotten natural vitamin. Int J Mol Sci 21:950. https://doi.org/10.3390/ijms21030950
Tam Y, Martinson J, Doligosa K, Klingemann H (2003) Ex vivo expansion of the highly cytotoxic human natural killer-92 cell-line under current good manufacturing practice conditions for clinical adoptive cellular immunotherapy. Cytotherapy 5:259–272. https://doi.org/10.1080/14653240310001523
Tan CL, Chin T, Tan CYR, Rovito HA, Quek LS, Oblong JE, Bellanger S (2019) Nicotinamide metabolism modulates the proliferation/differentiation balance and senescence of human primary keratinocytes. J Investig Dermatol 139(1638–1647):e1633. https://doi.org/10.1016/j.jid.2019.02.005
Tang X, Yang L, Li Z, Nalin AP, Dai H, Xu T, Yin J, You F, Zhu M, Shen W (2018) Erratum: First-in-man clinical trial of CAR NK-92 cells: safety test of CD33-CAR NK-92 cells in patients with relapsed and refractory acute myeloid leukemia. Am J Cancer Res 8:1899–1899
Vivier E, Tomasello E, Baratin M, Walzer T, Ugolini S (2008) Functions of natural killer cells. Nat Immunol 9:503–510. https://doi.org/10.1038/ni1582
Waldhauer I, Steinle A (2008) NK cells and cancer immunosurveillance. Oncogene 27:5932–5943. https://doi.org/10.1038/onc.2008.267
Williams B, Routy B, Wang X-H, Chaboureau A, Viswanathan S, Keating A (2015) NK-92 Therapy is well tolerated, has minimal toxicity and shows efficacy in a phase I trial of patients with relapsed/refractory hematological malignancies relapsing after autologous stem cell transplantation. Blood 126:4297. https://doi.org/10.1182/blood.v126.23.4297.4297
Xin Z, Pu L, Gao W, Wang Y, Wei J, Shi T, Yao Z, Guo C (2017) Riboflavin deficiency induces a significant change in proteomic profiles in HepG2 cells. Sci Rep 7:1–10. https://doi.org/10.1038/srep45861
Zhang H, Ryu D, Wu Y, Gariani K, Wang X, Luan P, D'Amico D, Ropelle ER, Lutolf MP, Aebersold R (2016) NAD+ repletion improves mitochondrial and stem cell function and enhances life span in mice. Science 352:1436–1443. https://doi.org/10.1126/science.aaf2693
Zhang W, Cai H, Tan W-S (2018) Dynamic suspension culture improves ex vivo expansion of cytokine-induced killer cells by upregulating cell activation and glucose consumption rate. J Biotechnol 287:8–17. https://doi.org/10.1016/j.jbiotec.2018.09.010
Zhang W, Huang H, Cai H, Tan WS (2019) Enhanced metabolic activities for ATP production and elevated metabolic flux via pentose phosphate pathway contribute for better CIK cells expansion. Cell Prolif 52:e12594. https://doi.org/10.1111/cpr.12594
Zhou M, Crawford Y, Ng D, Tung J, Pynn AF, Meier A, Yuk IH, Vijayasankaran N, Leach K, Joly J (2011) Decreasing lactate level and increasing antibody production in Chinese Hamster Ovary cells (CHO) by reducing the expression of lactate dehydrogenase and pyruvate dehydrogenase kinases. J Biotechnol 153:27–34. https://doi.org/10.1016/j.jbiotec.2011.03.003
This work was supported by the Science and Technology Innovation Action Plan of Basic Research, Shanghai, China (15JC1401402).
State Key Laboratory of Bioreactor Engineering, East China University of Science and Technology, 130 Meilong Road, P. O. Box 309#, Shanghai, 200237, People's Republic of China
Yan Fu, Yuying Chen, Zhepei Xie, Huimin Huang, Wen-Song Tan & Haibo Cai
Yan Fu
Yuying Chen
Zhepei Xie
Huimin Huang
Wen-Song Tan
Haibo Cai
H.C. and Y.F. conceived and designed the study. Y.F. performed the experiments. Y.F., Y.C., Z.X., H.H. and H.C. analysed experiments data and discussed the results. H.C., Y.F. and W.T. wrote the manuscript. All authors read and approved the final manuscript.Author contributions: Journal standard instruction requires the statement All authors read and approved the final manuscript. in the Author contributions section. This was inserted at the end of the paragraph of the said section. Please check if appropriate.We have confirmed that it is appropriate.
Correspondence to Haibo Cai.
All procedures performed in this study were approved by the State Key Laboratory of Bioreactor Engineering, East China University of Science and Technology committee.
All authors have read and approved the manuscript before submitting it to Bioresources and Bioprocessing.
There are no competing interests to report.
40643_2022_578_MOESM1_ESM.docx
Additional file 1:Table S1. ANOVA for response surface quadratic model. Fig. S1. Ex vivo expansion of NK-92 cells for 8 days. (A) Cell viability. (B) Specific growth rate of NK-92 cells. (C) Expansion fold of the cells. (*P < 0.05, n = 3). Fig. S2. Time-profiles of glucose consumption and lactate production of NK-92 cells in two kinds of mediums. (A) Glucose concentration. (B) Lactate concentration. (n = 3).
Fu, Y., Chen, Y., Xie, Z. et al. Vitamin combination promotes ex vivo expansion of NK-92 cells by reprogramming glucose metabolism. Bioresour. Bioprocess. 9, 87 (2022). https://doi.org/10.1186/s40643-022-00578-4
NK-92 cells
Ex vivo expansion
Response surface methodology
Vitamin concentration optimization
|
CommonCrawl
|
Home/Mathematics/The time between calls to a plumbing supply business is exponentially distributed with a mean time between calls of 15 minutes. (a) What is the probability that there are no calls within a 30-minute interval? (b) What is the probability that at least one call arrives within a 10-minute interval? (c) What is the probability that the first call arrives within 5 and 10 minutes after opening? (d) Determine the length of an interval of time such that the probability of at least one call in the interval is 0.90.
Question Solved1 Answer The time between calls to a plumbing supply business is exponentially distributed with a mean time between calls of 15 minutes. (a) What is the probability that there are no calls within a 30-minute interval? (b) What is the probability that at least one call arrives within a 10-minute interval? (c) What is the probability that the first call arrives within 5 and 10 minutes after opening? (d) Determine the length of an interval of time such that the probability of at least one call in the interval is 0.90.
WBARMH The Asker · Other Mathematics
The time between calls to a plumbing supply business
is exponentially distributed with a mean time between calls of
15 minutes.
(a) What is the probability that there are no calls within a
30-minute interval?
(b) What is the probability that at least one call arrives within
a 10-minute interval?
(c) What is the probability that the first call arrives within 5
and 10 minutes after opening?
(d) Determine the length of an interval of time such that the
probability of at least one call in the interval is 0.90.
DIRRHT The First Answerer
(a)From the given information, the time between calls to a plumbing supply business is exponentially distributed with a mean 15 minutes. So, here X∼Exp phi(lambda)Mean of the exponential distribution is given by,(1)/(lambda)=15=>lambda=(1)/(15)Then the corresponding probability distribution function is given by,f(x)=(1)/(15)e^(-(1)/(x^(x)))Compute the probability that there are no calls within a 30-minute interval. That is, P(X > 30){:[P(X > 30)=int_(30)^(oo)f(x)dx],[=int_(30)^(oo)(1)/(15)e^(-(1)/(15))dx],[=(1)/(15)int_(30)^(oo)e^((1)/(1^(**)))dx],[=(1)/(15)[(e^(-(x)/(5)))/((-1)/(15))]_(50)^(oo)],[=e^(-2)],[=0.1353]:}(b)Compute the probability that there are no calls within a 10-minute interval. That is, P(X > 10){:[P(X > 10)=int_(10)^(oo)f(x)dx],[=int_(10)^(oo)(1)/(15)e^(-(1)/(15)x)dx],[=(1)/(15)int_(10)^(oo)e^((1)/(5^(x)))dx],[=(1)/(15)[(e^(-(x)/(5)))/((-1)/(15))_(40):}],[=e^(-25)],[=0.5134]:}Now, th ... See the full answer
Land Inc. purchases laptops for $430.00 each and sells them at a profit of $180.00, with overhead expenses of $35.00 per unit. Calculate the following, rounding your answer to two decimal places.a. selling price of her laptop.b. amount of markup per laptop.c. rate of markup on the cost per laptop.
Air at 80 degrees Fahrenheit and 14.7 psia with a mass flow rate of 0.6 lbm/sec enters a compressor and leaves at 70 psia and 430 degrees Fahrenheit through an exit area of 0.007 ft^2. The measured heat interaction rate with the 80 degree Fahrenheit factory room is 3 Btu/sec. Use constant Cp = 0.24 Btu/lbm oF a) Calculate the compressor work rate. b) Compute the entropy generation rate for the overall process.
1 Required information A 20-mm-diameter rod made of the same material as rods AC and AD in the truss shown was tested to failure and an ultimate load of 105 kN was recorded. Use a factor of safety of 3.0. Part 1 of 2 1,5 m D B 3 m -3 m 48 kN 48 kN Determine the required diameter of rod AC. The required diameter of rod AC is mm.
Required information A 20-mm-diameter rod made of the same material as rods AC and AD in the truss shown was tested to failure and an ultimate load of 125 kN was recorded. Use a factor of safety of 3.0. 1.5 m 3 m 48 KN 48 KN Determine the required diameter of rod AC. The required diameter of rod AC is mm
thermo 2 help plz Air, at \( 80^{\circ} \mathrm{F} \) and \( 14.7 \) psia with a mass flow rate of \( 0.6 \mathrm{Ibrm} / \mathrm{sec} \) enters a compressor and leaves at 70 psia and \( 430^{\circ} \) F through an exit of area 0007 . th \( ^{2} \). The measured heat internction rate with the \( 80^{\circ} \mathrm{F} \) factory room is \( ₹ \mathrm{Bth} / \mathrm{sec} \). Use constant \( \mathrm{C}_{p}=0.24 \mathrm{Btw}_{\mathrm{f}} \mathrm{b}_{\mathrm{m}}{ }^{\circ \mathrm{F}} \) a. Calculate the compressor work rate, W. b. Compute the entropy generation rate for the overall process.
Required information Problem 15-4A (Algo) Preparing job cost sheets, recording costs, preparing inventory ledger accounts LO P1, P2, P3 [The following information applies to the questions displayed below] Watercraft's predetermined overhead rate is 200% of direct labor. Information on the company's production activities during May follows. a. Purchased raw Required information Problem 15-4A (Algo) Preparing job cost sheets, recording costs, preparing inventory ledger accounts LO P1, P2, P3 [The following information applies to the questions displayed below] Watercraft's predetermined overhead rate is 200% of direct labor. Information on the company's production activities during May follows. a. Purchased raw materials on credit, $220,000 b. Materials requisitions record use of the following materials for the month. Job 136 Job 137 Job 138 Job 139 Job 140 Total direct materials Indirect materials Total materials requisitions Job 136 Job 137 30h 138 300 119 $ 49,500 32,500 19,400 c. Time tickets record use of the following labor for the month. These wages were paid in cash Job 140 Total direct labor Indirect labor Total labor cost 23,200 6,400 131,000 20,500 $ 151,500 $ 12,300 10,500 37,900 39,200 4,000 101,900 14,500 $ 128,400 Journal entry worksheet < 3 Record raw material purchases on credit. Note: Enter debits before credits. Transaction Record entry 4 Raw materials inventory Accounts payable General Journal 5 Clear entry 6 7 8 Debit 220,000 ***** Credit 11 220,000 View general journal >
|
CommonCrawl
|
How can one feed a small black hole against 1 TW of Hawking radiation's radiation pressure?
This question was inspired by an old post here on meta, about a previous question of the same type that had been removed because it "had too much science fiction", in particular the author added a lot of references to "energy beings" and other (admittely soft! space opera!) "sci-fi" type stuff along with it, although the precise extent as to which is not clear because the original is now gone:
Off topic review: How to feed a small black hole against 1 TW Hawking radiation pressure?
But I thought it was an interesting question in its own right when considered on its own merit, without reference to the "sci-fi" type fluff material. And when I saw it, and note the question is still absent, I wanted to reopen the case, but with the objectionable parts removed or kept to only the minimum necessary to make things clear.
From what I gather of those posts, since the original has been deleted, the question would be: is there some physically possible or engineering-feasible way one could feed a black hole that was of a small enough mass it is emitting 1 TW of Hawking radiation? That is, to get matter in despite the outgoing radiation trying to blow it away?
For the relevant mathematics of this question, we have the black hole lifetime as
$$t_0 = \frac{5120 \pi G^2 M^3}{\hbar c^4}$$
and the radiated power as
$$P = \frac{\hbar c^6}{15360 \pi G^2 M^2}$$
as well as the horizon area:
$$A_H = \frac{16 \pi G^2 M^2}{c^4}$$ For any given power $P$, the relevant mass is seen to be
$$M = \sqrt{\frac{\hbar c^6}{15360 \pi G^2 P}}$$
Taking $P = 10^{12}\ \mathrm{W}$ gives the mass as $1.9 \times 10^{10}$ kg, the lifetime to die completely is 560 Ts (~18 million years), so it at least looks like it won't be vanishing too soon or increasing in power too rapidly. The horizon area is about $10^{-32}\ \mathrm{m^2}$, so the radiation intensity $I$ is $10^{44}\ \mathrm{W/m^2}$ and the radiation pressure (by $p = \frac{I}{c}$) is about $3 \times 10^{35}\ \mathrm{Pa}$. (Note this doesn't take into account near-horizon effects that may change the situation and my GR ain't good enough to work that all out, but you could use this at a few horizon widths for a rough order-of-magnitude estimate. I'd think it will definitely be more than $10^{30}\ \mathrm{Pa}$ of pressure, to really low ball even the order of magnitude, you'll be fighting.).
The question is given the very small throat size and very huge pressure, is there any physically possible way you could force enough matter into it to make its mass grow, and given the "sci-fi" like context of the original question, even just to sustain it as a power source indefinitely radiating 1 TW (which equates to forcing about 11 mg of mass per second into the hole)? This horizon radius is even smaller than an atom. FWIW the concept of a black hole as power source has been suggested by legitimate researchers:
https://en.wikipedia.org/wiki/Black_hole_starship#cite_note-cranewestmoreland2009-2
Or would the black hole have to just be used "as long as it lasts" - that is, after its creation, the only thing that can be done is to sip the energy until it either gets too hot to handle or it finally blows out? (Note that this doesn't necessarily make it useless for running a spacecraft - one could, for example, collect the power to form a beam that would be directed to propel the craft in some form of beamed propulsion perhaps with some kind of Dyson sphere-like construct encasing it, instead of carrying the BH along with it which, would arguably be a better use since the extreme BH mass would not be there to slow/dog it down. Actually if anything a lighter BH might be more suitable for this purpose so you can get much higher output power, but then comes the question of whether it'll die/get too hot before the ship comes up to cruising speed. A 1 PW BH has a lifetime of 18 Gs (about 500 years), which might or might not be enough to get a decent payload up to speed. But then this also makes the feeding problem all the more urgent, and far worse, since the rad pressure goes up rapidly.)
black-holes interstellar-travel
The_Sympathizer
asked Jan 9 '18 at 8:06
The_SympathizerThe_Sympathizer
This started as a comment but it got too big,
Note, that the temperature of the black-hole $$ T_{\mathrm {H} }=\frac {\hbar c^{3}}{8\pi GM} $$ (written in energetic units) with mass $1.9 \times 10^{10}\,\text{kg}$ would be about $0.56\, \text{GeV}$, which would mean that the Hawking radiation would include a lot of pions, muons, protons and corresponding antiparticles.
But this also implies that the equation for the radiated power (the one with $15360^{-1}$) is wrong. It is derived from Stefan–Boltzmann power law, which would be wrong for temperatures at which a lot of massive particles would be radiated. There would be additional channels for radiations of leptons, pions, with each contributing power comparable to EM radiation. So if we define 1 TW as total power of Hawking radiation in channels 'accessible' for the engineering applications (EM radiation, leptons, mesons) and ignoring neutrinos and gravitons that are not readily usable, we would need to correspondingly increase the mass of black hole by some unknown factor. This would lower the temperature and increase horizon area. So from the 'engineering' standpoint the problem becomes somewhat simpler.
A.V.S.A.V.S.
Not the answer you're looking for? Browse other questions tagged black-holes interstellar-travel or ask your own question.
On black holes, Hawking radiation and gravitational atoms
Multipolar expansion profile of Hawking radiation on Kerr black holes
Is the Hawking radiation of a charged black hole thermal?
Black hole complementarity - absorption of Hawking radiation
Won't Hawking radiation be sucked back into the black hole?
Can you ride Hawking radiation away from a black hole?
|
CommonCrawl
|
Recent questions tagged q1-4
Write the negation of the statement " There exists a number $x$ such that 0<$x$<1."
mathematical-reasoning
bookproblem
exercise-misc14
q1-4
asked Aug 4, 2014 by rvidyagovindarajan_1
Identify the connecting word and and write the components of the statement "$x=2\:\:and\:\:x=3$ are the roots of the equation $3x^2-x-10=0$."
exercise14-3
asked Jul 16, 2014 by rvidyagovindarajan_1
Write the negation of "The number $2$ is greater than the number $7$."
The square of a number is an even number. Is this sentence a statement? Give reason.
asked Jul 3, 2014 by rvidyagovindarajan_1
Find the distance between the the points $(2,-1,3)$ and $(-2,1,3)$.
introduction-to-3d-geometry
asked Apr 16, 2014 by rvidyagovindarajan_1
Use the truth table to establish which of the following statements are tautologies and which are contradictions.
tnstate
asked Sep 13, 2013 by sreemathi.v
Find the maximum and minimum values, if any, of the following functions given by?$(iv)\;f(x)=x^3+1$
Using differentials, find the approximate value of each of the following up to 3 places of decimal. $(0.009)^{\Large\frac{1}{3}}$
Evaluate the limit for the following if exists. $\;\lim\limits_{x \to 2}\large\frac{x^{n}-2^{n}}{x-2}$
sec-1
asked May 6, 2013 by poojasapani_1
Find the intervals of concavity and the points of inflection of the following functions: $f(x)=x^{4}-6x^{2}$
exercise5-11
modelpaper
Find the critical numbers and stationary points of each of the following functions.$\;f(x)=\large\frac{x+1}{x^{2}+x+1}$
Prove the following inequalities: $\log(1+ x)< x $ for all $x>0$
Obtain the Maclaurin's series expansion for:$\;\tan x,- \large\frac{\pi}{2} \lt \normalsize x \lt \large\frac{\pi}{2}$
Verify Lagrange's theorem for the following function;\[\]$f(x)=x^{\large\frac{2}{3}}[-2,2]$
Verify Rolle's theorem for the following function; $f(x)=4x^{3}-9x;-\large\frac{3}{2}\leq x\leq \frac{3}{2}$.
Find the equation of the tangent and normal to the curves.$ \;y=\large\frac{1+\sin x}{\cos x}$ at $\;x=\large\frac{\pi}{4}$
A missile fired from ground level rises $x$ metres vertically upwards in $t$ seconds and $x=100t-\large\frac{25}{2}t^{2}$ find the velocity with which the missile strikes the ground
asked Apr 30, 2013 by poojasapani_1
Varify $\large\frac{\partial^{2} y}{\partial x\partial y}=\frac{\partial^{2} y}{\partial y\partial x}$ for the following function;$\;u=\tan^{-1}\large(\frac{x}{y})$
Find the differential of the functions. $y$=$\large\frac{x-2}{2x+3}$
If $X$ a normal variate with mean $80$ and standard deviation $10$ compute the following probabilites by standardizing. $P(70$$<$X$)$
Find the angle between the line $\large\frac{x-6}{3}=\frac{y-7}{2}=\frac{z-7}{-2}$ and the plane $x+y+2z=0.$
If $\sin(xy)+\cos(xy)=1$ and $\tan(xy)\neq 1$,then show that $\large\frac{dy}{dx}=-\frac{y}{x}$
If $y=e^{\sin x^2}$,find $\large\frac{dy}{dx}$.
If $e^{x+y}=xy,$show that $\large\frac{dy}{dx}\normalsize=\large\frac{y(1-x)}{x(y-1)}$.
If $y=\sqrt{\large\frac{1-\cos x}{1+\cos x}}$,find $\large\frac{dy}{dx}$.
Find the order and degree of the following differential equation: $\large\frac{d^{2}y}{dx^{2}}+x$=$\sqrt{y+\large\frac{dy}{dx}}$
Find the derivative of $\sin x^2$ with respect to $x^3$.
Evaluate : $\lim_{x\to \large\frac{\pi}{2}}[x\tan x-\large\frac{\pi}{2}\normalsize\ sec x]$
Find the equation of the tangent and normal to the ellipse $2x^{2}+3y^{2}=6 $ at $ (\sqrt{3} , 0 )$
Find the equation of the hyperbola if centre : $(1 , -2 )$ length of the transverse axis is $8; e=\large\frac{5}{4}$ and the transverse axis is parallel to x- axis.
Two regression lines are represented by $2x+3y-10=0$ and $4x+y-5=0$.Find the line of regression of y on x.
Find the equation of the ellipse if the centre is $(3 , -4 ), $one of the foci is $(3+\sqrt{3},-4)$ and $e=\large\frac{\sqrt{3}}{2}$
From the equations of the two regression lines,4x+3y+7=0 and 3x+4y+8=0,find :
Find the equation of parabola if : Vertex $(1 , 4 );$ focus :$(-2 , 4 ).$
asked Apr 9, 2013 by poojasapani_1
Express the following in the standard form $a + ib$: $\large\frac{i^{4}+i^{9}+i^{16}}{3-2i^{8}-i^{10}-i^{15}}$
asked Apr 1, 2013 by geethradh
Examine the consistency of the following system of equation. If it is consistent than solve the same. $x-4y+7z=14\;,3x+8y-2z=13\;,7x-8y+26z=5 $
asked Mar 30, 2013 by poojasapani_1
In the event that $\ast$ is not a binary operation, give justification for this: On $Z^+,$ defined $\;\ast\;by\;a\;\ast\;b=|a-b|.$
asked Mar 13, 2013 by sreemathi.v
Determine whether Relation $R$ is reflexive, symmetric and transitive: Relation $R$ in the set $Z$ of all integers defined as $R=\{(x,y):x-y \;\; \text {is an integer}\}$
asked Mar 8, 2013 by balaji.thirumalai
Let $ A\;=\begin{bmatrix}2 & 4\\3 & 2\end{bmatrix}, B\;=\begin{bmatrix}1 & 3\\-2 & 5\end{bmatrix}, C\;=\begin{bmatrix}-2 & 5\\3 & 4\end{bmatrix}$. Find $ AB\qquad$
veryshort
asked Feb 27, 2013 by balaji.thirumalai
|
CommonCrawl
|
Resources / Application Notes / Lasers / Metrology for Laser Optics
Metrology for Laser Optics http://www.edmundoptics.com/resources/application-notes/lasers/metrology-for-laser-optics/
http://www.edmundoptics.com
Metrology for Laser Optics
This is Sections 7.1, 7.2, 7.3, 7.4, 7.5, and 7.6 of the Laser Optics Resource Guide.
Metrology is crucial for ensuring optical components consistently meet their desired specifications and function safely. This reliability is especially important for systems utilizing high-power lasers or where changes in throughput may cause inadequate system performance. A wide range of metrology is used to measure laser optics including cavity ring down spectroscopy, atomic force microscopy, differential interference contrast microscopy, interferometry, Shack Hartmann wavefront sensors, and spectrophotometers.
Cavity Ring Down Spectroscopy
Cavity ring down spectroscopy (CRDS) is a technique used to determine the composition of gas samples, but for laser optics it is used to make high sensitivity loss measurements of optical coatings. In a CRDS system, a laser pulse is sent into a resonant cavity bounded by two highly reflective mirrors. With each reflection, a small amount of light is lost to absorption, scattering, and transmission while the reflected light continues to oscillate in the resonant cavity. A detector behind the second mirror measures the decrease in intensity of the reflected light (or "ring down"), which is used to calculate the loss of the mirrors (Figure 1). Characterizing the loss of a laser mirror is essential for ensuring a laser system will achieve its desired throughput.
Figure 1: Cavity ring down spectrometers measure the intensity decay rate in the resonant cavity, allowing for higher accuracy measurements than techniques that just measure absolute intensity values
The intensity of the laser pulse inside the cavity (I) is described by:
(1)$$ I = I_{0} e^{ \frac{-T \, t \, c}{2L} } $$
I0 is the initial intensity of the laser pulse, Ƭ is the total cavity mirror loss from transmission, absorption, and scattering, t is time, c is the speed of light, and L is the length of the cavity.
The value determined in CRDS is the loss of the entire cavity. Therefore, multiple tests are required in order to determine the loss of one mirror. Two reference mirrors are used to make an initial measurement (A), and then two more measurements are taken: one with the first reference mirror replaced by the mirror being tested (B) and one with the other reference mirror replaced by the test mirror (C). These three measurements are used to determine the loss of the test mirror.
(2)$$ A = M_1 + M_2 $$
(3)$$ B = M_3 + M_2 $$
(4)$$ C = M_1 + M_3 $$
(5)$$ C + B - A = M_1 + M_3 + M_3 + M_2 - M_1 = 2 M_3 $$
(6)$$ M_3 = \frac{C + B - A}{2} $$
M1 and M2 are the loss of the two reference mirrors and M3 is the loss of the test mirror. The loss from air in the cavity is assumed to be negligible. CRDS is an ideal technique for characterizing the performance of reflective laser optics because it is much easier to accurately measure a small amount of loss rather than a large reflectance (Table 1). Transmissive components with anti-reflection coatings can also be tested by inserting them into a resonant cavity and measuring the corresponding increase in loss. CRDS must be performed in a clean environment with meticulous care, as any contamination on the mirrors or to the inside of the cavity will affect the loss measurements.
Table 1: The sensitivity of measuring the reflectance of a mirror directly with an uncertainty of ±0.1% is two orders of magnitude greater than measuring the mirrors loss with an uncertainty of ±10%. This demonstrates that loss measurements for highly reflective mirrors are much more accurate than reflectance measurements
To learn more about CRDS and its benefits for measuring high reflectivity laser mirrors, watch the webinar recording below.
Atomic Force Microscopy
Atomic force microscopy (AFM) is a technique that provides surface topography with atomic resolution (Figure 2). An extremely small and sharp tip scans across a sample's surface, resulting in a 3D reconstruction of the surface. The tip is attached to a rectangular or triangular cantilever that connects to the rest of the microscope head. The cantilever's motion is controlled by piezoelectric ceramics, which ensures 3D positioning of the cantilever with subnanometer resolution.1
In laser optics, AFM is primary used to calculate an optical component's surface roughness, which may significantly affect the performance of a laser optical system as it is often the main source of scattering. AFM can provide a 3D map of a surface with a precision of a few Angstrom's.2
Figure 2: Topography map of a grating captured using atomic force microscopy
The tip is either scanned across the sample while in constant contact with the system, known as contact mode, or in intermittent contact with the surface, known as tapping mode. In tapping mode, the cantilever oscillates at its resonant frequency, with the tip only contacting the surface for a short time during the oscillating cycle. Contact mode is less complicated than tapping mode and provides a more accurate reconstruction of the surface. However, the possibility of damaging the surface during scanning is higher and the tip wears out faster, leading to a shorter lifetime of the tip. In both modes, a laser is reflected off the top of the cantilever onto a detector. Changes in the height of the sample surface deflect the cantilever and change the position of the laser on the detector, generating an accurate height map of the surface (Figure 3).
Figure 3: Schematic of an atomic force microscope operating in tapping mode
The shape and composition of the tip play a key role in the spatial resolution of AFM and should be chosen according to the specimen requiring a scan. The smaller and sharper the tip, the higher the lateral resolution. However, small tips have longer scanning times and a higher cost than larger tips.
Control of the distance between the tip and the surface determines the vertical resolution of an AFM system. Mechanical and electrical noise limit the vertical resolution as surface features smaller than the noise level cannot be resolved.3 The relative position between the tip and the sample is also sensitive to the expansion or contraction of AFM components as a result of thermal variations.
AFM is a time-consuming metrology technique and is mainly used for process validation and monitoring, where a small fraction of a sample surface on the order of 100μm x 100μm is measured to provide a statistically significant representation of its manufacturing process as a whole.
Differential Interference Contrast Microscopy
Differential interference contrast (DIC) microscopy is used for highly sensitive defect detection in transmissive materials, particularly for identifying laser damage in optical coatings and surfaces (Figure 4). It is difficult to observe these features using traditional brightfield microscopy because the sample is transmissive, but DIC microscopy improves contrast by converting gradients in the optical path length from variations in refractive index, surface slope, or thickness into intensity differences at the image plane. Slopes, valleys and surface discontinuities are imaged with improved contrast to reveal the profile of the surface. DIC images give the appearance of a 3D relief corresponding to the variation of optical path length of the sample. However, this appearance of 3D relief should not be interpreted as the actual 3D topography of the sample.
Figure 4: Image of laser induced damage captured using DIC microscopy
DIC microscopy uses polarizers and a birefringent Wollaston or Nomarski prism to separate a light source into two orthogonally polarized rays (Figure 5). An objective lens focuses the two components onto the sample surface displaced by a distance equal to the resolution limit of the microscope. After being collimated by a condenser lens, the two components are then recombined using another Wollaston prism. The combined components then pass through a second polarizer known as an analyzer, which is oriented perpendicular to the first polarizer. The interference from the difference in the two component's optical path length leads to visible brightness variations.
Figure 5: Typical DIC microscopy setup where a Wollaston prism splits the input beam into 2 separately polarized states
One limitation of DIC microscopy is increased cost compared to other microscopy techniques. The Wollaston prisms used to separate and recombine the different polarization states are more expensive than the components needed for microscopy techniques such as phase contrast or Hoffman modulation contrast microscopy.4
Interferometers utilize interference to measure small displacements, surface irregularities, and changes in refractive index. They can measure surface irregularities <λ/20 and are used to qualify flats, spherical lenses, aspheric lenses, and other optical components.
Interference occurs when multiple waves of light are superimposed and added together to form a new pattern. In order for interference to occur, the multiple waves of light must be coherent in phase and have non-orthogonal polarization states.5 If the troughs, or low points, of the waves align they cause constructive interference add their intensities, while if the troughs of one wave align with the peaks of the other they will cause destructive interference and cancel each other out (Figure 6).
Figure 6: Illustration of constructive interference (left) and destructive interference (right), which are used in interferometry to determine surface figure
Interferometers use a beamsplitter to split light from a single source into a test beam and a reference beam. The beams are recombined before reaching a photodetector, and any optical path difference between the two paths will create interference. This allows for comparing an optical component in the path of the test beam to a reference in the reference beam (Figure 7). Constructive and destructive interference between the two paths will create a pattern of visible interference fringes. Both reflective and transmissive optical components can be measured by comparing the transmitted or reflected wavefront to a reference.
Figure 7: Sample image from an interferometer showing bright areas where the test and reference beams constructively interfered and dark rings where they destructively interfered (left), as well as the resulting 3D reconstruction of the test optic (right)
There are several common interferometer configurations (Figure 8). Mach–Zehnder interferometers utilize one beamsplitter to separate an input beam into two separate paths. A second beamsplitter recombines the two paths into two outputs, which are sent to photodetectors. Michelson interferometers use a single beamsplitter for splitting and recombining the beams. One variant of Michelson interferometers are Twyman–Green interferometers, which measure optical components with a monochromatic point source as the light source. Fabry–Pérot interferometers allow for multiple trips of light by using two parallel partially transparent mirrors instead of two separated beam paths.
Figure 8: Various common interferometer configurations
Dust particles or imperfections on optical components that make up an interferometer, besides the optic being tested, can lead to optical path differences that may be misconstrued as surface defects on the optic. Interferometry requires precise control of the beam paths, and measurements may also be subject to laser noise and quantum noise.
Shack-Hartmann Wavefront Sensors
A Shack-Hartmann wavefront sensor (SHWFS) measures the transmitted and reflected wavefront error of an optical component or system with high dynamic range and accuracy. The SHWFS has become very popular due to its ease of use, fast response, relatively low cost, and ability to work with incoherent light sources.
The wavefront of an optical wave is a surface over which the wave has a constant phase. Wavefronts are perpendicular to the direction of propagation, therefore collimated light has a planar wavefront and converging or diverging light has a curved wavefront (Figure 9). Aberrations in optical components lead to wavefront errors, or distortions in transmitted or reflected wavefronts. By analyzing transmitted and reflected wavefront error, the aberrations and performance of an optical component can be determined.
Figure 9: Perfectly collimated light has a planar wavefront. Light diverging or converging after a perfect, aberration-free lens will have a spherical wavefront
SHWFS utilizes an array of microlenses, or lenslets, with the same focal length to focus portions of incident light onto a detector. The detector is divided in small sectors, with one sector for each microlens. A perfect planar incident wavefront results in a grid of focused spots with the same separation as the center-to-center spacing of the microlens array. If a distorted wavefront with some amount of wavefront error is incident on a SHWFS, the position of the spots on the detector will change (Figure 10). The deviation, deformation, or loss in intensity of the focal spots determines the local tilt of the wavefront at each of the microlenses. The discrete tilts can be used to recreate the full wavefront.
Figure 10: Any wavefront error present in light entering a SHWFS will lead to a displacement of the focused spot positions on the detector array
One advantage of SHWFS compared to interferometry is the dynamic range is essentially independent of wavelength, offering more flexibility. However, the dynamic range of SHWFS is limited by the detector sector allocated to each microlens. The focal spot of each microlens should cover at least 10 pixels on its respective sector to achieve an accurate reconstruction of the wavefront. The larger the detector area covered by the focal spot, the greater the SHWFS' sensitivity, though this comes with a tradeoff of shorter dynamic range. In general, the focal spot of the microlens should not cover more than half of the designated detector sector; this guarantees a reasonable compromise between sensitivity and dynamic range.6
Increasing the number of microlenses in an array results in an increase in spatial resolution and less averaging of the wavefront slope over the microlens aperture, but there are less pixels allocated to each microlens. Larger microlenses produce a more sensitive and precise measurement for slowly varying wavefronts, but this may not sufficiently sample complex wavefronts and result in an artificial smoothing of the reconstructed wavefront.7
Spectrophotometers measure the transmission and reflectivity of optical components and are essential for characterizing the performance of optical coatings (Figure 11). A typical spectrophotometer consists of a broadband light source, a monochrometer, and a detector (Figure 12). Light from the light source is sent into the monochrometer's entrance slit where it splits into its component wavelengths using a dispersive element such as a diffraction grating or prism. The monochrometer's exit slit blocks all wavelengths except for a narrow band that passes through the slit, and that narrow wavelength band illuminates the test optic. Changing the angle of the diffraction grating or prism changes the wavelengths that pass through the exit slit, allowing the test wavelength band to be finely tuned. Light reflected or transmitted through the test optic is then directed onto a detector, determining the optic's reflectivity or transmission at a given wavelength.
Figure 11: Sample reflectivity spectrum of a TECHSPEC® Excimer Laser Mirror captured using a spectrophotometer
Figure 12: The test wavelength of a spectrophotometer can be finely tuned by adjusting the angle of the diffraction grating or prism in the monochrometer
The light source must be incredibly stable and have adequate intensity across a broad range of wavelengths to prevent false readings. Tungsten halogen lamps are one of the most commonly used light sources for spectrophotometers because of their long lifespan and ability to maintain a constant brightness.8
The smaller the width of the monochrometer's slits, the higher the spectral resolution of the spectrophotometer. However, reducing the width of the slits also reduces the transmitted power and may increase the reading acquisition time and amount of noise.5
A wide variety of detectors are used in spectrophotometers as different detectors are better suited for different wavelength ranges. Photomultiplier tubes (PMTs) and semiconductor photodiodes are common detectors used for ultraviolet, visible, and infrared detection.8 PMTs utilize a photoelectric surface to achieve unmatched sensitivity compared to other detector types. When light is incident on the photoelectric surface, photoelectrons are released and continue to release other secondary electrons, which causes a high gain. The high sensitivity of PMTs is beneficial for low intensity light sources or when high levels of precision are required. Semiconductor photodiodes such as avalanche photodiodes are less expensive alternatives to PMTs; however, they have more noise and a lower sensitivity than PMTs.
While most spectrophotometers are designed for use in the ultraviolet, visible, or infrared spectra, some spectrophotometers operate in more demanding spectral regions such as the extreme ultraviolet (EUV) spectrum, with wavelengths from 10-100nm. EUV spectrophotometers typically use diffraction gratings with extremely small grating spacings to effectively disperse the incident EUV radiation.
Hinterdorfer, Peter, and Yves F Dufrêne. "Detection and Localization of Single Molecular Recognition Events Using Atomic Force Microscopy." Nature Methods, vol. 3, no. 5, 2006, pp. 347–355., doi:10.1038/nmeth871.
Binnig, G., et al. "Atomic Resolution with Atomic Force Microscope." Surface Science, vol. 189-190, 1987, pp. 1–6., doi:10.1016/s0039-6028(87)80407-7.
Dr. Johannes H. Kindt. "AFM enhancing traditional Electron Microscopy Applications." Atomic Force Microscopy Webinars, Bruker, Feb. 2013, www.bruker.com/service/education-training/webinars/afm.html.
Murphey, Douglas B, et al. "DIC Microscope Configuration and Alignment." Olympus, www.olympus-lifescience.com/en/microscope-resource/primer/techniques/dic/dicconfiguration/
Paschotta, Rüdiger. Encyclopedia of Laser Physics and Technology, RP Photonics, October 2017, www.rp-photonics.com/encyclopedia.html.
Forest, Craig R., Claude R. Canizares, Daniel R. Neal, Michael McGuirk, and Mark Lee Schattenburg. "Metrology of thin transparent optics using Shack-Hartmann wavefront sensing." Optical engineering 43, no. 3 (2004): 742-754.
John E. Greivenkamp, Daniel G. Smith, Robert O. Gappinger, Gregory A. Williby, "Optical testing using Shack-Hartmann wavefront sensors," Proc. SPIE 4416, Optical Engineering for Sensing and Nanotechnology (ICOSN 2001), (8 May 2001); doi: 10.1117/12.427063
Wassmer, William. "An Introduction to Optical Spectrometry (Spectrophotometry)." Azooptics.com, https://www.azooptics.com/Article.aspx?ArticleID=753.
Was this content useful to you?
Thank you for rating this content!
|
CommonCrawl
|
(5-2)/(3)+(2-2)/(4) - adding of fractions
(5-2)/(3)+(2-2)/(4) - step by step solution for the given fractions. Adding of fractions, full explanation.
If it's not what You are looking for just enter simple or very complicated fractions into the fields and get free step by step solution. Remember to put brackets in correct places to get proper solution.
+ - * /
fill out with example data
Solve the problem
Solution for the given fractions
$ \frac{(5-2)}{3 }+ \frac{(2-2)}{4 }=? $
The common denominator of the two fractions is: 12
$ \frac{(5-2)}{3 }= \frac{(4*(5-2))}{(3*4)} =\frac{ 12}{12} $
$ \frac{(2-2)}{4 }= \frac{(3*(2-2))}{(3*4)} =\frac{ 0}{12} $
Fractions adjusted to a common denominator
$ \frac{(5-2)}{3 }+ \frac{(2-2)}{4 }=\frac{ 12}{12 }+\frac{ 0}{12} $
$ \frac{12}{12 }+\frac{ 0}{12 }= \frac{(0+12)}{12} $
$ \frac{(0+12)}{12 }=\frac{ 12}{12} $
$ \frac{12}{12 }= 1 $
see mathematical notation
| (x-2)/(3)+(x-2)/(4) - adding of fractions | | (7)/(1/4)*(1)/(18) - multiplication of fractions | | (13)/(56)+(5)/(7) - add fractions | | (2)/(15)-(1)/(20) - subtraction of fractions | | (35)/(36)*(6)/(7) - multiplying of fractions | | (150)/(45)+(180)/(45) - adding of fractions | | (150)/(45)*(180)/(45) - multiplication of fractions | | (2)/(5)+(1)/(9) - adding of fractions | | (10)/(81)+(25)/(72) - adding of fractions | | (2a)/(15)-(1)/(3) - subtraction of fractions | | (7)/(9)-(14)/(45) - subtraction of fractions | | (12)/(15)/(6)/(5) - dividing of fractions | | (2)/(7)/(10)/(3) - dividing of fractions | | (9)/(1)*(1)/(5) - multiplying of fractions | | (12)/(7)*(9)/(8) - multiply fractions | | (1)/(4)*(2)/(1) - multiplication of fractions | | (4)/(1)*(1)/(16) - multiplication of fractions | | (1)/(16)*(1)/(16) - multiplication of fractions | | (1)/(3)+(7a)/(8) - adding of fractions | | (16)/(1)*(25)/(36) - multiplication of fractions | | (12)/(1)*(4)/(3) - multiplication of fractions | | (12)/(1)+(4)/(3) - addition of fractions | | (12)/(1)*(25)/(36) - multiplication of fractions | | (15(2-x))/(1)+(13(3-x))/(1) - adding of fractions | | (7)/(15)+(13)/(18) - add fractions | | (4)/(9)+(3)/(8) - addition of fractions | | (11)/(15)*(3)/(22) - multiplication of fractions | | (9)/(14)+(10)/(21) - adding of fractions | | (3)/(4)+(1)/(9) - adding of fractions | | (7)/(30)+(5)/(18) - add fractions | | (1)/(10)-(2)/(5) - subtract fractions | | (10)/(9)*(3)/(2) - multiplying of fractions | | (4)/(49)+(1)/(7) - adding of fractions | | (5)/(3)*(6)/(7) - multiplication of fractions | | (15)/(8)+(7)/(8) - add fractions | | (4)/(3)*(2)/(5) - multiplying of fractions | | (4)/(7)*(11)/(6) - multiplying of fractions | | (5)/(21)*(7)/(15) - multiply fractions | | (3)/(14)+(10)/(7) - adding of fractions | | (3)/(14)+(7)/(10) - add fractions | | (4)/(3)*(4)/(5) - multiply fractions | | (9)/((7)x)-(1)/(x) - subtract fractions | | (3x+14)/(5)+(x+54)/(7) - add fractions | | (10)/(1)/(9)/(5) - dividing of fractions | | (5)/(4)/(1)/(8) - dividing of fractions | | (9)/(7x)-(1)/(x) - subtraction of fractions | | (x)/(5)+(2)/(7) - addition of fractions | | (4)/(10)/(4)/(4) - divide fractions | | (3)/(4)+(1)/(7) - adding of fractions | | (75)/(100)/(75)/(75) - divide fractions | | (4)/(10)/(4)/(4) - dividing of fractions | | (4)/(10)*(4)/(4) - multiplication of fractions | | (4n^2+15)/(7m)-(2n^2+4)/(7m) - subtraction of fractions | | (-2)/(9)*(5)/(8) - multiplying of fractions | | (9)/(28)*(45)/(7) - multiplication of fractions | | (1)/(7)*(1)/(8) - multiply fractions | | (-10)/(1)/(1)/(9) - dividing of fractions | | (-5)/(6)/(-8)/(10) - dividing of fractions | | (5)/(11)/(9)/(22) - divide fractions | | (-5)/(6)+(-8)/(10) - add fractions | | (1)/(6)*(24)/(3) - multiplying of fractions | | (3)/(5)/(7)/(5) - dividing of fractions | | (8)/(3)/(8)/(3) - divide fractions | | (9)/(10)/(4)/(5) - dividing of fractions | | (4)/(5)*(2)/(12) - multiplication of fractions |
|
CommonCrawl
|
Learning seminar on deformation theory
Christian Blohmann, Sylvain Lavau, Joao Nuno Mestre, Joost Nuiten
Thu, 2018-10-04 10:00 - Thu, 2019-02-28 12:00
Program and Abstracts of Learning seminar on deformation theory
The goal of the seminar is to rigorously understand the statement ''Every deformation problem in characteristic zero is controlled by a differential graded Lie algebra". This statement has long been a philosophy/guiding principle when studying deformations of algebraic or geometric structures. By the end of the seminar we aim to understand the statement and proof of its following modern incarnation:
Theorem (Lurie, Pridham)
There is an equivalence of $\infty$-categories between the $\infty$-category of formal moduli problems and the $\infty$-category of dgLa's over a field of characteristic zero.
Everyone is welcome, whether to give a talk or simply attend. If you think you'd like to give a talk please come to the first meeting or get in touch with one of the organizers.
In the first part we will see some deformation problems that naturally give rise to dgLa's, and that can also be encoded in deformation functors (also called formal moduli problems). We will see that the two are related by the Maurer-Cartan equation.
In the second part we will study how to construct a deformation functor out of a dgLa using the Maurer-Cartan equation. Conversely, we will build a dgLa out of a deformation functor. For that, we will need to understand some categorical properties of the $\infty$-category of dgLa's - roughly, that we can describe it in terms of generators and relations. This will be done making use of the Chevalley-Eilenberg complex of a dgLa, so that we can work in differential graded local Artinian rings (dgArt) instead.
In the third part we will see that the construction of a dgLa out of a deformation functor is an equivalence of $\infty$-categories between formal moduli problems and dgLa's. Finally, we will see that the inverse of this equivalence is given by the Maurer-Cartan construction.
Motivation and examples
Deformation Problems and Moduli Problems (Notes by Alex)
Deformation functors - Modern approach and the MC equation (Notes by Joao)
The Chevalley-Eilenberg complex $C^*$, and how it is related to the Maurer-Cartan equation (Notes by Joost)
The model category dgLa
Koszul duality I - $C^*$ and its adjoint
Koszul duality II - $C^*$ is an equivalence (sometimes) (Notes by Sylvain)
Proof of the equivalence in the Main Theorem, part 1(Notes by Christian)
Proof of the equivalence in the Main Theorem, part 2 (Notes by David)
The inverse of the equivalence is given by Maurer-Cartan
Jacob Lurie, DAG X: Formal moduli problems
http://www.math.harvard.edu/~lurie/papers/DAG-X.pdf
Bertrand Toën, Problèmes de modules formels
https://perso.math.univ-toulouse.fr/btoen/files/2012/04/Bourbaki-Toen-2016-final1.pdf
Vladimir Hinich, Descent of Deligne groupoids
https://arxiv.org/abs/alg-geom/9606010
Vladimir Hinich, DG coalgebras as formal stacks
https://arxiv.org/abs/math/9812034
Marco Manetti, Deformation theory via differential graded Lie algebras
Damien Calaque, Julien Grivaux, Formal moduli problems and formal derived stacks
https://arxiv.org/abs/1802.09556
Michael Schlessinger, Functors of Artin rings
https://www.ams.org/journals/tran/1968-130-02/S0002-9947-1968-0217093-3/S0002-9947-1968-0217093-3.pdf
Daniel Quillen, Rational homotopy theory
https://www.jstor.org/stable/1970725
W. G. Dwyer, J. Spalinski, Homotopy theories and model categories
hopf.math.purdue.edu/Dwyer-Spalinski/theories.pdf
On classical/motivating examples:
Murray Gerstenhaber - On the Deformation of Rings and Algebras
Marco Manetti - Lectures on deformations of complex manifolds
Albert Nijenhuis, R. W. Richardson - Cohomology and deformations in graded Lie algebras
https://projecteuclid.org/euclid.bams/1183527432
Deligne's letter to Millson (For the philosophy "Every deformation problem in characteristic zero is controlled by a dgla") https://publications.ias.edu/sites/default/files/millson.pdf
|
CommonCrawl
|
Visualising gas temperature and gas pressure
Gas pressure is created when gas molecules collide with the wall of the container creating a force. Gas temperature is a measure of how fast the molecules are moving / vibrating.
However, they both seem to be concerned by "kinetic energy" of the molecules, or in other words, the "collision" they impose on the target. How do we visualize the difference between pressure and temperature of gas? Is there any obvious difference between the two?
The same question in another form:
A gas is hot when the molecules collided with your measuring device.
A gas have high pressure when the molecules collided with your measuring device.
So, what is the difference between the two "collisions" in the physical sense and how do we visualize the difference?
For Simplicity,
How can a Hot gas be Low Pressured? ( They are supposed to have High Kinetic Energy since it is Hot. Therefore should be High Pressured at all times! But no. )
How can a High Pressured gas be Cold? ( They are supposed to collide extremely frequently with the walls of the container. Therefore should be Hot at all times! But no. )
pressure temperature ideal-gas
MrYellow
MrYellowMrYellow
Let us assume we have a function, $f_{s}(\mathbf{x},\mathbf{v},t)$, which defines the number of particles of species $s$ in the following way: $$ dN = f_{s}\left( \mathbf{x}, \mathbf{v}, t \right) \ d^{3}x \ d^{3}v $$ which tells us that $f_{s}(\mathbf{x},\mathbf{v},t)$ is the particle distribution function of species $s$ that defines a probability density in phase space. We can define moments of the distribution function as expectation values of any dynamical function, $g(\mathbf{x},\mathbf{v})$, as: $$ \langle g\left( \mathbf{x}, \mathbf{v} \right) \rangle = \frac{ 1 }{ N } \int d^{3}x \ d^{3}v \ g\left( \mathbf{x}, \mathbf{v} \right) \ f\left( \mathbf{x}, \mathbf{v}, t \right) $$ where $\langle Q \rangle$ is the ensemble average of quantity $Q$.
If we define a set of fluid moments with similar format to that of central moments, then we have: $$ \text{number density [$\# \ (unit \ volume)^{-1}$]: } n_{s} = \int d^{3}v \ f_{s}\left( \mathbf{x}, \mathbf{v}, t \right) \\ \text{average or bulk velocity [$length \ (unit \ time)^{-1}$]: } \mathbf{U}_{s} = \frac{ 1 }{ n_{s} } \int d^{3}v \ \mathbf{v}\ f_{s}\left( \mathbf{x}, \mathbf{v}, t \right) \\ \text{kinetic energy density [$energy \ (unit \ volume)^{-1}$]: } W_{s} = \frac{ m_{s} }{ 2 } \int d^{3}v \ v^{2} \ f_{s}\left( \mathbf{x}, \mathbf{v}, t \right) \\ \text{pressure tensor [$energy \ (unit \ volume)^{-1}$]: } \mathbb{P}_{s} = m_{s} \int d^{3}v \ \left( \mathbf{v} - \mathbf{U}_{s} \right) \left( \mathbf{v} - \mathbf{U}_{s} \right) \ f_{s}\left( \mathbf{x}, \mathbf{v}, t \right) \\ \text{heat flux tensor [$energy \ flux \ (unit \ volume)^{-1}$]: } \left(\mathbb{Q}_{s}\right)_{i,j,k} = m_{s} \int d^{3}v \ \left( \mathbf{v} - \mathbf{U}_{s} \right)_{i} \left( \mathbf{v} - \mathbf{U}_{s} \right)_{j} \left( \mathbf{v} - \mathbf{U}_{s} \right)_{k} \ f_{s}\left( \mathbf{x}, \mathbf{v}, t \right) \\ \text{etc.} $$ where $m_{s}$ is the particle mass of species $s$, the product of $\mathbf{A} \mathbf{B}$ is a dyadic product, not to be confused with the dot product, and a flux is simply a quantity multiplied by a velocity (from just dimensional analysis and practical use in continuity equations).
In an ideal gas we can relate the pressure to the temperature through: $$ \langle T_{s} \rangle = \frac{ 1 }{ 3 } Tr\left[ \frac{ \mathbb{P}_{s} }{ n_{s} k_{B} } \right] $$ where $Tr\left[ \right]$ is the trace operator and $k_{B}$ is the Boltzmann constant. In a more general sense, the temperature can be (loosely) thought of as a sort of pseudotensor related to the pressure when normalized properly (i.e., by the density).
How can a Hot gas be Low Pressured?
If you look at the relationship between pressure and temperature I described above, then you can see that for low scalar values of $P_{s}$, even smaller values of $n_{s}$ can lead to large $T_{s}$. Thus, you can have a very hot, very tenuous gas that exerts effectively no pressure on a container. Remember, it's not just the speed of one collision, but the collective collisions of the particles that matters. If you gave a single particle the enough energy to impose the same effective momentum transfer on a wall as $10^{23}$ particles at much lower energies, it would not bounce off the wall but rather tear through it!
How can a High Pressured gas be Cold?
Similar to the previous answer, if we have large scalar values of $P_{s}$ and even larger values of $n_{s}$, then one can have small $T_{s}$. Again, from the previous answer I stated it is the collective effect of all the particles on the wall, not just the individual particles. So even though each particle may have a small kinetic energy, if you have $10^{23}$ hitting a wall all at once, the net effect can be large.
honeste_viverehoneste_vivere
By the Ideal gas law, $PV=nRT$, or "pressure times volume equals the number of molecules times a constant times temperature". So, all else being the same, as the temperature goes up, the pressure goes up in an exact ratio.
However, all else does not have to be the same. So, for instance, if you reduce the number of molecules in a container ($n$), the pressure ($P$) will go down even though the temperature ($T$) may stay the same.
Edit: A thermometer or pressure gauge measures the molecules that collide with it. A thermometer measures the average energy of the collisions. A pressure gauge measures the average collision energy times the number of collisions per second.
As an example, the pressure at the top of the Sun's photosphere is 0.86 millibar, or less than a thousandth of our air pressure at sea level. But, the temperature is far higher: 4400 Kelvin, or about fifteen times our air temperatures. (The sun's temperature is far higher as you go further out, but that's another story.)
Daniel GriscomDaniel Griscom
$\begingroup$ I understand the ideal gas law. But the question was what is the significant difference between Temperature and Pressure in the physical way. $\endgroup$
– MrYellow
Of course, they are relate to each other but that doesn't mean they are the same things.
Temperature is the average kinetic energy of the molecules while pressure is the force they exert perpendicularly on any surface. Of course, more the temperature, more would be the pressure.
While the former is related to the energy, the later is related to the momentum; they are different things.
Pressure is a measure of force per unit area exerted on the 'measuring device', while the temperature is a measure of kinetic energy of the individual molecules of the gas. Thus, high pressure can arise when there are either many slow moving molecules with low kinetic energy colliding with the container, or a few fast moving molecules colliding with the container. Going through the derivation of pressure of a gas using the kinetic theory of gases should help. Wikipedia link: https://en.wikipedia.org/wiki/Kinetic_theory#Pressure_and_kinetic_energy
HarshaHarsha
An example of a difference where the pressure of a reasonably dilute gas depends on something else other than the kinetic energy of the particles is actually just the air on Earth. A classic exercise in statistical mechanics is to consider an ideal gas subject to gravity and find how the pressure varies with altitude.
Of course, in reality the temperature of the air on Earth varies with altitude, but doing this problem by assuming that the gas has a constant temperature provides a pretty reasonable result, that the pressure goes as $P(z) \sim \exp\{-mgz/kT\}$ (don't quote me on this) where $m$ is mean molecular mass. In this case, to a decent approximation, the pressure of the gas varies with height, but the temperature does not, because one now takes into account the gravitational potential and not just the kinetic energy.
DanielSank
sealseal
Not quite. Gas heats your measuring device when the collisions are mostly such that the colliding gas molecule has more kinetic energy than the colliding measuring device molecule.
It's instructive to think colliding molecules as sumo wrestlers: The molecule which has more momentum wins the bout, the winner does work on the loser by throwing him. Winner loses energy, loser gains energy.
The above rule works for straight head-on collisions. For other kind of collisions there are different rules. For example a molecule that experiences a collision on its rear gains energy. And a molecule with lot of kinetic energy rarely experiences rear collisions.
stuffustuffu
To measure somethings means to compare it with an etalon or a measurement instrument, made by the help of an etalon (or the combination of etalons).
To measure the pressure of a gas inside a volume one take for example a barometer and measures the pressure difference to the outer room. The measured pressure inside the volume is the result of the hitting of gas molecules with some average velocity and an average number of gas molecules on some area of the barometer.
To get the right correlation of how a volume contraction rise the pressure one has to do this contraction very slow. That allows to awoid the rise of the gas temperature (if the volume is not a thermal isolated system of course) and one get the right solution.
To study the relations between heating the gas and the temperatures rise one has to connect with the volume a second compensation volume and now it is possible to measure the temperature in The right Männer. If you take for this a mercury thermometer you could see, that this device is very similar to a mercury barometer. The scales or different.
Mercury thermometer and Mercury barometer (from Wikipedia)
So you are right that there are some similarities and pressure and temperature in closed volumes are somehow connected. Having different scales on it and changing the boundary conditions (or hold or pressure constant or temperature) one can measure temperature and pressure of the gas in a closed volume with the same measurement instrument.
HolgerFiedlerHolgerFiedler
Credits given to all answers posted. They helped me figured this out. Thanks a lot.
Temperature is heavily linked with Kinetic Energy.
Pressure is heavily linked with number of Collisions per Time AND Kinetic Energy.
A gas is hot when the molecules posses high Kinetic Energy and collides with the measuring device with great force.
A gas is hot not because there are a lot of molecules with low Kinetic Energy colliding with the measuring device. A lot of slow moving molecules does not add up to become Hot.
A gas is high pressured when there are a lot of molecules colliding with the wall either with High or Low Kinetic Energy. Higher Kinetic Energy creates more Pressure since change in momentum after each collision is high.
For short:
Hot gas need not to be pressurized. In other words, Low Pressured gas can still be Hot. This is because they just need to collide with enough force to transfer their Kinetic Energy while still remaining Low in pressure.
Pressurized gas need not to be hot. In other words, High Pressured gas can still be Cold. This is because they just need to collide frequently enough, Slow ( cold ) or Fast ( hot ).
$\begingroup$ I dont think you have distilled the answers properly. If $n$ is constant, increasing in pressure results in an increase in temperature. Similarly, increasing temperature results in an increase in pressure. Two intensive properties fully define the state- they are not independent, as your answer suggests (to me). $\endgroup$
– OnStrike
$\begingroup$ @theNamesCross Actually, I am trying to say Hot gas can still stay Low in pressure. The n ( moles ) are not kept constant. I'll try rephrase the answer. $\endgroup$
$\begingroup$ Correct, but you might explicitly state that in these cases $n$ is NOT constant. Also, remember phase diagrams- depending on the ($P, T$) values, it may not be in a gas phase. $\endgroup$
Not the answer you're looking for? Browse other questions tagged pressure temperature ideal-gas or ask your own question.
Relativity of temperature paradox
Why are ions and electrons at different temperatures in a plasma?
How do we tell which part of kinetic energy gives rise to temperature?
What is the correct relativistic distribution function?
Why in plasma physics usually is given the refractive index of electrons but not for ions?
What's the loss of information in taking the moments of the Vlasov equation for the Particle-In-Cell method
Navier-Stokes Energy Equation
How many temperatures has a plasma?
Temperature anisotropy in plasmas
What defines a cold plasma?
Ideal gas concentration under temperature gradient
Relations between pressure and temperature
Thermodynamics Work Done At constant pressure and charles law
Does a gas in a container lose kinetic energy?
What is the pressure of a charged gas?
If gases are not ideal at high pressures, why doesn't raising the temperature of a gas make it less ideal?
Pressure - temperature relationship in constant volume with high temperature and density
Temperature decreases in adiabatic expansion and gas laws
Principle of how refrigerator works: how is temperature dropping but pressure is not?
|
CommonCrawl
|
Adaptive visual target tracking algorithm based on classified-patch kernel particle filter
Guangnan Zhang1,2,3,
Jinlong Yang3,
Weixing Wang1,
Yu Hen Hu4 &
Jianjun Liu3
We propose a high-performance visual target tracking (VTT) algorithm based on classified-patch kernel particle filter (CKPF). Novel features of this VTT algorithm include sparse representations of the target template using the label-consistent K-singular value decomposition (LC-KSVD) algorithm; Gaussian kernel density particle filter to facilitate candidate template generation and likelihood matching score evaluation; and an occlusion detection method using sparse coefficient histogram (ASCH). Experimental results validate superior performance of the proposed tracking algorithm over state-of-the-art visual target tracking algorithms in scenarios that include occlusion, background clutter, illumination change, target rotation, and scale changes.
Visual target tracking (VTT) [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15] is a key enabling technology for numerous emerging computer vision applications including video surveillance, navigation, human-computer interactions, augmented reality, higher level scene understanding, and action recognition among many others. It is a challenging task because the visual observations often suffer from interference due to occlusion, scale and shape variation, illumination variation, background clutter, and related factors.
VTT differs from conventional tracking task in that the observation at each time instant is a video frame and the motion trajectory is confined to the spatial coordinates in each frame. On the other hand, like the conventional tracking, a VTT algorithm is divided into the prediction phase and an update phase. In the prediction phase, a motion model is incorporated to predict the target location based on current estimate. In the updated phase, a maximum likelihood (ML) estimate of the target location is sought based on observations made in the current frame. Then, an updated target position is decided based on predicted location and the ML estimated position. These location predictions and estimations are traditionally realized using sequential Bayesian estimation algorithms such as Kalman filters or particle filters.
Based on how the ML estimation of target location is realized, current VTT algorithms may be categorized into two families: discriminative algorithms versus generative algorithms [5]. Discriminative methods detect the presence of a tracked object using a pattern classification approach with the objective to distinguish the foreground target from the background. For example, the multiple instance learning (MIL) [6] method puts all ambiguous positive and negative samples into bags to learn a discriminative model for visual target tracking. Generative methods detect the tracked object by searching for the region most resembling to the target model, based on templates or subspace models. In [7], a robust fragments-based tracking method is proposed to handle partial occlusions or pose changes. Every patch votes on the possible positions and scales of the target in the current frame by comparing the intensity histogram against the corresponding histogram of each image patch. However, a static appearance model of the target cannot adapt to rapid appearance changes of the target. Incremental learning visual tracking (IVT) algorithm [8] handles the problem of changing target appearance. In the template update process, a forgetting factor is introduced to ensure that less modeling power is wasted fitting older observations. Visual tracking decomposition (VTD) algorithm [9] is proposed to handle the appearance and motion changes of the target occur at the same time. In the tracking process, the observation model is decomposed into multiple basic observation models that can cover different specific target appearances. The motion model is also represented by combining multiple basic models that cover different motion types. Then two types of basic models are used to construct the multiple basic trackers to handle a certain change of a target.
Tracking algorithms based on the sparse model have attracted great interests lately. Mei et al. [10, 11] formulated visual target tracking as a sparse approximation problem in the particle filtering (PF) framework [12, 13]. Using a dictionary of image patches, the target template can be represented as a weighted linear combination of very few (hence sparse representation) image templates in the dictionary. The sparse representation can be estimated by solving an l1-norm regularized least squares (LS) problem. In [14], a real-time robust l1 tracker is proposed by adding an l2-norm regularization to the coefficients associated with the trivial templates, and an accelerated proximal gradient (APG) method is employed to speed up the problem solving. Multi-task tracking (MTT) is proposed [15] as a multi-task sparse learning problem in a PF framework. The particles are modeled as linear combinations of dictionary image templates, and the interdependencies between particles are exploited to improve the tracking performance. In [5], an adaptive structural local sparse appearance model is proposed to locate the target more accurately by considering the spatial information of the target based on an alignment-pooling method. Moreover, the incremental subspace learning and sparse representation are combined to update the template, which can adapt to the appearance change of the target with less possibility of drifting. When the target exhibits dramatic appearance changes, a collaborative model is proposed [16] that combines a sparsity-based discriminative classifier and a sparsity-based generative model. With this appearance model, both holistic updates and local representations are considered. Moreover, the latest observations and the original template are used to update the model and adapt to the appearance change while mitigating the drift problem.
Most of the dictionaries based on the sparse representation theory are constructed directly by the samples of the template base or obtained by the clustering method with some constraints. The image templates in the dictionary often lack the ability of discrimination. Moreover, the templates updated by the same update scheme cannot adapt to the changes of the foreground and the background of the target. To address these concerns, in this work, we propose an adaptive visual target tracking algorithm based on classified-patch kernel particle filter (CKPF), which has the following advantages:
Classified patches and low-dimensional dictionary are considered in the CKPF. Note that low-dimensional dictionary and classification parameters (CP) are learned by the label-consistent K-SVD (LC-KSVD) [17, 18] technique. To the best of our knowledge, this is the first work to extend the LC-KSVD approach to exploit the intrinsic structure among the patches of the visual target. The image patches in the dictionary trained using LC-KSVD will be more discriminative to classify foreground from the background, and the obtained low dictionary can reduce the computational burdens.
The anti-occlusion sparse coefficient histograms (ASCHs) [16] are merged in CKPF to enhance the ability of anti-occlusion. If the reconstructed error of one patch is bigger than the threshold, the patch will be marked as occluded, and the corresponding sparse coefficients were displaced with zero to reduce the negative influence.
Gaussian kernel density (GKD) of the learned patches is considered to make the proposed algorithm more stable. The reason is that the importance of each patch is considered in the structure of candidate template according to the distance close to the center of the template.
An adaptive template update scheme is developed to adapt to the target appearance changes improving the robustness of the tracker. It is because the appearance of the target often changes significantly due to the disturbance of illumination changes, occlusion, rotation, and scale variation. When the target is occluded, the arrived template usually cannot describe the real target effectively. Therefore, the weight of the arrived template should decrease at this time. Otherwise, the weight should increase due to the accurately estimate of the arrived template without other disturbance factors.
Our proposed visual target tracker differs from existing approaches [10,11,12,13,14,15,16] in several aspects, such as the dictionary learning of the local image patches by LC-KSVD, likelihood model construction of the candidate particles, as well as the design of the adaptive parameter for the template update. The main contributions of this paper are threefold. (a) Classification parameters and low-dimensional patches are learned by LC-KSVD to construct the CKPF. (b) Isotropic Gaussian kernel density of the patches is proposed to produce the mixture likelihood of the each candidate particle. (c) An adaptive template update scheme is proposed to adapt to the target appearance changes.
The remainders of this paper are organized as follows. In Section 2, we summarize the details of the proposed adaptive visual target tracking algorithm based on CKPF. An overview of the LC-KSVD is presented. Meanwhile, adaptive template update scheme is developed and discussed. In Section 3, extensive simulation results comparing our proposed algorithm against existing visual target trackers are reported and the implications of these results are discussed. Conclusions and future works are presented in Section 4.
Overview of the algorithm
As shown in Fig. 1, the target is represented as a rectangular template in each frame, and the target template will be scaled to 32 × 32 pixels (big red boxes on the right). Candidate target region will also be scaled at the same ratio before further processing. Pixels within the template are assumed to be positive samples of the target. A 4-pixel wide strip surrounding the template is defined as the background whose edges outside and inside of the target template are denoted by B1 and B2, i.e., the gray annular area (B2-B1) with width 8 pixels are the background area. A patch is defined as a 6 × 6 square. Np = 196 image patches (positive samples) will be extracted from the template (foreground, target), and Nn = 196 patches will be extracted from the background as negative sample. These extracted image patches are regularly distributed over the foreground or the background regions respectively with overlaps as needed. Together, the positive-labeled patches represent the target and the negative-labeled patches represent the background.
Template and patches
Each patch is raster-scanned and converted into a 36 × 1 vector. Hence, there are 196 vectors labeled with + 1 (positive samples) and 192 vectors labeled with 0 (negative samples). We denote the total number of patches N = 392. A label-consistent, kernel singular value decomposition (LC-KSVD) algorithm will be applied to both the 196 positive vectors and the 196 negative vectors and select a subset of 50 vectors from each of them to form a labeled dictionary. This dictionary consists of 50 vectors with positive (+ 1) labels and 50 vectors with negative (0) labels. Let K = 100, the dictionary may be represented by a 36 × K matrix D. The dictionary will be estimated from the initial frame where the target to be tracked is specified for the tracking algorithm. It will remain unchanged until template update operation is performed.
The LC-KSVD algorithm also yields a sparse representation of each patch (36 × 1 vector) as a weighted combination of the 100 vectors selected in the dictionary. Two constraints are imposed on the potential sparse representations: (a) (discriminative constraint) Sparse vectors corresponding to foreground (or background) patches should have similar representation. This is represented by a discriminative parameter matrix AK × K. (b) (classification constraint) Class labels (+ 1, 0) can be reproduced from weighted linear combination of the sparse representation. This is represented by a 2 × K classification parameter matrix W. In addition to the sparse representation of each foreground and background patches, represented by a K × N matrix X, the LC-KSVD algorithm can estimate the dictionary D, the discriminative parameter matrix A and the classification parameter matrix W simultaneously.
Given the dictionary D and sparse representation of the template X, tracking begins by moving into the next frame. A kernel particle filter is applied to generate 100 potential target positions at (k + 1)th frame according to the particle representation of the state transition probability p(xk + 1|xk) such that E(xk + 1|xk) = xk where xk = {xk, yk, θk, sk, αk, βk} is the state vector of the target at the kth frame. The assumption is that the target motion may be described by an affine transformation, for example, (xk, yk) is the target position, θk, sk, αk, and βk are the rotation angle, the scaling factor, the aspect ratio, and the angle of inclination, respectively. We also assume p(xk + 1|xk) has a Gaussian distribution where the covariance matrix is selected based on prior knowledge of the tracking task.
Each particle corresponds to a candidate target template. Then, 196 image patches are extracted, and corresponding sparse representation X' are evaluated using LC-KSVD and the library D. A kernel density weighted sparse coefficient similarity score (SCSS) then will be applied to produce an estimate of the likelihood probability between the sparse representation of the template X and the current template candidate X'. The kernel density weightings place more weight on image patches that are closer to the center of the template and less weight on image patches on peripherals of the template. The location of the best-matched template will be designated as new target position.
Before moving into the next frame, the tracking algorithm may also adaptively update the template when occlusion of the target is detected. This is accomplished by using a sparse coefficient histogram matrix (SCHM) [16] to estimate the level of occlusion of the target. If so, the algorithm uses the newly estimated template, or a weighted linear combination of the estimated template and an initial template depending on the percentage of patches that are deemed occluded. With the newly updated template, the algorithm moves to the following frame.
A block diagram summarizing above overview of the proposed algorithm is depicted in Fig. 2. It has an initialization phase where a low-dimensional label-consistent dictionary D of image patches will be estimated, and the sparse representation X as well as classification parameters W of individual patches are also computed. Next, the kernel density-based particle filter (KPF) algorithm generates candidate templates in the following frame. For each candidate template, the likelihood score will be evaluated, and the maximum likelihood estimate of the target position will be computed. This is followed by an adaptive template update phase where occlusion of the target is detected.
Block diagram of the proposed algorithm
Theoretical backgrounds
LC-KSVD
The LC-KSVD dictionary learning algorithm [17, 18] in Fig. 2 can simultaneously train an over-complete low-dimensional dictionary and a linear classifier, i.e., the obtained dictionaries have both reconstructive and discriminative abilities. The objective function is expressed as
$$ {\displaystyle \begin{array}{l}\left\langle D,W,A,X\right\rangle =\underset{D,W,A,X}{\arg \min }{\left\Vert Y- DX\right\Vert}_2^2+\alpha {\left\Vert Q- AX\right\Vert}_2^2\\ {}\kern5.75em +\beta {\left\Vert H- WX\right\Vert}_2^2,\kern0.75em \mathrm{s}.\mathrm{t}.\kern0.5em \forall i,{\left\Vert {x}_i\right\Vert}_0\le {T}_0\end{array}} $$
where \( Y\kern0.5em =\kern0.5em {\left\{{y}_i\right\}}_{i=1}^N\in {\mathrm{R}}^{n\times N} \) denotes the input sample set, X = [x1, x2, ⋯, xN] ∈ RK × N denotes the coefficient matrix, D = [d1, d2, ⋯, dK] ∈ Rn × K denotes the low-dimensional dictionary matrix containing K ≪ N prototype sample-atoms for columns \( {\left\{{d}_j\right\}}_{j=1}^K \), and T0 denotes the degree of sparsity. Q ∈ RK × N denotes the sparse codes with discriminative power of Y for classification. A is a linear transformation matrix, which can transform the original sparse codes to be most discriminative in sparse feature space. \( {\left\Vert Q- AX\right\Vert}_2^2 \) denotes the discriminative sparse code error, which forces the samples with same class label to have the similar sparse representations. \( {\left\Vert H- WX\right\Vert}_2^2 \) denotes the classification error, W is the classification parameter matrix, and H is the class label of input samples. α and β are the scalars controlling the relative contribution of the corresponding terms [18].
The K-SVD method [19] can be used to obtain the optimal solutions for all the parameters simultaneously. Specifically, Eq. (1) can be rewritten as
$$ \left\langle D,W,A,X\right\rangle =\underset{D,W,A,X}{\arg \min }{\left\Vert \left(\begin{array}{c}Y\\ {}\sqrt{\alpha }Q\\ {}\sqrt{\beta }H\end{array}\right)-\left(\begin{array}{c}D\\ {}\sqrt{\alpha }A\\ {}\sqrt{\beta }W\end{array}\right)X\right\Vert}_2^2,\kern0.5em \mathrm{s}.\mathrm{t}.\kern0.5em \forall i,{\left\Vert {x}_i\right\Vert}_0\le T{}_0 $$
Let \( {Y}_{\mathrm{new}}={\left({Y}^{\mathrm{T}},\sqrt{\alpha }{Q}^{\mathrm{T}},\sqrt{\beta }{H}^{\mathrm{T}}\right)}^{\mathrm{T}} \), \( {D}_{\mathrm{new}}={\left({D}^{\mathrm{T}},\sqrt{\alpha }{A}^{\mathrm{T}},\sqrt{\beta }{W}^{\mathrm{T}}\right)}^{\mathrm{T}} \), then Eq. (2) can be expressed as
$$ \left\langle {D}_{\mathrm{new}},X\right\rangle =\underset{D_{\mathrm{new}},X}{\arg \min}\left\{{\left\Vert {Y}_{\mathrm{new}}-{D}_{\mathrm{new}}X\right\Vert}_2^2\right\},\kern0.5em \mathrm{s}.\mathrm{t}.\kern0.5em \forall i,{\left\Vert {x}_i\right\Vert}_0\le T{}_0 $$
Then Dnew can be obtained by using the K-SVD method, i.e., D, A, and W are learned simultaneously. More descriptions about LC-KSVD can refer to [17, 18].
In Eq. (1), the learned dictionary can be better used to represent the target due to the constraint terms. The discriminative sparse code error can force the samples with same class to have the similar sparse representations, which can enlarge the difference between classes of training data. Moreover, the classification error can effectively train a classifier to identify the foreground and background of the target.
Sparse coefficient histogram and occlusion detection
The patches of the target can be represented by using the obtained low dimensional dictionary D and the sparse coefficient of each patch can be used to construct the histogram matrix. However, some patches in the candidate target may be occluded, and the coefficient histogram cannot express the feature of candidate target accurately. As a result, the target cannot be estimated accurately. Taking this problem into account, the occlusion detection strategy [16] is employed according to the reconstruction error of each patch. And then the sparse coefficient histogram can be updated according to the occlusion detection results.
Assume that ξi denotes the sparse coefficient vector of the ith patch, we have
$$ \underset{\xi_i}{\min }{\left\Vert {y}_i-D{\xi}_i\right\Vert}_2^2+\lambda {\left\Vert {\xi}_i\right\Vert}_1 $$
The sparse coefficient histogram matrix can be established by concatenating the sparse coefficient vector ξi, i.e.,\( \rho =\left[{\xi}_1,{\xi}_2,\dots, {\xi}_{N_p}\right] \). If the target is partially occluded, then some of the patches of the target are occluded, and their corresponding sparse coefficients will be meaningless, which makes the sparse coefficient matrix ρ unable to express the candidate target well, causing big reconstruction error. Therefore, we introduce an occluded target detective mechanism to identify the occluded patches and their corresponding sparse coefficients.
It is defined that if the reconstructed error of each patch is bigger than the threshold, the patch will be marked as occluded, and then the corresponding sparse coefficient vector is reset to zero. The candidate histogram matrix after occlusion detection is defined as φ = ρ ⊙ o, where ⊙ denotes the element-wise multiplication. \( o\in {R}^{\left({K}_p+{K}_n\right)\times {N}_p} \)denotes the matrix of occluded detection, and oi is the element of the matrix o, and can be defined as:
$$ {o}_i=\left\{\begin{array}{c}1,\kern1em {\varepsilon}_i<{\varepsilon}_0\\ {}0,\mathrm{otherwise}\end{array}\right. $$
where \( {\varepsilon}_i={\left\Vert {y}_i-{D}_t{\xi}_{i\_t}\right\Vert}_2^2 \) denotes the reconstructed error of the ith patch. Note that only the positive patches are used to compute the reconstructed error, therefore Dt denotes the dictionary which only consists of the set of positive patches from the learned dictionary D, ξi _ t denotes the corresponding sparse coefficient vector of Dt, and ε0 denotes the threshold of reconstructed error of each patch. If εi ≥ ε0, then the ith patch be considered as occluded and the corresponding coefficient vector is set as zero.
Classified-patch kernel particle filter
Given the observation set of target y1 : k = {y1, y2, … , yk} up to the kth frame, the target state xk can be extracted via the maximum posterior estimation, i.e., \( {\widehat{x}}_k=\arg \underset{x_k^i}{\max }p\left({x}_k^i|{y}_{1:k}\right) \), where \( {x}_k^i \) denotes the state of the ith sampled particle of the kth frame. The posterior probability \( p\left({x}_k^i|{y}_{1:k}\right) \) can be inferred by the Bayesian recursion, i.e.,
$$ p\left({x}_k^i|{y}_{1:k}\right)\propto p\left({y}_k|{x}_k\right)\int p\left({x}_k|{x}_{k-1}\right)p\left({x}_{k-1}|{y}_{1:k-1}\right){dx}_{k-1} $$
where p(yk| xk) denotes the observation model. p(xk| xk − 1) denotes the dynamic model which describes the temporal correlation of the target states between consecutive frames. The affine transformation with six parameters is utilized to model the target motion between two consecutive frames. The state transition is formulated as p(xk| xk − 1) = N(xk; xk − 1, Σ), where Σ is a diagonal covariance matrix whose elements are the variances of the affine parameters.
The observation model p(yk| xk) denotes the likelihood of the observation yk at state xk. It plays an important role in robust tracking. In this paper, we aim to construct a robust likelihood model having the anti-occlusion ability and foreground target identification ability by merging the similarity of sparse coefficient histograms [16] and the classification information. Moreover, we consider the spatial information of each patch by using the isotropic Gaussian kernel density, which can keep the stability of the proposed algorithm for visual target tracking.
The likelihood of the lth particle is expressed as
$$ {p}_l=\sum \limits_{i=1}^{N_p}k\left({\left\Vert \frac{y_k^l-{c}_i}{h}\right\Vert}^2\right){M}_{k,i}^l{L}_{k,i}^l $$
where \( {M}_{k,i}^l \) and \( {L}_{k,i}^l \) denote the likelihood of classification and the similarity function of the target histograms between the candidate and the template. \( k\left({\left\Vert \frac{y_k^l-{c}_i}{h}\right\Vert}^2\right) \) denotes the isotropic Gaussian kernel density, where ci denotes the center of the ith patch, and \( {y}_k^l \) denotes the center of the lth particle in the kth frame. It means the distance between the patch and the candidate particle is considered, i.e., the patches far away from the center of the target will be assigned smaller weights, which can weaken the disturbance of the patches on the edge of the target.
According to the histogram intersection function [16, 20], the similarity function of the ith patch of the lth particle is defined as
$$ {L}_{k,i}^l=\sum \min \left({\varphi}_{k,i}^l,{\psi}^i\right) $$
where \( {\varphi}_{k,i}^l \) and ψi denotes the sparse coefficient histograms of the candidate target and the target template, respectively. Template histogram is computed only once for each image sequence. Moreover, the comparison between the candidate and the template should be carried out under the same occlusion condition. Therefore, the template and the ith candidate share the same matrix o of occluded detection.
The likelihood of classification of the ith patch of the lth particle is defined as
$$ {M}_{k,i}^l=\cos \angle \left(W{\varphi}_{k,i}^l,\Gamma \right) $$
where \( {\varphi}_{k,i}^l \) is the sparse coefficient vector of the candidate patch. Γ denotes the base vector of target classification, i.e., Γ = [1, 0]T, \( \cos \angle \left(\alpha, \beta \right)=\frac{\alpha \cdot \beta }{\left|\alpha \right|\left|\beta \right|} \) denotes the bearing of two vectors.
The bigger the number of patches belonging to the candidate particle is, the better the target appearance can be described. Because the selected patches may be from target templates or background templates. Therefore, if the patch belongs to the target, we should give it a bigger weight than that belong to the background.
Adaptive template update
In the tracking process, the appearance of the target often changes significantly due to the disturbance of illumination changes, occlusion, rotation, scale variation, and so on. Therefore, we need to update the template appropriately. However, if the template is updated too frequently by using new observations, the tracking results are easy to drift away from the target due to the accumulation of errors. Especially, when the target is occluded, the latest tracking result cannot describe the real target well, which will cause the later estimated targets to be lost. On the contrary, if tracking with fixed templates, it is prone to fail in dynamic scenes as it does not consider inevitable appearance change.
In this paper, we propose an improved template histogram update scheme by combining the histogram of the first frame and the latest estimated histogram with the variable μ, i.e.,
$$ {\widehat{\psi}}_n=\left\{\begin{array}{l}\mu \psi +\left(1-\mu \right){\widehat{\varphi}}_n,\kern0.5em {O}_n<{O}_0\\ {}{\widehat{\psi}}_{n\hbox{-} 1},\kern0.5em \mathrm{otherwise}\end{array}\right. $$
where \( \mu ={\mathrm{e}}^{\hbox{-} \left(1\hbox{-} \frac{O_n}{O_0}\right)} \) denotes the weighting parameter, which can adaptively adjust the update template to adapt to the change of the target appearance. \( {\widehat{\psi}}_n \) denotes the update template histogram, ψ and \( {\widehat{\varphi}}_n \) denote the template histogram of the first frame and the latest estimate, respectively. \( {O}_n=\frac{\#{\mathrm{Patch}}_{occ}}{\#\mathrm{Patch}} \) denotes the occlusion degree of the current tracking results. #Patchocc and #Patch denote the number of the occluded patches and the total patches. O0 is a threshold of the degree of occlusion. Moreover, to avoid frequent template update, we detect the occluded state every five frames, i.e., we update the template every five frames.
During the update process, the first frame template and the newly arrived template are considered simultaneously. However, when the target is occluded, the arrived template usually cannot describe the real target effectively. Therefore, the weight μ of the arrived template should decrease at this time. Otherwise, the weight μ should increase due to the accurately estimate of the arrived template without other disturbance factors. In this paper, we set the parameter μ change with the reconstruction error. If On increases, which denotes the target may be disturbed by some factors, such as illumination and occlusion, the arrived template may be inaccurate, hence the weight of the template should decrease, while the weight of the first frame template should increase.
Experiment results
To verify the effectiveness of the proposed algorithm, some challenging sequences from the public dataset of video target tracking [1] (http://cvlab.hanyang.ac.kr/tracker_benchmark/datasets.html) are used to evaluate the performance of the proposed algorithm. The main challenging features of the data are described in Table 1, including the interference of occlusion, background clutter, illumination change, target rotation, scale change, motion blur, etc. The proposed algorithm is compared with eight state-of-the-art benchmark tracking algorithms, including multiple instance learning (MIL) [6], compressive tracking (CT) [21], robust fragments-based tracking (FRAG) [7], incremental visual tracking (IVT) [8], visual tracking decomposition (VTD) [9], L1 tracker using accelerated proximal gradient (L1APG) [14], multi-task sparse learning tracking (MTT) [15], and local sparse appearance model and K-selection (LSK) [22]. The experiments are implemented on computer with Intel Core 2.4 GHz, i7-4700HQ processor with 8 GB RAM. The software tool is MATLAB 2014a and the l1 minimization problem is solved with the SPAMS package [23]. For each sequence, the location of the target is manually labeled in the first frame.
Table 1 The features of the video sequences
The learned low-dimensional dictionary consists of 50 positive templates and 50 background templates which are from the sampled templates by LC-KSVD dictionary learning. In the framework of PF, 100 candidate particles are sampled according to the same partition patch method, and the most similarity candidate particle is extracted as the estimated target. Set the threshold of the occlusion degree as O0 = 0.8 in Eq. (10).
Qualitative evaluation
Figure 3 shows the tracking results of different algorithms when the target undergoes heavy occlusion, illumination variation, background clutter, rotation, scale change, fast motion, and motion blur.
Tracking results of different algorithms. a FaceOcc2. b Woman. c Shaking. d Singer1. e Deer f Board. g Trellis. h Walking2. i Girl. j Jumping. k Human8. l Car4
Occlusion and illumination variation
In order to demonstrate the anti-occlusion and anti-illumination-variation performances of the proposed algorithm, some challenging video sequences are used in this experiment. Especially in (a) FaceOcc2 and (b) Woman sequences, the targets are heavily occluded or long-time partial occluded. However, the proposed algorithm can extract the targets accurately. The reason is that the local detection strategy for occlusion and illumination changes as well as the adaptive template update scheme are employed, which can easily describe and detect the variations of the local details of the targets and help to decrease the influence of the disturbances including occlusion, illumination change, rotation, etc. Moreover, the Gaussian kernel density of the patches is considered in the CKPF, which considers the global information of the local patches, improving the tracking performance. Taking the 181th, 273th, and 659th frames in FaceOcc2 sequences as examples, the target is occluded heavily by the book and the hat; the proposed algorithm has the highest tracking accuracy. In the 127th, 172th, and 495th frames in the Woman sequences, the target is partial occluded by the car and disturbed by the background clutters; some of the benchmark algorithms cannot estimate the target accurately with heavily position drift, while the proposed algorithm can successfully track the target throughout the entire sequences.
In (c) Shaking and (d) Singer1 sequences, there exists large illumination variation, and partial scale change, the benchmark algorithms FRAG, IVT, MTT, and CT cannot extract the target correctly following with heavily drift. LSK and MIL have good estimated results, but the proposed algorithm and the VTD approach have better tracking results. For the VTD algorithm, the observation model is decomposed into multiple basic observation models that can cover different specific target appearances, which can adapt to the illumination changes; however, it is hard to deal with the scale variation problem of the target while the proposed algorithm can do it adaptively. Therefore, in the Singer1 sequences, its tracking results are worse than those of the proposed algorithm due to the scale variation of the targets.
Background clutter
In the video sequences of (f) Board, (e) Deer, and (c) Shaking, the targets are disturbed by some background clutters, especially in Board sequences; the background is complex and there exists partial target rotation and fast motion. L1APG, MTT, and IVT cannot extract the target correctly due to the use of the fixed global model, while the proposed algorithm employs the local patch features to describe the details of the target, and the LC-KSVD method is introduced to learn dictionaries and train the classification parameters simultaneously, which can decrease the influence of the background disturbance. In the 42th frame of the Deer sequence, there is another deer in the background. Most of the algorithms have the results with largely drift due to the clutter disturbance. However, the proposed algorithm obtains an accurate result; the reason is that the set of background models is considered simultaneously and effectively updated in the tracking process.
Rotation and scale change
In (i) Girl and (f) Board sequences, there exists heavily target rotation. In the 94th and 119th frames of the Girl sequences, the girl turns around. It is clear that heavily drift exists in the results obtained by FRAG and LSK, while the proposed algorithm can adapt to the case of target rotation due to the use of the effectively update strategy, which considers the initial target model and the last estimate target model simultaneously. In the 434th frame of the Girl sequences, the face of the girl is occluded by the man and the scale makes a little change during the process of target rotation; the proposed algorithm also obtains a good tracking result. From the Board sequences, we can draw the same conclusions, in which the proposed algorithm has a good performance of target tracking under the scenario with target rotation and scale variation.
Moreover, in the Singer1 sequences, it is clear that the scale of the target changes heavily; the proposed algorithm can obtain accurate results, because the scale parameter sk is estimated simultaneously in the implement process of CKPF.
Fast motion and motion blur
In (j) Jumping and (e) Deer sequences, there exists fast motion of the target and motion blur. For the Jumping sequences, L1APG, LSK, and MTT cannot extract the target correctly due to the motion blur, while the proposed algorithm has a good tracking result. In the 109th and 262th frames of the Jumping sequence, fast motion and motion blur make some of the benchmark algorithms have heavily drift results, while the proposed algorithm has good results. The reason is that the background templates are considered to restrain the influence of the background, and the updated positive template can adapt to the case with motion blur. From the Deer sequences, we can conclude the same conclusions.
Quantitative evaluation
Two evaluation criteria are employed to quantitatively assess the performance of the proposed algorithm. One is average center location error (ACLE), and the other is tracking success rate (SR). Figure 4 shows the relative position error (in pixels) between the center and the tracking results. ACE is defined as the average relative position error. Assume the tracking result is Rr, and the ground truth is Rg, then SR is defined as ϒ = (Rr ∪ Rg)/(Rr ∪ Rg). Tables 2 and 3 give values of ACLE and SR for different tracking algorithms.
Position errors (in pixel) between the center and the tracking results. a FaceOcc2. b Woman. c Shaking. d Singer1. e Deer. f Board. g Trellis. h Walking2. i Girl. j Jumping. k Human8. l Car4
Table 2 Average center location error (in pixel). The best and second best results are shown in italic and bold
Table 3 Success rate. The best and second best results are shown in italic and bold
As can be seen from Fig. 4, the proposed algorithm has a better performance than those of the benchmark algorithms. The tracking result of each frame is accurate and the curve of the error is stable without high changing. While part of the benchmark algorithms are instable, and have big errors between some frames due to different disturbances.
From Tables 2 and 3, it is clear that the proposed algorithm can adapt to most of the video sequences with the best and second best results except the (i) Girl sequences. The performance of the proposed algorithm can be attributed to the detailed description of the local patches by the LC-KSVD dictionary learning and adaptive template update scheme. Moreover, the Gaussian kernel density of the patches as the global information is considered in CKPF. The algorithm of VTD can also adapt to the scenarios with illumination change and lightly occlusion (e.g., Shaking and Singer1); the reason is that the appearance change is considered in the target template, but its performance decreases when the rotation and the motion blur happen on the targets (e.g., Deer, Board, and Jumping). L1APG has a good performance on the Girl sequence; the reason is that the last tracking result is used directly as the updated template, which can effectively adapt to the Girl sequence with the turn of the girl. However, it cannot extract the target correctly due to the motion blur and illumination variation, such as in (f) Board, (j) Jumping, (c) Shaking, and (l) Car4 sequences. For the Girl sequences, the tracking result of the proposed algorithm is not the best, but it is only slightly below the L1APG and MTT algorithms.
Discussion of adaptive parameter μ
To verify the effectiveness of the adaptive template update scheme, two special challenging sequences, the first 200 frames of FaceOcce2 and the first 170 frames of Woman with big variance of appearance, are chosen in this experiment. The tracking results with different constant values (e.g., 0.1, 0.4, 0.7, and 0.9) of the weighting parameter μ of Eq. (10) are compared to those with adaptive parameter value, and these are demonstrated in Table 4.
Table 4 Discussion of constant and adaptive parameter μ. The best results are shown in italic
As can be seen that there are different values of ACLEs and SRs by choosing different constant values of μ, and smaller value of μ (e.g., 0.1) gets higher accuracy for the first 200 frames FaceOcce2 sequences, while bigger value of μ (e.g., 0.9) gets higher accuracy for the first 170 frames Woman sequences. The reason is that the variations of the target appearance are small during the 1st frame to 140th frame of FaceOcce2 sequences, and the updated templates mainly rely on the latest templates. But the target appearances are severely occluded between 141st and 190th fames; the updated templates more rely on the template of the first frame. Therefore, it is noted that the differences of the tracking accuracy are small with different values of μ for this sequences. But for the Woman sequences, the target appearances are slightly disturbed by the background clutters between 36th and 170th frames, and there only exists partial occlusion between 106th and 165th frames. Therefore, most of the updated templates mainly rely on the latest frame templates, and the bigger value of μ gets better results. While for the proposed algorithm with adaptive weight parameter, it is clear that it can obtain an ideal tracking result without manually setting the parameter values.
In this paper, we present an adaptive visual tracking algorithm based on CKPF. The template sets constructed by the local patch features from both foreground and background of the target are used to learn the dictionaries simultaneously. The low-dimensional dictionary and target classification parameters are trained by using the LC-KSVD dictionary learning. To robustly decide the final tracking states, an adaptive template update scheme is designed, and the classification information, the target candidate histogram, and the Gaussian kernel density are merged to form CKPF. The effectiveness of the proposed algorithm is experimentally demonstrated by comparing with 8 state-of-the-art trackers on 12 challenging video sequences, and experimental results show that the proposed algorithm has a better tracking performance than some benchmark methods in the scenarios with the interference of occlusion, background clutter, illumination change, target rotation, and scale change. However, the computation cost is high; in the future, we would like to improve the computational efficiency by considering the reverse-low-rank representation scheme [24], and some optimal particle pruning schemes.
APG:
Accelerated proximal gradient
ASCH:
Anti-occlusion sparse coefficient histograms
CKPF:
Classification parameters
Compressive tracking
GKD:
Gaussian kernel density
IVT:
Incremental learning visual tracking
LC-KSVD:
Label-consistent K-singular value decomposition
LS:
Least squares
LSK:
Local sparse appearance model and K-selection
MIL:
Multiple instance learning
Maximum likelihood
MTT:
Multi-task tracking
Particle filtering
VTD:
Visual tracking decomposition
VTT:
Visual target tracking
Y. Wu, J. Lim, M.H. Yang, Object tracking benchmark. IEEE Trans. Pattern Anal. Mach. Intell. 37(9), 1834–1848 (2015).
M. Kristan, J. Matas, A. Leonardis, et al., in Proceedings of the IEEE international conference on computer vision workshops. The visual object tracking vot2015 challenge results (2015), pp. 1–23.
H. Fan, J. Xiang, Robust visual tracking with multitask joint dictionary learning. IEEE Trans. Circuits Syst. Video Technol. 27(5), 1018–1030 (2017).
H. Li, Y. Li, F. Porikli, Deep track: learning discriminative feature representations online for robust visual tracking. IEEE Trans. Image Process. 25(4), 1834–1848 (2016).
Article MathSciNet Google Scholar
X. Jia, H.C. Lu, M.H. Yang, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Visual tracking via adaptive structural local sparse appearance model (IEEE Computer Society Press, Los Alamitos, 2012), pp. 1822–1829.
B. Babenko, M.H. Yang, S. Belongie, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Visual tracking with online multiple instance learning (IEEE Computer Society Press, Los Alamitos, 2009), pp. 983–990.
A. Adam, E. Rivlin, I. Shimshoni, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Robust fragments-based tracking using the integral histogram (IEEE Computer Society Press, Los Alamitos, 2006), pp. 798–805.
D.A. Ross, J. Lim, R.S. Lin, et al., Incremental learning for robust visual tracking. Int. J. Comput. Vis. 77(1–3), 125–141 (2008).
J. Kwon, K.M. Lee, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Visual tracking decomposition (IEEE Computer Society Press, Los Alamitos, 2010), pp. 1269–1276.
X. Mei, H.B. Ling, in Proceedings of IEEE 12th International Conference on Computer Vision. Robust visual tracking using L1 minimization (IEEE Computer Society Press, Los Alamitos, 2009), pp. 1436–1443.
X. Mei, H.B. Ling, Y. Wu, et al., in Proceedings of IEEE conference on computer vision and pattern recognition. Minimum error bounded efficient L1 tracker with occlusion detection (IEEE Computer Society Press, Los Alamitos, 2011), pp. 1257–1264.
M.S. Arulampalam, S. Maskell, N. Gordon, et al., A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 50(2), 174–188 (2002).
S.P. Zhang, H.X. Yao, X. Sun, et al., Sparse coding based visual tracking: Review and experimental comparison. Pattern Recogn. 46(7), 1772–1788 (2013).
C.L. Bao, Y. Wu, H.B. Ling, et al., in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Real time robust L1 tracker using accelerated proximal gradient approach (IEEE Computer Society Press, Los Alamitos, 2012), pp. 1830–1837.
T.Z. Zhang, B. Ghanem, S. Liu, et al., in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Robust visual tracking via multi-task sparse learning (IEEE Computer Society Press, Los Alamitos, 2012), pp. 2042–2049.
W. Zhong, H.C. Lu, M.H. Yang, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Robust object tracking via sparsity-based collaborative model (IEEE Computer Society Press, Los Alamitos, 2012), pp. 1838–1845.
Z.L. Jiang, Z. Lin, L.S. Davis, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Learning a discriminative dictionary for sparse coding via label consistent k-svd (IEEE Computer Society Press, Los Alamitos, 2011), pp. 1697–1704.
Z.L. Jiang, Z. Lin, L.S. Davis, Label consistent K-SVD: learning a discriminative dictionary for recognition. IEEE Trans. Pattern Anal. Mach. Intell. 35(11), 2651–2664 (2013).
M. Aharon, M. Elad, A. Bruckstein, K-SVD: An algorithm for designing Overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 54(11), 4311–4322 (2006).
Article MATH Google Scholar
J.X. Wu, J.M. Rehg, in Proceedings of IEEE 12th International Conference on Computer Vision. Beyond the Euclidean distance: creating effective visual codebooks using the histogram intersection kernel (IEEE Computer Society Press, Los Alamitos, 2009), pp. 630–637.
K.H. Zhang, L. Zhang, M.H. Yang, in Proceedings of the 11th European Conference on Computer Vision. Real-time compressive tracking (IEEE Computer Society Press, Los Alamitos, 2012), pp. 864–877.
B.Y. Liu, J.Z. Huang, L. Yang, et al., in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Robust tracking using local sparse appearance model and K-selection (IEEE Computer Society Press, Los Alamitos, 2011), pp. 1313–1320.
J. Mairal, F. Bach, J. Ponce, et al., Online learning for matrix factorization and sparse coding. J. Mach. Learn. Res. 11(1), 19–60 (2010).
MathSciNet MATH Google Scholar
Y. Yang, W. Hu, Y. Xie, et al., Temporal restricted visual tracking via reverse-low-rank sparse learning. IEEE Trans. Cybern. 47(2), 485–498 (2017).
The authors would like to thank the Editor and anonymous reviewers for their constructive suggestion.
Natural Science Foundation of Jiangsu Province (Nos. BK20181340, BK20130154), National Natural Science Foundation of China (Nos. 61305017, 61772237), and The Cyber-Physical Systems program of the U.S. National Science Foundation (CNS 1329481).
All data and material are available.
College of Information Engineering, Chang'an University, Xi'an, 710064, China
Guangnan Zhang & Weixing Wang
School of Computer Science and Technology, Baoji University of Arts and Science, Baoji, 721076, China
Guangnan Zhang
School of Internet of Things Engineering, Jiangnan University, Wuxi, 214122, China
Guangnan Zhang, Jinlong Yang & Jianjun Liu
Department of Electrical and Computer Engineering, University of Wisconsin–Madison, Madison, WI, 53706, USA
Yu Hen Hu
Jinlong Yang
Weixing Wang
Jianjun Liu
JY initiated the project. GZ, JY, and JL designed the algorithms, performed the experiments, and drafted the manuscript. WW and YH participated in the proposed method and analyzed the experiment results. All authors read and approved the final manuscript.
Correspondence to Jinlong Yang.
Zhang, G., Yang, J., Wang, W. et al. Adaptive visual target tracking algorithm based on classified-patch kernel particle filter. J Image Video Proc. 2019, 20 (2019). https://doi.org/10.1186/s13640-019-0411-1
K-singular value decomposition
Sparse coding
Dictionary learning
|
CommonCrawl
|
October 2016 , Volume 291, Issue 8, pp 2197–2212 | Cite as
Synchronized Helicity Oscillations: A Link Between Planetary Tides and the Solar Cycle?
F. Stefani
A. Giesecke
N. Weber
T. Weier
First Online: 01 September 2016
Recent years have seen an increased interest in the question of whether the gravitational action of planets could have an influence on the solar dynamo. Without discussing the observational validity of the claimed correlations, we examine which possible physical mechanism might link the weak planetary forces with solar dynamo action. We focus on the helicity oscillations that were recently found in simulations of the current-driven, kink-type Tayler instability, which is characterized by an \(m=1\) azimuthal dependence. We show how these helicity oscillations may be resonantly excited by some \(m=2\) perturbations that reflect a tidal oscillation. Specifically, we speculate that the tidal oscillation of 11.07 years induced by the Venus–Earth–Jupiter system may lead to a 1:1 resonant excitation of the oscillation of the \(\alpha\)-effect. Finally, we recover a 22.14-year cycle of the solar dynamo in the framework of a reduced zero-dimensional \(\alpha\)–\(\Omega\) dynamo model.
Solar cycle Models helicity Theory
This work was supported by the Deutsche Forschungsgemeinschaft in the frame of the SPP 1488 (PlanetMag), as well as by the Helmholtz-Gemeinschaft Deutscher Forschungszentren (HGF) in the frame of the Helmholtz alliance LIMTECH. Wilcox Solar Observatory data used in this study were obtained via the web site wso.stanford.edu (courtesy of J.T. Hoeksema). The sunspot data are SILSO data from the Royal Observatory of Belgium, Brussels, obtained via www.sidc.be/silso/infosnytot . F. Stefani thanks R. Arlt, A. Bonnano, A. Brandenburg, A. Choudhuri, D. Hughes, M. Gellert, G. Rüdiger, and D. Sokoloff for fruitful discussion on the solar-dynamo mechanism.
Appendix: The Numerical Model
In this appendix we sketch the integro-differential equation scheme that was used in Section 2 to calculate the oscillations of the helicity and \(\alpha\). More details can be found in Weber et al. (2013, 2015). For an alternative numerical method to treat the TI, see Herreman et al. (2015).
In our code we circumvent the usual \(\mathit{Pm}\) limitations of pure differential-equation codes by replacing the solution of the induction equation for the magnetic field by invoking the so-called quasi-static approximation (Davidson, 2001). We replace the explicit time stepping of the magnetic field by computing the electrostatic potential by a Poisson solver and by deriving the electric-current density. In contrast to many other inductionless approximations in which this procedure is sufficient, in our case we cannot avoid computing the induced magnetic field as well. The reason for this is the presence of an externally applied electrical current in the fluid. Computing the Lorentz-force term it turns out that the product of the applied current with the induced field is on the same order as the product of the magnetic field (due to the applied current) with the induced current. The induced magnetic field is computed as follows: in the interior of the domain, we apply the quasi-stationary approximation and solve the vectorial Poisson equation for the magnetic field that results when the temporal derivative in the induction equation is set to zero. At the boundary of the domain, however, the induced magnetic field is computed from the induced current density by means of Biot–Savart's law. In this way, we arrive at an integro-differential equation approach, similar to the method used by Meir et al. (2004).
In detail, the numerical model as developed by Weber et al. (2013) works as follows: it uses the OpenFOAM library to solve the Navier–Stokes equations (NSE) for incompressible fluids
$$\begin{aligned} \dot{\boldsymbol {u}} + ({\boldsymbol {u}}\cdot \nabla){\boldsymbol {u}} = - \nabla p + \nu\Delta{\boldsymbol {u}} + \frac{\boldsymbol {f}_{\mathrm {L}} }{\rho}\quad \textrm{and}\quad\nabla\cdot\boldsymbol {u} = 0, \end{aligned}$$
with \(\boldsymbol {u}\) denoting the velocity, \(p\) the (modified) pressure, \(\boldsymbol {f}_{\mathrm {L}} = \boldsymbol {J} \times\boldsymbol {B} \) the electromagnetic Lorentz force density, \(\boldsymbol {J}\) the total current density, and \(\boldsymbol {B}\) the total magnetic field. The NSE is solved using the PISO algorithm and applying no-slip boundary conditions at the walls.
Ohm's law in moving conductors
$$\begin{aligned} {\boldsymbol {j}} = \sigma(-\nabla\varphi+ {\boldsymbol {u}}\times {\boldsymbol {B}} ) \end{aligned}$$
allows us to compute the induced current [\(\boldsymbol {j}\)] by previously solving a Poisson equation for the perturbed electric potential [\(\varphi= \phi-J_{0}z/\sigma\)]:
$$\begin{aligned} \Delta\varphi= \nabla\cdot({\boldsymbol {u}} \times{\boldsymbol {B}} ). \end{aligned}$$
We concentrate now on cylindrical geometries with an axially applied current. After subtracting the (constant) potential part [\(J_{0}z/\sigma\)], with \(z\) as the coordinate along the cylinder axis, we use the simple boundary condition \(\varphi= 0\) at the top and bottom and \(\boldsymbol {n}\cdot\nabla \varphi=0\) at the mantle of the cylinder, with \(\boldsymbol{n}\) as the surface normal vector.
The induced magnetic field at the boundary of the domain can then be calculated by Biot–Savart's law
$$\begin{aligned} {\boldsymbol {b}}({\boldsymbol {r}}) = \frac{\mu _{0}}{4\pi} \int \mathrm{d}V' \, \frac{{\boldsymbol {j}}({\boldsymbol {r}}') \times ({\boldsymbol {r}}-{\boldsymbol {r}}')}{\vert {\boldsymbol {r}}-{\boldsymbol {r}}'\vert ^{3}}. \end{aligned}$$
In the bulk of the domain, the magnetic field is computed by solving the vectorial Poisson equation
$$\begin{aligned} \Delta{\boldsymbol {b}}=\mu_{0} \sigma\nabla \times( { \boldsymbol{u}} \times{\boldsymbol{B}} ), \end{aligned}$$
which results from the full time-dependent induction equation in the quasi-stationary approximation.
Knowing \(\boldsymbol {b}\) and \(\boldsymbol {j}\), we compute the Lorentz force \({\boldsymbol {f}}_{\mathrm {L}}\) for the next iteration. For more details about the numerical scheme, see Sections 2 and 3 of Weber et al. (2013).
Abreu, J.A., Beer, J., Ferriz-Mas, A., McCracken, K.G., Steinhilber, F.: 2012, Is there a planetary influence on solar activity? Astron. Astrophys. 548, A88. DOI. ADSCrossRefGoogle Scholar
Babcock, H.W.: 1961, The topology of the suns magnetic field and the 22-year cycle. Astrophys. J. 133, 572. DOI. ADSCrossRefGoogle Scholar
Bollinger, C.J.: 1952, A 44.77 year Jupiter–Venus–Earth configuration Sun-tide period in solar-climatic cycles. Proc. Okla. Acad. Sci. 33, 307. Google Scholar
Bonanno, A., Brandenburg, A., Del Sordo, F., Mitra, D.: 2012, Breakdown of chiral symmetry during saturation of the Tayler instability. Phys. Rev. E 86, 016313. DOI. ADSCrossRefGoogle Scholar
Brandenburg, A.: 2005, The case for a distributed solar dynamo shaped by near-surface shear. Astrophys. J. 625, 625. DOI. CrossRefGoogle Scholar
Brown, T.M., Christensen-Dalsgaard, J., Dziembowski, W.A., Goode, P., Gough, D.O., Morrow, C.: 1989, Inferring the Sun's internal angular velocity from observed p-mode frequency splitting. Astrophys. J. 343, 526. DOI. ADSCrossRefGoogle Scholar
Callebaut, D.K., de Jager, C., Duhau, S.: 2012, The influence of planetary attractions on the solar tachocline. J. Atmos. Solar-Terr. Phys. 80, 73. DOI. ADSCrossRefGoogle Scholar
Cébron, D., Hollerbach, R.: 2014, Tidally driven dynamos in a rotating sphere. Astrophys. J. Lett. 789, L25. DOI. ADSCrossRefGoogle Scholar
Charbonneau, P.: 2010, Dynamo models of the solar cycle. Living Rev. Solar Phys. 7, 3. DOI. ADSCrossRefGoogle Scholar
Charbonneau, P., Dikpati, M.: 2000, Stochastic fluctuations in a Babcock-model of the solar cycle. Astrophys. J. 543, 1027. DOI. ADSCrossRefGoogle Scholar
Charvatova, I.: 1997, Solar-terrestrial and climatic phenomena in relation to solar inertial motion. Surv. Geophys. 18, 131. DOI. ADSCrossRefGoogle Scholar
Chatterjee, P., Mitra, D., Brandenburg, A., Rheinhardt, M.: 2011, Spontaneous chiral symmetry breaking by hydromagnetic buoyancy. Phys. Rev. E 84, 025403. DOI. ADSCrossRefGoogle Scholar
Chiba, M., Tosa, M.: 1990, Swing excitation of galactic magnetic-fields induced by spiral density waves. Mon. Not. Roy. Astron. Soc. 244, 714. DOI. ADSGoogle Scholar
Choudhuri, A.R., Karak, B.B.: 2009, A possible explanation of the Maunder minimum from a flux transport dynamo model. Res. Astron. Astrophys. 9, 953. ADSCrossRefGoogle Scholar
Choudhuri, A.R., Schüssler, M., Dikpati, M.: 1995, The solar dynamo with meridional circulation. Astron. Astrophys. 303, L29. ADSGoogle Scholar
Cole, T.W.: 1973, Periodicities in solar activities. Solar Phys. 30, 103. DOI. ADSCrossRefGoogle Scholar
Condon, J.J., Schmidt, R.R.: 1975, Planetary tides and the sunspot cycles. Solar Phys. 42, 529. DOI. ADSCrossRefGoogle Scholar
Courvoisier, A., Hughes, D.W., Tobias, S.M.: 2006, \(\alpha\) effect in a family of chaotic flows. Phys. Rev. Lett. 96, 034503. DOI. ADSCrossRefGoogle Scholar
Davidson, P.A.: 2001, An Introduction to Magnetohydrodynamics, Cambridge University Press, Cambridge. CrossRefzbMATHGoogle Scholar
De Jager, C., Versteegh, G.: 2005, Do planetary motions drive solar variability? Solar Phys. 229, 175. DOI. ADSCrossRefGoogle Scholar
Dikpati, M., Gilman, P.: 2001, Flux-transport dynamos with alpha-effect from global instability of tachocline differential rotation: A solution for magnetic parity selection in the Sun. Astrophys. J. 559, 428. DOI. ADSCrossRefGoogle Scholar
D'Silva, S., Choudhuri, A.R.: 1993, A theoretical model for tilts of bipolar magnetic regions. Astron. Astrophys. 272, 621. ADSGoogle Scholar
Fan, Y.: 2009, Magnetic fields in the solar convection zone. Living Rev. Solar Phys. 6, 4. DOI. ADSCrossRefGoogle Scholar
Ferriz Mas, A., Schmitt, D., Schüssler, M.: 1994, A dynamo effect due to instability of magnetic flux tubes. Astron. Astrophys. 289, 949. ADSGoogle Scholar
Gellert, M., Rüdiger, G., Hollerbach, R.: 2011, Helicity and alpha-effect by current-driven instabilities of helical magnetic fields. Mon. Not. Roy. Astron. Soc. 414, 2696. DOI. ADSCrossRefGoogle Scholar
Giesecke, A., Stefani, F., Burguete, J.: 2012, Impact of time-dependent nonaxisymmetric velocity perturbations on dynamo action of von Kármán-like flows. Phys. Rev. E 86, 066303. DOI. ADSCrossRefGoogle Scholar
Gray, L.J., Beer, J., Geller, M., Haigh, J.D., Lockwood, M., Matthes, K., Cubasch, U., Fleitmann, D., Harrison, G., Hood, L., Luterbacher, J., Meehl, G.A., Shindell, D., van Geel, B., White, W.: 2010, Solar influences on climate. Rev. Geophys. 48, RG4001. DOI. ADSCrossRefGoogle Scholar
Herreman, W., Nore, C., Cappanera, L., Guermond, J.-L.: 2015, Tayler instability in liquid metal columns and liquid metal batteries. J. Fluid Mech. 771, 79. DOI. ADSMathSciNetCrossRefGoogle Scholar
Hoyng, P.: 1993, Helicity fluctuations in mean-field theory: An explanation for the variability of the solar cycle? Astron. Astrophys. 272, 321. ADSMathSciNetGoogle Scholar
Hung, C.-C.: 2007, Apparent relations between solar activity and solar tides caused by the planets. NASA/TM-2007-214817, 1. Google Scholar
Jiang, J., Chatterjee, P., Choudhuri, A.R.: 2007, Solar activity forecast with a dynamo model. Mon. Not. Roy. Astron. Soc. 381, 1527. DOI. ADSCrossRefGoogle Scholar
Jose, P.D.: 1965, Suns motion and sunspots. Astron. J. 70, 193. DOI. ADSCrossRefGoogle Scholar
Krause, F., Rädler, K.-H.: 1980, Mean-Field Magnetohydrodynamics and Dynamo Theory, Akademie Verlag, Berlin. zbMATHGoogle Scholar
Leighton, R.B.: 1964, Transport of magnetic field on the Sun. Astrophys. J. 140, 1547. DOI. ADSCrossRefzbMATHGoogle Scholar
Meir, A.J., Schmidt, P.G., Bakhtiyarov, S.I., Overfelt, R.A.: 2004, Numerical simulation of steady liquid – Metal flow in the presence of a static magnetic field. J. Appl. Mech. 71, 786. DOI. ADSCrossRefzbMATHGoogle Scholar
Okhlopkov, V.P.: 2014, The 11-year cycle of solar activity and configurations of the planets. Moscow Univ. Phys. Bull. 69, 257. DOI. ADSCrossRefGoogle Scholar
Palus, M., Kurths, J., Schwarz, U., Novotna, D., Charvatova, I.: 2000, Is the solar activity cycle synchronized with the solar inertial motion? Int. J. Bifurc. Chaos Appl. Sci. Eng. 10, 2519. DOI. zbMATHGoogle Scholar
Parker, E.N.: 1955, Hydromagnetic dynamo models. Astrophys. J. 122, 293. DOI. ADSMathSciNetCrossRefGoogle Scholar
Pikovsky, A., Rosenblum, M., Kurths, J.: 2001, Synchronizations: A Universal Concept in Nonlinear Sciences, Cambridge University Press, Cambridge. CrossRefzbMATHGoogle Scholar
Pitts, E., Tayler, R.J.: 1985, The adiabatic stability of stars containing magnetic-fields. 6. The influence of rotation. Mon. Not. Roy. Astron. Soc. 216, 139. DOI. ADSCrossRefGoogle Scholar
Proctor, M.: 2006, Dynamo Action and the Sun, EAS Pub. Ser. 21, 241. DOI. Google Scholar
Rädler, K., Stepanov, R.: 2006, Mean electromotive force due to turbulence of a conducting fluid in the presence of mean flow. Phys. Rev. E 73, 056311. DOI. ADSCrossRefGoogle Scholar
Rüdiger, G., Kitchatinov, L.L., Hollerbach, R.: 2013, Magnetic Processes in Astrophysics, Wiley-VCH, Berlin. CrossRefGoogle Scholar
Rüdiger, G., Schultz, M., Gellert, M., Stefani, F.: 2015, Subcritical excitation of the current-driven Tayler instability by super-rotation. Phys. Fluids 28, 014105. DOI. CrossRefGoogle Scholar
Scafetta, N.: 2010, Empirical evidence for a celestial origin of the climate oscillations and its implications. J. Atmos. Solar-Terr. Phys. 72, 951. DOI. ADSCrossRefGoogle Scholar
Scafetta, N.: 2014, The complex planetary synchronization structure of the solar system. Pattern Recogn. Phys. 2, 1. DOI. ADSCrossRefGoogle Scholar
Schmitt, D., Schüssler, M., Ferriz Mas, A.: 1996, Intermittent solar activity by an on-off dynamo. Astron. Astrophys. 311, L1. ADSGoogle Scholar
Seilmayer, M., Stefani, F., Gundrum, T., Weier, T., Gerbeth, G., Gellert, M., Rüdiger, G.: 2012, Experimental evidence for Tayler instability in a liquid metal column. Phys. Rev. Lett. 108, 244501. DOI. ADSCrossRefGoogle Scholar
Spruit, H.: 2002, Dynamo action by differential rotation in a stably stratified stellar interior. Astron. Astrophys. 381, 923. DOI. ADSCrossRefGoogle Scholar
Steenbeck, M., Krause, F.: 1969, Zur Dynamotheorie stellarer und planetarer Magnetfelder. I. Berechnung sonnenähnlicher Wechselfeldgeneratoren. Astron. Nachr. 291, 49. DOI. ADSCrossRefzbMATHGoogle Scholar
Steenbeck, M., Krause, F., Rädler, K.-H.: 1966, Berechnung der mittleren Lorentz-Feldstärke vxB für ein elektrisch leitendes Medium in turbulenter durch Coriolis-Kräfte beeinflusster Bewegung. Z. Naturforsch. A, J. Phys. Sci. 21(4), 369. DOI. ADSGoogle Scholar
Stefani, F., Kirillov, O.N.: 2015, Destabilization of rotating flows with positive shear by azimuthal magnetic fields. Phys. Rev. E 92, 051001(R). DOI. ADSCrossRefGoogle Scholar
Stix, M.: 1972, Nonlinear dynamo waves. Astron. Astrophys. 20, 9. ADSzbMATHGoogle Scholar
Svensmark, H., Friis-Christensen, E.: 1997, Variation of cosmic ray flux and global cloud coverage – A missing link in solar-climate relationships. J. Atmos. Solar-Terr. Phys. 59, 1225. DOI. ADSCrossRefGoogle Scholar
Takahashi, K.: 1968, On the relation between the solar activity cycle and the solar tidal force induced by the planets. Solar Phys. 3, 598. DOI. ADSCrossRefGoogle Scholar
Tayler, R.J.: 1973, The adiabatic stability of stars containing magnetic fields-I: Toroidal fields. Mon. Not. Roy. Astron. Soc. 161, 365. DOI. ADSCrossRefGoogle Scholar
Vainshtein, S.I., Cattaneo, F.: 1992, Nonlinear restrictions on dynamo action. Astrophys. J. 393, 165. DOI. ADSCrossRefGoogle Scholar
Weber, M.A., Fan, Y., Miesch, M.S.: 2013, Comparing simulations of rising flux tubes through the solar convection zone with observations of solar active regions: Constraining the dynamo field strength. Solar Phys. 287, 239. DOI. ADSCrossRefGoogle Scholar
Weber, N., Galindo, V., Stefani, F., Weier, T., Wondrak, T.: 2013, Numerical simulation of the Tayler instability in liquid metals. New J. Phys. 15, 043034. DOI. ADSCrossRefGoogle Scholar
Weber, N., Galindo, V., Stefani, F., Weier, T.: 2015, The Tayler instability at low magnetic Prandtl numbers: between chiral symmetry breaking and helicity oscillations. New J. Phys. 17, 113013. DOI. ADSCrossRefGoogle Scholar
Weiss, N.O., Tobias, S.M.: 2016, Supermodulation of the Sun's magnetic activity: The effect of symmetry changes. Mon. Not. Roy. Astron. Soc. 456, 2654. DOI. ADSCrossRefGoogle Scholar
Wilmot-Smith, A.L., Nandy, D., Hornig, G., Martens, P.C.H.: 2006, A time delay model for solar and stellar dynamos. Astrophys. J. 652, 696. DOI. ADSCrossRefGoogle Scholar
Wilson, I.R.G.: 2013, The Venus–Earth–Jupiter spin-orbit coupling model. Pattern Recogn. Phys. 1, 147. DOI. ADSCrossRefGoogle Scholar
Wood, K.: 1972, Sunspots and planets. Nature 240(5376), 91. DOI. ADSCrossRefGoogle Scholar
Yoshimura, H.: 1975, Solar-cycle dynamo wave propagation. Astrophys. J. 201, 740. DOI. ADSMathSciNetCrossRefGoogle Scholar
Zahn, J.-P., Brun, A.S., Mathis, S.: 2007, On magnetic instabilities and dynamo action in stellar radiation zones. Astron. Astrophys. 474, 145. DOI. ADSCrossRefzbMATHGoogle Scholar
Zhang, K., Chan, K.H., Zou, J., Liao, X., Schubert, G.: 2003, A three-dimensional spherical nonlinear interface dynamo. Astrophys. J. 596, 663. DOI. ADSCrossRefGoogle Scholar
Zhang, H., Moss, D., Kleeorin, N., Kuzanyan, K., Rogachevskii, I., Sokoloff, D., Gao, Y., Xu, H.: 2012, Current helicity of active regions as a tracer of large-scale solar magnetic helicity. Astrophys. J. 751. DOI.
1.Helmholtz-Zentrum Dresden – RossendorfDresdenGermany
Stefani, F., Giesecke, A., Weber, N. et al. Sol Phys (2016) 291: 2197. https://doi.org/10.1007/s11207-016-0968-0
Accepted 05 August 2016
First Online 01 September 2016
|
CommonCrawl
|
Mechanical analysis of a novel biodegradable zinc alloy stent based on a degradation model
Kun Peng1,
Xinyang Cui1,
Aike Qiao1 &
Yongliang Mu2
Biodegradable stents display insufficient scaffold performance due to their poor Young's Modulus. In addition, the corresponding biodegradable materials harbor weakened structures during degradation processes. Consequently, such stents have not been extensively applied in clinical therapy. In this study, the scaffold performance of a patented stent and its ability to reshape damaged vessels during degradation process were evaluated.
A common stent was chosen as a control to assess the mechanical behavior of the patented stent. Finite element analysis was used to simulate stent deployment into a 40% stenotic vessel. A material corrosion model involving uniform and stress corrosion was implemented within the finite element framework to update the stress state following degradation.
The results showed that radial recoiling ratio and mass loss ratio of the patented stent is 7.19% and 3.1%, respectively, which are definitely lower than those of the common stent with the corresponding values of 22.6% and 14.1%, respectively. Moreover, the patented stent displayed stronger scaffold performance in a corrosive environment and the plaque treated with patented stents had a larger and flatter lumen.
Owing to its improved mechanical performance, the novel biodegradable zinc alloy stent reported here has high potential as an alternative choice in surgery.
Biodegradable stents provide temporary scaffolds to stenotic vessels. They can be absorbed by the human body once the remodeling of the stenotic vessel is completed [1]. These devices have great potential for decreasing risks related to long-term biological incompatibility between permanent stents and arteries, especially in case of late thrombosis [2], in-stent restenosis [3] and hypersensitivity reactions [4, 5]. However, only few biodegradable stents have been used in clinical therapy because of their poor scaffolding ability. Both the Young's Modulus [6] and structure stability during degradation are low compared to permanent materials [7, 8] thus compromising the radial stiffness of the stent. Structural designs can be effective in improving biodegradable stents and consequently to face the related contemporary challenges.
In a previous study, we reported on a patented stent with a novel design. We confirmed that this stent has strong scaffold performance and a positive impact on the reshaping of stenotic vessels in a non-corrosive environment [9]. However, the scaffold performance in degradation conditions was not explored in that work. Indeed, the scaffold performance of biodegradable stents is extremely affected by material degradation. The structures of biodegradable stents are gradually weakened when exposed to a corrosive environment. This can even lead to mass loss if damages are severe enough. Thus, the scaffold performance of biodegradable stents is gradually decreased and eventually lost following degradation. Mechanical equilibrium between the vessel and the degraded stent evolves during the corrosion process. Thus, changes in scaffold performance significantly affect the treatment of stenotic vessels. Rapid decrease of scaffolding properties causes a severe decline of the vessel lumen size and ineffective treatment. Therefore, it is crucial to analyze the dynamics of scaffolding performance of the patented stent in a corrosive environment.
Stent degradation is a complex process simultaneously influenced by different corrosion phenomena. In previous studies, several degradation models involving uniform corrosion [10,11,12,13], stress corrosion [10, 14] and pitting corrosion [13] were reported. The corrosion mechanisms in these models were explained and changes of scaffold performance of biodegradable stents with common designs were evaluated. For example, Grogan et al. [13] developed a degradation model and predicted the corrosion effects on the mechanical integrity of bioabsorbable metallic stents. Wu et al. [14] investigated the service time of three stents with different designs. Optimized stents displayed an increase in half normalized recoil time of nearly 120% compared to common stents. Nevertheless, mechanical analyses of the stenotic vessels deployed with biodegradable stents were not performed.
Therefore, in the present study, we investigated the scaffold performance of the patented stent and its effect on reshaping stenotic vessel in a corrosive environment using finite element analysis (FEA). The patented stent and a common stent used as a control were implanted into 40% stenotic vessels. A corrosion model was subsequently applied to simulate the degradation of both stents. Radial recoiling ratio, mass loss ratio as well as von Mises stress distribution in the stents and stenotic vessels were recorded during the degradation process. It is widely established that stent geometries have strong influence on their mechanical performance. Thus, structural innovations are expected to lead to the development of high performance stents. This study will represent a significant reference for further structural designs of biodegradable stents.
Geometry models
Figure 1a, b depict the patented stent and the common stent used as a control, respectively. Both stents harbor circumferential cycle structures and are composed of six identical units. The patented stent was designed as already described in our previous study [9]. In both the patented stent and the control, two sinusoidal struts are connected by straight links. Dimensions of the struts and the links are similar for both stents. Contrary to the control stent, each unit of the patented stent contains a short strutting ring within the link which allow the stent to expand while preventing contraction. Figure 2 illustrates details of the short strutting ring and the links. The short strutting ring consists of a wedge, a connection and a stopping part. It is tied on a solid link (link A) and runs through another link (link B). More detailed dimensions of the short strutting ring and the link are shown in Fig. 3. The cooperation between the strutting ring and the link was described in a previous work [15] (Fig. 4). The stopping part can prevent the short strutting ring from sliding out from the link.
Geometries of two stent units. a The patented stent unit. b The common stent unit
Sectional view of the short strutting ring and the links
Dimensions of the short strutting ring and the link
Interaction between the strutting ring and the link. a Initial status of the strutting ring and the link. b The wedge part can be compressed through the link when the strutting ring moves from right to left. c The wedge part recoils after through the link, while the stopping part can prevent the strutting ring from sliding out the link. d The interaction between the wedge part and the link can prevent the strutting ring from moving back
Both stents are assumed to be fabricated with biodegradable zinc alloy. The biodegradable zinc alloy was made and tested at the Metallurgical Research Institute, Northeastern University, China. The biodegradable zinc alloy is made of zinc, magnesium and aluminum in the following proportions: Zn-3Al-1 Mg. The biodegradable zinc alloy is an elastic–plastic material and its properties are as follows: Young's Modulus E = 74.5 GPa, Poisson's ratio v = 0.3, yield strength = 220 MPa and ultimate strength = 325 MPa.
The mechanical behavior of vessels and plaque are highly nonlinear and these objects are assumed to be incompressible hyperelastic materials presented by a third-order Ogden and a first-order isotropic hyperelastic material model, respectively [16, 17]. The constitutive equation is described as follow:
$$W = \sum\limits_{i = 1}^{3} {\frac{{2\upmu_{i} }}{{\alpha_{i}^{2} }}} (\uplambda_{1}^{{\alpha_{i} }} + \uplambda_{2}^{{\alpha_{i} }} + \uplambda_{3}^{{\alpha_{i} }} - 3) + \sum\limits_{i = 1}^{3} {\frac{1}{D}}_{i} (J - 1)^{2i}$$
where W is the strain-energy density function. Both μi (MPa) and αi are associated with the shear behaviour of materials and Di describes material compressibility. The assumption of material incompressibility is realized by specifying a Poisson's ratio of 0.49 and infinitesimal values for \(D_{1} = \left( {D_{2} = D_{3} = 0} \right)\) [16]. The material coefficients are specified in Table 1 [16].
Table 1 Material coefficients [16]
Material degradation model
Continuum damage mechanism (CDM) illustrates the mechanical strength reduction of a material with damage accumulation [18]. The relationship between the effective stress tensor (\(\sigma\)) and the undamaged stress tensor (\(\bar{\sigma }\)) is described in Eq. (2). D, a damage variable, increases monotonously from 0 to 1. There is no damage in the material if \(D\) is equal to 0, while D = 1 means that the material completely lost its properties.
$$\sigma = \left( {1 - {\text{D}}} \right)\bar{\sigma }$$
A biodegradable material model referring to uniform and stress corrosion was built. The global damage variable D is assumed to be a linear superposition of the uniform corrosion damage DU and the stress corrosion damage DSC (as shown in Eq. 3).
$$D = D_{U} + D_{{SC}}$$
The uniform corrosion damage DU describes the mass loss of material when the material is exposed to aggressive environment. The damage evolution law of uniform corrosion process is supposed to be functions of \(\delta_{U}\), kU and Le with the following formula:
$$\dot{D}_{U} = \frac{{\delta_{U} }}{{L_{e} }}k_{U}$$
where \(\dot{D}_{U}\) means time derivative, \(k_{U}\) is a parameter related to the kinetics of the uniform corrosion process and \(\delta_{U}\) is a characteristic dimension of the uniform corrosion process. \(L_{e}\) is the characteristic length of a finite element.
DSC depicts the damage related to stress corrosion (SC) process. The damage evolution law assumed for the SC process is shown in Eq. (5) and was used by da Costa-Mattos et al. [19] to model the same phenomenon on stainless steel.
$$\left\{ {\begin{array}{ll} {\dot{D}_{SC} = \frac{{L_{e} }}{{\delta_{sc} }}\left( {\frac{{S\sigma_{eq}^{*} }}{1 - D}} \right)^{R} } & {\sigma_{eq}^{*} \ge \sigma_{th} > 0} \\ {\dot{D}_{SC} = 0} & {\sigma_{eq}^{*} < \sigma_{th} } \\ \end{array} } \right.$$
where \(\sigma_{eq}^{*}\) is the equivalent von Mises stress, and \(\sigma_{th}\) is a stress threshold under which the stress corrosion does not occur. In this model, \(\sigma_{th}\) is set to 50% of the yield stress of the biodegradable zinc alloy [20]. \(\delta_{SC}\) is a characteristic dimension of the stress corrosion process. S and R relate to the kinetics of the stress corrosion process and are a function of the corrosive environment. S and R are kept constant because the corrosive environment had a constant pH. Details of these relevant parameters are listed in Table 2 [14]. The remarkable table are given by Wu et al. [14].
Table 2 Parameters for the material degradation model [14]
The material degradation model was implemented into a finite element framework using the commercial code ABAQUS/Explicit 6.13 by means of a user subroutine (VUSDFLD). The stress state was calculated and updated in the explicit time integration during the whole corrosion process.
FEA models and meshing
As shown in Fig. 5, Model I and Model II represent the FEA models of stenotic vessels treated with the patented stent and the common reference stent, respectively. Vessel tissues in both models are illustrated by a cylinder with a length of 5 mm, an inner diameter of 4.2 mm and a wall thickness of 0.2 mm. Plaque tissues in both models have a crescent shape and are located at the middle of the vessel. Plaque tissues correspond to a maximum stenosis of 40%. The FEA models only include one-sixth of the circumferential geometries for saving computational consumption.
FEA models. a Model I. b Model II
Hexahedral and pentahedral elements were used to mesh both models with the use of Hypermesh 13.0 (Altair, USA). The types of the hexahedral and the pentahedral elements were defined as C3D6R and C3D8R, respectively. To avoid elements distortion, elements were defined hourglass allowed in analysis. Different element sizes were chosen with regards to the geometry. Mesh sensitivity was tested by decreasing element sizes of the stents by five times. This showed that the maximum stress in the different FEA models was less than 5%.
Boundary conditions and loads
ABAQUS/Explicit 6.13 was used for the simulations. Cyclic symmetric constraints were imposed on the corresponding symmetry nodes of both FEA models. Radial constraints were free. The whole simulation contained three simulation steps. The first two steps simulated the conventional stent implantation procedures. In step-1, both stents were expanded to the target radial displacement (0.66 mm) in stenotic vessel by exerting expansion pressure on the inner surfaces of the stents. An expansion pressure of 2.8 MPa or 2.1 MPa was used to inflate the patented stent or the common stent, respectively. Indeed, a little higher expansion pressure was applied within the patented stent to overcome the resistance caused by the interaction between the strutting ring and the link. The contact between the outer surface of the strut and the inner surface of the plaque was set to "surface-to-surface contact" and the friction coefficient was 0.1. In step-2, both stents recoiled under compression by the plaque-vessel tissues. In step-3, both stents were submitted to degradation based on the material degradation model. The degradation process was analyzed for both stents during damage evolution.
The stent radial recoiling ratio and mass loss ratio are defined by Eqs. (6) and (7), respectively.
$${\text{Radial recoiling ratio }} = \frac{{D_{l} - D_{s} }}{{D_{l} }} \times 100\%$$
$${\text{Mass loss ratio}} = \frac{{M_{loss} }}{{M_{initial} }} \times 100{\text{\% }}$$
where Dl is the radial expansion displacement (Dl = 0.66 mm), Ds is the radial displacement after recoiling. \(M_{loss}\) is the mass loss of the stent. \(M_{\text{initial}}\) is the initial mass of stent.
In step-1, the wedge part was compressed through the link and the patented stent was expanded to target displacement, as shown in Fig. 6a. In step-2, the patented stent was prevented from recoiling owing to the interaction between the wedge part and the link (Fig. 6b). The radial displacements of the stents are plotted in Fig. 7. The radial recoiling ratio of the patented and control stents were 6.2% and 18.2%, respectively. The radial recoiling ratio of the patented stent showed a marked decrease of 65.9%. In line with our previous study, the patented stent thus provided a stronger scaffold for stenotic vessel in a non-corrosive.
The von Mises stress distribution in Model I. a Step-1. b Step-2
Radial displacements of the two stents during the first two steps
In step-3, the simulation time, t had almost no physical meanings because of the lack of an experimental-based identification of the corrosion parameters. However, this parameter can simply represent an evolutionary variable to allow the comparison of different designs. In order to make an effective comparison between different designs, a normalized unit time (t*) based on the longest simulation time before the fracture of the stent was chosen to illustrate the results.
Figure 8 depicts von Mises stress distribution in both stents during the degradation process. While degrading, materials harbored weakened structures and mass loss. Ultimately, the common stent broke down. In contrast, only few elements failed in the patented stent at t* = 1, showing that the patented stent have a longer service time. The maximum von Mises stresses of both stents at different times are listed in Table 3. During the initial period of degradation, the stents became weaker but did not loose mass. With damage accumulation, stents were gradually degraded and some thinner parts suffered from severe deformation. This explains that the maximum von Mises stress first decreased for both stents prior to increasing. The maximum von Mises stress of the patented stent located at the contact position between the strutting ring and the link. The maximum von Mises stress of the patented stent was higher compared to the common stent, indicating strong interaction between the strutting ring and the link that continuously prevented the patented stent from recoiling. Apart from the contact position, the patented stent did not suffer from extreme deformation. Therefore, the average von Mises stress only decreased from 50.71 to 39.6 MPa in the patented stent. On the other hand, that of the common stent decreased from 80.99 to 51.9 MPa during the degradation process. These data corroborated that less recoiling and failed elements occur with the patented stent.
Von Mises stress distribution in the two stents during degradation. a Common stent. b Patented stent
Table 3 Maximum von Mises stresses at different time points
Von Mises stress distribution in a cross section of the strutting ring and the link of the patented stent are shown in Fig. 9 at t* 0, 0.5, 0.75 and 1. From t* = 0 to 1, stress corrosion attacked the touching position between the wedge part and the link B because of high stress concentration. Suffering stress corrosion, the strutting ring gradually moved back and the interaction between the strutting ring and the link was updated, as depicted by the relative position between the wedge part and the red dash line. According to the high stress region located at the wedge part and the link B, the updated interaction worked continuously to prevent the patented stent from recoiling.
Von Mises stress distribution in the strutting ring and the link of the patented stent during degradation process
Figures 10 and 11 show the mass loss and radial recoiling ratios of both stents. It is established that an element will be deleted once the accumulated damage (D) reaches a threshold value of 0.9. Below this threshold, the accumulated damage reduces the stiffness of materials. The radial recoiling ratio of both stents slowly increased during the initial period while the mass loss ratio was almost 0. The mass loss ratio of the patented stent (3.1%) was finally lower than that of the common stent (14.1%). The radial recoiling ratio of the common stent increased from 18.2 to 22.6% during degradation, while that of the patented stent increased from 6.2 to 7.19%. This mild increase indicates a lower rate of scaffold performance loss in corrosive environment.
Mass loss ratio of the two stents in degradation process
Radial recoiling ratio of the two stent in degradation process
Figures 12 and 13 show the stress distribution in the plaque-vessel tissue of Model I and Model II, respectively, when t* is equal to 0, 0.75 and 1. Plaque-vessel systems are hyperelastic and resist to deformation, thus supplying radial force to the stents. This theory held the stress distribution in plaque-vessel system of both models. From t* = 0 to 1, the maximum stress in plaques of the two models decreased with plaque recoiling. When t* = 1, the maximum stress in the plaque-vessel system of Model I (2.03 MPa) was definitely higher than that of Model II (0.85 MPa). This demonstrates that the stenotic vessel treated with the patented stent recoils less than that treated with the common stent in a corrosive environment. Therefore, the patented stent also had a positive effect on reshaping stenotic vessel during degradation.
Von Mises stress distribution in plaque-vessel systems of Model I
Von Mises stress distribution in plaque-vessel systems of Model II
In the present study, we investigated the scaffold performance of a previously described patented stent in a corrosive environment. The patented stent displayed a stronger scaffold performance and more efficiency in reshaping stenotic vessel during degradation. It has been suggested that the scaffold performance of biodegradable stents highly relies on their geometries. Benefiting from a clever design, a biodegradable alloy stent with strong scaffold performance can be an alternative choice in surgery.
A pressure of 2.1 MPa was exerted for expansion of the common stent. In previous studies, Li et al. [21] applied a pressure of 1.9 MPa for stent expansion in 30% stenotic vessels. Because a stenosis ratio of 40% was used in this study, a little higher expansion pressure was applied on both stents. Moreover, Lally et al. [22] applied a pressure of 13 MPa to expand vessels to a diameter greater than the diameter of the stent at the first step of simulation. The expansion pressure of 2.1 MPa was lower than 13 MPa, meaning that it would not damage the stenotic vessel and that it can be accepted in clinical surgery.
Corrosion mechanisms suggest that stress corrosion evolves rapidly during the degradation process [10, 14] and causes severe mass loss at the areas with high stress concentration (Figs. 8, 9). In the case of the patented stent, the clever designs of the strutting ring and the link significantly decreased the maximum stress and stress distribution in struts was uniform. A few elements disappeared in struts of the patented stent even when a crack was generated (Fig. 8). Furthermore, the updated interaction between the strutting ring and the link could continuously prevent the patented stent from recoiling (Fig. 9). Thus, this kind of stent can supply strong support for stenotic vessels although it weakens in a corrosive environment.
6–12 months are required for the remodeling process to be completed [23]. Therefore, biodegradable stents should provide enough support to stenotic vessels for this period of time. However, most biodegradable stents do not meet this requirement due to poor scaffold performance and mass loss. Thus, mass increase and structure optimization are usually targeted to improve the scaffold performance of biodegradable stents [24, 25]. Increasing mass allows the device resisting to uniform corrosion [24] and extended service time. Simultaneously, structural optimization, which contributes to uniform stress distribution on the stent [24, 25], is conductive to inhibit stress corrosion. However, the improvement of scaffold performance using these two conventional methods is limited. In comparison, expectations are greater regarding novel structure design of biodegradable stents. As is shown in Fig. 8, the common stent breaks down at t* = 1. In contrast, only few elements failed in the patented stent, which implies that the patented stent is promised to have a longer service time. Furthermore, optimization design referring to the length, width, thickness and diameter offers better mechanical performance.
Radial recoiling of was small for both stents, even when the crack occurred in the common stent (Fig. 8). The common stent did not completely loose its scaffold ability. Indeed, the other parts of the common stent still afforded the vessel scaffold. In addition, according to the interaction between the stent and the vessel, high stress distributed in the plaque-vessel system may help to stimulate endothelial hyperplasia. Biodegradable stents wrapped with growing vessels are predicted to have a prolonged service time as well as lower risks of thrombosis [26]. This interaction between biodegradable stents and arteries should be explored in further research. In further studies, hemodynamic characteristics induced by stent deployment should be investigated because the wall shear stress (WSS) and von Mises stress affects the endothelial hyperplasia and the reshaping of stenotic vessels.
One limitation is that the vessel wall as an isotropic hyperelastic material whose constitutive relationship is present as a six-parameter Odgen hyperelastic constitutive equation, although the constitutive equation of the vessel varies according to the vessel type [27, 28]. Fetamifar et al. [27] developed the lumen buckling equation for nonlinear anisotropic thick-walled arteries to determine the effect of axial tension based on exponential Fung strain function. Garcia et al. [28] used a two-fiber strain energy density function to characterize the mechanical behavior of veins under torsion. In future research, scaffold effects of the patented stent when it is deployed into stenotic vessels with different types and materials will be investigated.
In this study, the mechanical performance of the patented stent and its effect on reshaping stenotic vessels upon degradation in a corrosive environment were revealed using the FEA approach. Our results suggest that the patented stent could provide much stronger scaffold for stenotic vessels and have positive influence on reshaping stenotic vessels in a corrosive environment compared with common stents. This implies that structural innovation is very helpful for strong scaffold performance and corrosion resistance. A novel biodegradable zinc alloy stent with sufficient scaffold performance can be a new competitive intervention device for future clinical cardiovascular applications.
FEA:
CDM:
continuum damage mechanism
stress corrosion
WSS:
wall shear stress
Boland EL, Shine R, Kelly N, et al. A review of material degradation modelling for the analysis and design of bioabsorbable stents. Ann Biomed Eng. 2016;44(2):341–56.
Kuk KH, Ho JM. Coronary stent thrombosis: current insights into new drug-eluting stent designs. Chonnam Med J. 2012;48(3):141–9.
Throndson K, Sawatzky JA. Angina following percutaneous coronary intervention: in-stent restenosis. Can J Cardiovasc Nurs. 2009;19(3):16–23.
Sweeney CA, McHugh PE, McGarry JP, et al. Micromechanical methodology for fatigue in cardiovascular stents. Int J Fatigue. 2012;44:202–16.
Ormiston JA, Serruys PWS. Bioabsorbable coronary stents. Circ Cardiovasc Interv. 2009;2(3):255–60.
Karanasiou GS, Papafaklis MI, Conway C, et al. Stents: biomechanics, biomaterials, and insights from computational modeling. Ann Biomed Eng. 2017;45(4):853–72.
Barlis P, Tanigawa J, Di Mario C. Coronary bioabsorbable magnesium stent: 15-month intravascular ultrasound and optical coherence tomography findings. Eur Heart J. 2007;28(19):2319.
Tenekecioglu E, Farooq V, Bourantas CV, et al. Bioresorbable scaffolds: a new paradigm in percutaneous coronary intervention. BMC Cardiovasc Disord. 2016;16(1):38.
Peng K, Qiao A, Ohta M, et al. A novel structure design of biodegradable zinc alloy stent and its effects on reshaping stenotic vessel. 2018. Manuscript submitted for publication.
Gastaldi D, Sassi V, Petrini L, et al. Continuum damage model for bioresorbable magnesium alloy devices—application to coronary stents. J Mech Behav Biomed Mater. 2011;4(3):352–65.
Grogan JA, Leen SB, McHugh PE. A physical corrosion model for bioabsorbable metal stents. Acta Biomater. 2014;10(5):2313–22.
Grogan JA, Leen SB, McHugh PE. Computational micromechanics of bioabsorbable magnesium stents. J Mech Behav Biomed Mater. 2014;34:93–105.
Grogan JA, O'Brien BJ, Leen SB, et al. A corrosion model for bioabsorbable metallic stents. Acta Biomater. 2011;7(9):3523–33.
Wu W, Gastaldi D, Yang K, et al. Finite element analyses for design evaluation of biodegradable magnesium alloy stents in arterial vessels. Mater Sci Eng B. 2011;176(20):1733–40.
Peng K, Qiao A, Ohta M, et al. Structural design and mechanical analysis of a novel biodegradable zinc alloy stent. Comput Model Eng Sci. 2018;117(1):17–28.
Martin D, Boyle F. Finite element analysis of balloon-expandable coronary stent deployment: influence of angioplasty balloon configuration. Int J Numer Methods Biomed Eng. 2013;29(11):1161–75.
Zahedmanesh H, Lally C. Determination of the influence of stent strut thickness using the finite element method: implications for vascular injury and in-stent restenosis. Med Biol Eng Comput. 2009;47(4):385–93.
Bolotin VV, Shipkov AA. Mechanical aspects of corrosion fatigue and stress corrosion cracking. Int J Solids Struct. 2001;38(40–41):7297–318.
Da Costa-Mattos HS, Bastos IN, Gomes J. A simple model for slow strain rate and constant load corrosion tests of austenitic stainless steel in acid aqueous solution containing sodium chloride. Corros Sci. 2008;50(10):2858–66.
Atrens A, Winzer N, Dietzel W. Stress corrosion cracking of magnesium alloys. Adv Eng Mater. 2011;13(1–2):11–8.
Hongxia L, Tianshuang Q, Bao Z, et al. Design optimization of coronary stent based on finite element models. Sci World J. 2013;2013:1–10.
Lally C, Dolan F, Prendergast PJ. Cardiovascular stent design and vessel stresses: a finite element analysis. J Biomech. 2005;38(8):1574–81.
Hermawan H, Dubé D, Mantovani D. Developments in metallic biodegradable stents. Acta Biomaterialia. 2010;6(5):1693–7.
Grogan JA, Leen SB, McHugh PE. Optimizing the design of a bioabsorbable metal stent using computer simulation methods. Biomaterials. 2013;34(33):8049–60.
Wu W, Petrini L, Gastaldi D, et al. Finite element shape optimization for biodegradable magnesium alloy stents. Ann Biomed Eng. 2010;38(9):2829–40.
Boland EL, Grogan JA, Conway C, et al. Computer simulation of the mechanical behaviour of implanted biodegradable stents in a remodelling artery. JOM. 2016;68(4):1198–203.
Fatemifar F, Han HC. Effect of axial stretch on lumen collapse of arteries. J Biomech Eng. 2016;138(12):1245031–6. https://doi.org/10.1115/1.4034785.
Garcia JR, Sanyal A, Fatemifar F, et al. Twist buckling of veins under torsional loading. J Biomech. 2017;58:123–30.
All authors have made substantial contributions to design and perform of experiments, analysis and interpretation of data. All authors read and approved the final manuscript.
All of the datasets related to the current study are available from the corresponding author on reasonable request.
All the authors of the paper approved the publication of the article.
This study was supported by Major Project of Science and Technology of Beijing Municipal Education Commission and Type B Project of Beijing Natural Science Foundation (KZ201710005007).
College of Life Science and Bioengineering, Beijing University of Technology, No.100, Pingleyuan, Chaoyang District, Beijing, 100124, China
Kun Peng, Xinyang Cui & Aike Qiao
Northeastern University, Shenyang, 110819, Liaoning, China
Yongliang Mu
Kun Peng
Xinyang Cui
Aike Qiao
Correspondence to Aike Qiao.
Peng, K., Cui, X., Qiao, A. et al. Mechanical analysis of a novel biodegradable zinc alloy stent based on a degradation model. BioMed Eng OnLine 18, 39 (2019). https://doi.org/10.1186/s12938-019-0661-2
Biodegradable stent
Stent design
|
CommonCrawl
|
benzaldehyde resonance structure
HomeAll Posts...benzaldehyde resonance structure
[17] Toxicology studies indicate that it is safe and non-carcinogenic in the concentrations used for foods and cosmetics,[17] and may even have anti-carcinogenic (anti-cancer) properties. If you are at an office or shared network, you can ask the network administrator to run a scan across the network looking for misconfigured or infected devices. 4-hydroxybenzaldehyde is a hydroxybenzaldehyde that is benzaldehyde substituted with a hydroxy group at position C-4. Check All That Apply.
View more. Benzaldehyde is commonly employed to confer almond flavor to foods and scented products. Completing the CAPTCHA proves you are a human and gives you temporary access to the web property. Benzaldehyde is also a precursor to certain acridine dyes. [18] A small amount of benzaldehyde solution is placed on a fume board near the honeycombs. Benzaldehyde (C6H5CHO) is an organic compound consisting of a benzene ring with a formyl substituent. • Join now. It has a role as a plant metabolite, a mouse metabolite and an EC 1.14.17.1 ( dopamine beta-monooxygenase) inhibitor. Performance & security by Cloudflare, Please complete the security check to access. Benzaldehyde is found in many foods and is widely used in the chemical industry.
[20], As used in food, cosmetics, pharmaceuticals, and soap, benzaldehyde is "generally regarded as safe" (GRAS) by the US FDA[21] and FEMA. Resonance structure of benzaldehyde resonance structure of benzaldehyde Ask for details ; Follow Report by Eswarchaitanya9006 22.06.2019 60 views. Related Questions. [13], Benzaldehyde and similar chemicals occur naturally in many foods. Almonds, apricots, apples, and cherry kernels contain significant amounts of amygdalin. The aniline dye malachite green is prepared from benzaldehyde and dimethylaniline. [17] It is metabolized and then excreted in urine. "Site-specific nuclear magnetic resonance spectroscopy", which evaluates 1 H/ 2 H isotope ratios, has been used to differentiate between naturally occurring and synthetic benzaldehyde. Reaction of benzaldehyde with anhydrous sodium acetate and acetic anhydride yields cinnamic acid, while alcoholic potassium cyanide can be used to catalyze the condensation of benzaldehyde to benzoin. Log in.
The synthesis of mandelic acid starts with the addition of hydrocyanic acid to benzaldehyde: The resulting cyanohydrin is hydrolysed to mandelic acid. Join now. Which Position(s) On The Aromatic Ring Are Most Likely To React With An Electrophile? Resonance structure of benzaldehyde resonance structure of benzaldehyde - 10683711 1. Author has 1.3k answers and 164.3k answer views. [11], A significant quantity of natural benzaldehyde is produced from cinnamaldehyde obtained from cassia oil by the retro-aldol reaction:[10] the cinnamaldehyde is heated in an aqueous/alcoholic solution between 90 °C and 150 °C with a base (most commonly sodium carbonate or bicarbonate) for 5 to 80 hours,[12] followed by distillation of the formed benzaldehyde. Tetraphenylporphyrin has a strong absorption band with maximum at 419 nm (so called Soret band) and four weak bands with maxima at 515, 550, 593 and 649 nm (so called Q-bands). Benzaldehyde is used as a bee repellent. Your IP: 157.230.25.18 You may need to download version 2.0 now from the Chrome Web Store. (1985), United States Environmental Protection Agency, "Untersuchungen über das Radikal der Benzoesäure", "The benzaldehyde oxidation paradox explained by the interception of peroxy radical by benzyl alcohol", "Toxicity of JUUL Fluids and Aerosols Correlates Strongly with Nicotine and Some Flavor Chemical Concentrations", "Health and Environmental Effects Profile for Benzaldehyde", SIDS Initial Assessment Report for benzaldehyde, Organisation for Economic Co-operation and Development, Benzaldehyde description at ChemicalLand21.com, https://en.wikipedia.org/w/index.php?title=Benzaldehyde&oldid=986932229, Creative Commons Attribution-ShareAlike License, This page was last edited on 3 November 2020, at 20:51. Answer to Draw all secondary resonance structures for benzaldehyde. In industrial settings, benzaldehyde is used chiefly as a precursor to other organic compounds, ranging from pharmaceuticals to plastic additives. [19] The beekeeper can then remove the honey frames from the bee hive with less risk to both bees and beekeeper. The natural status of benzaldehyde obtained in this way is controversial. Sep 15, 2012 . [14] This status was reaffirmed after a review in 2005. The benzoic acid show resonance. [17], Except where otherwise noted, data are given for materials in their, Scott, Howard R. and Scott, Lillian E. (1920), In 1803 C. Martrès published a manuscript on the oil of bitter almonds: "Recherches sur la nature et le siège de l'amertume et de l'odeur des amandes amères" (Research on the nature and location of the bitterness and the smell of bitter almonds).
[7] Further work on the oil by Pierre Robiquet and Antoine Boutron-Charlard, two French chemists, produced benzaldehyde. Question: Draw All Secondary Resonance Structures For Benzaldehyde. The net sum of valid resonance structures is defined as a resonance hybrid, which represents the overall delocalization of electrons within the molecule.
By Observing The Resonance Structures Above, At What Position(s) Will Electrophilic Aromatic Substitution Occur?
Numerous other methods have been developed, such as the partial oxidation of benzyl alcohol, alkali hydrolysis of benzal chloride, and the carbonylation of benzene. Draw all secondary resonance. [8] In 1832, Friedrich Wöhler and Justus von Liebig first synthesized benzaldehyde. Occurrence. Resonance structures are used when one Lewis structure for a single molecule cannot fully describe the bonding that takes place between neighboring atoms relative to the empirical data for the actual bond lengths between those atoms. [5] Synthetic benzaldehyde is the flavoring agent in imitation almond extract, which is used to flavor cakes and other baked goods.
Benzyl alcohol can be formed from benzaldehyde by means of hydrogenation. Show transcribed image text . Cloudflare Ray ID: 5f8167af1f0d0746 Since the boiling point of benzoic acid is much higher than that of benzaldehyde, it may be purified by distillation.
However, the memoir was largely ignored until an extract was published in 1819: Martrès, CS1 maint: multiple names: authors list (, Brühne, Friedrich and Wright, Elaine (2002) "Benzaldehyde" in, Wienes, Charles and Pittet, Alan O. This reaction also yields acetaldehyde. The resonating structures are as follows - I They are given below. Ask your question. 1 ... ces structures, c'est-à-dire que sa structure réelle est une moyenne de toutes ces formes limites. Is the aldehyde group electron donating or electron withdrawin. This glycoside breaks up under enzyme catalysis into benzaldehyde, hydrogen cyanide and two equivalents of glucose. Log in. I Want To Scream At The Top Of My Lungs Lyrics, Mortal Kombat: Shaolin Monks All Red Coins, Introductory And Intermediate Algebra For College Students, Books A La Carte Edition Plus Mylab Math. [17], For a 70 kg human, the lethal dose is estimated at 50 mL.
[10] Liquid phase chlorination and oxidation of toluene are the main routes.
It shows red fluorescence with maxima at 649 and 717 nm. It is a colorless liquid with a characteristic almond-like odor. It is sometimes used in cosmetics products.[17]. We do not say resonating structures, but resonance structures. Benzaldehyde contributes to the scent of oyster mushrooms (Pleurotus ostreatus).[15]. 1. Question: Add Curved Arrows To The Resonance Structures For Benzaldehyde To Illustrate The Electron-withdrawing Groups Effect On The Aromatic Ring.
See the answer. Another way to prevent getting this page in the future is to use Privacy Pass. Significant amount of geminal diol of benzaldehyde exists in an aqueous solution of benzaldehyde at 25 °C because $\mathrm{p}K_{\text{hyd}} = 2$ (). It has a role as an insect repellent, a human urinary metabolite, a plant metabolite and a bacterial metabolite. (The scheme above depicts only one of the two formed enantiomers).
Most of the benzaldehyde that people eat is from natural plant foods, such as almonds.[14]. Correction. It is more acidic than aliphatic acid because the carboxylate ion is stabilised by resonance.
Loyd Grossman Tomato And Basil Recipes, Salmonella Outbreak 2020, Medical Infrared Thermometer Manual, New World Warblers Sound, Yugioh Gba Rom Hack, Mexican Rice With Minute Rice And Salsa, Inductive Sensor Advantages And Disadvantages, Cajun Power Promo Code, Ube Kutsinta Without Lye Water, Oni Knee Sleeves,
|
CommonCrawl
|
Physics Stack Exchange is a question and answer site for active researchers, academics and students of physics. It only takes a minute to sign up.
What does it mean when an isotope is stable?
Does stable mean that an isotope has a very long half life, for example xenon-124 has a half life of 1.8 x 10^22 years, or does it mean that fissure is theoretically not possible, or does it mean that the isotope has a very long half life, but the exact number is unknown?
nuclear-physics stability isotopes
Qmechanic♦
inf3rnoinf3rno
Does stable mean that an isotope has a very long half life... or does it mean that fissure is theoretically not possible, or does it mean that the isotope has a very long half life, but the exact number is unknown?
"Stable" effectively means that there is no experimental evidence that it decays. However, there are nuances within that statement.
Most of the "stable" light nuclei can also be shown to be theoretically stable. Such nuclei would have to absorb energy to decay via any of the known decay modes, and so such decay cannot happen spontaneously.
Many heavier nuclei are energetically stable to most known decay modes (alpha, beta, double beta, etc.) but could potentially release energy via spontaneous fission. However, they have never been observed to do so; so for all practical purposes they are considered stable.
Some nuclei could potentially release energy via emission of small particles (alpha, beta, etc.), but have never actually been observed to do so. Such nuclei are often called "observationally stable".
Several nuclides are radioactive, but have half-lives so long that they don't decay significantly over the age of the Earth. These are the radioactive primordial nuclides; your example of xenon-124 is one of them.
Note that nuclides can in principle be moved from categories 2 or 3 into category 4 via experimental observations. For example, bismuth was long thought to be the heaviest element with a stable isotope. However, in 2003, its lone primordial isotope (bismuth-209) was observed to decay via alpha emission, with a half-life of $\approx 10^{19}$ years.
One could defensibly claim that the nuclei in categories 2 & 3 are radioactive but their half-life is unknown; after all, the totalitarian principle says that any quantum-mechanical process that is not forbidden is compulsory. If you want to take this perspective, though, you have to assume that we have a good enough grasp on nuclear physics to know what is forbidden or not.
Michael SeifertMichael Seifert
$\begingroup$ I just expected that basic things like this have a good definition in physics. After all it is a hard science. $\endgroup$ – inf3rno Jul 13 '20 at 19:29
$\begingroup$ "Hard science" doesn't imply "unambiguous definition of words". Even in math sometimes different authors mean slightly different things by the same word - the only thing that's important is that it's unambiguous in context! $\endgroup$ – ManfP Jul 14 '20 at 15:37
$\begingroup$ @inf3rno: And yet, here we are. $\endgroup$ – Michael Seifert Jul 14 '20 at 18:15
$\begingroup$ For category 3 there are a great many "stable" nuclides with even numbers of neutrons and protons that would need to undergo double beta decay to avoid higher energy nuclides. They all almost certainly will never be observed to decay even though it may be possible $\endgroup$ – Steve Cox Jul 14 '20 at 19:34
$\begingroup$ @inf3rno I'd imagine an astronomer might be interested in very different time scales than, say, a nuclear engineer. And from a purely theoretical standpoint the difference between "the radiation of one gram will quickly kill you" and "has decayed approximately twice since the universe existed" might not even be all that interesting. $\endgroup$ – ManfP Jul 15 '20 at 20:39
This half-life of $1.8\cdot 10^{22}$ years was actually measured. At first glance it seems impossible to measure such a long half-life. But let's go through the numbers to see that is indeed scarcely measurable.
The actual measurement has been done with the XENON1T detector. This experiment used 3 tons of liquid xenon, which are around $10^{28}$ xenon atoms. Natural xenon is known to contain $0.1$ % of the isotope xenon-124. So they had around $10^{25}$ xenon-124 atoms. The experiment detected a few xenon-124 atoms per day decaying to tellurium-124 by double electron capture (see "Dark-matter detector observes exotic nuclear decay"). Now you can use $\frac{dN}{dt}=-\frac{N}{t_{1/2}}\ln(2)$, and find the half-life of xenon-124 to be $t_{1/2}=1.8\cdot 10^{22}$ years.
Thomas FritschThomas Fritsch
$\begingroup$ Related: What are the longest half-lives we can detect experimentally? What stops us going further? Are we trying to? $\endgroup$ – Emilio Pisanty Jul 14 '20 at 19:04
A stable isotope does not decay naturally. Usually through experiments, one can determine a lower limit for the half-life. Theory or location on the chart of Nucleids may predict this to be an unstable isotope but the half life-life is either non-existent (stable isotope) or is too long to measure. Since it's thought to be unstable it listed with a long half-life.
jmhjmh
$\begingroup$ Doesn't everything decay, given enough time? $\endgroup$ – vsz Jul 15 '20 at 9:27
$\begingroup$ @vsz no, there are isotopes for which we know that they can't decay without gaining energy from somewhere. It's just that not all "stable" isotopes are like that, some could decay but either don't or do it very, very rarely. $\endgroup$ – Peteris Jul 15 '20 at 17:04
$\begingroup$ @vsz there is no predicted decay of massless particles -- that's just photons for what we know. Otherwise everything is predicted to decay. However, Proton decay, while predicted, has never been observed. That's not because they never tried either. $\endgroup$ – Martijn Jul 16 '20 at 8:03
$\begingroup$ @Martijn : well, if the half-life of something is long enough, it's unlikely to observe it decaying during a human lifetime... $\endgroup$ – vsz Jul 16 '20 at 8:14
$\begingroup$ @vsz that depends on how many of them you have. $\endgroup$ – Martijn Jul 16 '20 at 8:46
Thanks for contributing an answer to Physics Stack Exchange!
Not the answer you're looking for? Browse other questions tagged nuclear-physics stability isotopes or ask your own question.
What are the longest half-lives we can detect experimentally? What stops us going further? Are we trying to?
Why some nuclei with "magic" numbers of neutrons have a half-life less than their neighbor isotopes?
Why is technetium unstable?
Why does the metastable form of Technetium-95 have a longer half-life than its most stable state?
Is it possible that every single isotope is radioactive, and isotopes which we call stable are actually unstable but have an extremely long half-life?
Why is caesium-137 more stable than caesium-134?
Question on two-neutrino double electron capture
Why are all isotopes of tungsten considered (theoretically) unstable?
|
CommonCrawl
|
<< Week of April 08 >>
SOGA Open Hours
Tour/Open House | January 28 – May 13, 2018 every Sunday | 10 a.m.-12 p.m. | Garden Location: on the corner of Walnut St. and Virginia St. in north Berkeley, CA
Sponsor: Campus Gardens
SOGA is an educational space designed for the community to share knowledge about organic urban agriculture and self-sufficiency in the food system. This type of experiential learning takes place during open volunteer hours, workshops, and DeCals (student-led courses at UCB). The garden, located on UC Berkeley property, is managed by undergraduate students and funded primarily through grants from... More >
Film - Feature | April 8 | 12:30 p.m. | Berkeley Art Museum and Pacific Film Archive
Sponsor: Berkeley Art Museum and Pacific Film Archive
Bringing to light a little-known piece of Cuban history, this moving and understated medical drama set in 1989 Havana tells the story of a Russian teacher drafted to serve as a translator for children from Chernobyl. Westworld's Rodrigo Santoro stars.
Baseball vs. Utah
Sport - Intercollegiate - Baseball/Softball | April 8 | 1:05 p.m. | Evans Field
Sponsor: Cal Bears Intercollegiate Sports
Cal Baseball hosts Utah in conference action at Evans Diamond.
Docent-led tour
Tour/Open House | January 6, 2017 – December 30, 2018 every Sunday, Thursday, Friday & Saturday with exceptions | 1:30-2:45 p.m. | UC Botanical Garden
Sponsor: Botanical Garden
Join us for a free, docent-led tour of the Garden as we explore interesting plant species, learn about the vast collection, and see what is currently in bloom. Meet at the Entry Plaza.
Free with Garden admission
Advanced registration not required
Tours may be cancelled without notice.
For day-of inquiries, please call 510-643-2755
For tour questions, please email [email protected]... More >
Film - Feature | April 8 | 3 p.m. | Berkeley Art Museum and Pacific Film Archive
The seductions and disillusionments of city life play counterpoint to provincial goodness in this morality tale of a young daughter pulled between the worlds of her two mothers.
Seattle Symphony
Performing Arts - Music | April 8 | 3-5 p.m. | Zellerbach Hall
Sponsor: Cal Performances
Saturday, April 7, 8pm
John Luther Adams/Become Desert (California Premiere)
featuring Volti San Francisco; Robert Geary, artistic director
Sibelius/Symphony No. 2 in D Major, Op. 43
Sunday, April 8, 3pm
Sibelius/The Oceanides, Op. 73
Britten/Four Sea Interludes from Peter Grimes, Op. 33a
John Luther Adams/Become Ocean
This performance is part of Cal Performances' Berkeley RADICAL... More >
Tickets required: $38-98 prices subject to change
Ticket info: Buy tickets online or by calling 510-642-9988, or by emailing [email protected]
Film - Feature | April 8 | 3:15 p.m. | Berkeley Art Museum and Pacific Film Archive
Power struggles and moral compromises feed an escalating conflict when an uncompromising fish farmer clashes with his neighbor and a powerful company that sets its sights on his land. Winner of the Un Certain Regard prize at Cannes.
The Shape of a Surface: Experimental Shorts
In this diverse program showcasing the film medium itself, history and the world are reframed. Featuring works by arc, Stephanie Barber, Paul Clipson, Nazli Dincel, Jim Jennings, Pablo Mazzolo, Alee Peoples, and Jennifer Separzadeh.
Performing Arts - Music | April 8 | 8-10 p.m. | CNMAT (1750 Arch St.)
Sponsor: Center for New Music and Audio Technologies (CNMAT)
scapegoat is an experimental saxophone and percussion duo. Close creative collaboration and multi-media projects form the basis of their pursuit for artistic innovation and expression.
Tickets: $10 General, $5 Students and seniors
A minister finds newfound meaning and reawakened desire when a lovely parishioner seeks his counsel in this film from legendary writer/director Paul Schrader. With memorable performances by Ethan Hawke, Amanda Seyfried, and Cedric the Entertainer.
CNMAT Users Group presents: Scapegoat
Performing Arts - Music | April 8 | 8 p.m. | CNMAT (1750 Arch St.)
Sponsor: CNMAT (Center for New Music and Technology)
scapegoat is an experimental saxophone and percussion duo. Close creative collaboration and multi-media projects form the basis of their pursuit for artistic innovation and expression. Programmes are designed to broaden and
challenge the musical experience of the audience, through original works featuring live electronics, performer controlled sonic and visual amplification, video and lighting... More >
Tickets: $10 G.A., $5 seniors and students
What's Happening in Federal Court?: Recent Findings and Strategies for the Future
Conference/Symposium | April 9 | 9 a.m.-7 p.m. | Bancroft Hotel, Great Hall
Location: 2680 Bancroft Way, Berkeley, CA 94704
Sponsors: Law, Boalt School of, Civil Justice Research Initiative
"What's Happening in Federal Court?" is the inaugural symposium of the Civil Justice Research Initiative at Berkeley Law. It will bring together leading legal scholars and social scientists from around the United States to share their research and discuss the legal process in federal courts around the country. The... More >
Institutional Coordination in Asia-Pacific Disaster Management
Conference/Symposium | April 9 | 9 a.m.-6 p.m. | 180 Doe Library
Sponsors: Institute of East Asian Studies (IEAS), Center for Chinese Studies (CCS), BASC, UC San Diego Medical School, Center for Southeast Asia Studies, Center for Korean Studies (CKS), Center for Japanese Studies (CJS)
East Asian countries frequently face earthquakes, tsunamis, tropical storms, flooding, and landslides, leading to the proliferation of actors in the disaster management sphere. Indeed, the private sector, military, non-governmental and governmental organizations, and national and regional bureaucracies are involved in providing different services across phases of disaster management... More >
Rani D. Mullen | China and India in Afghanistan: A long-term strategic loss for Afghanistan or a win-win for all?
Lecture | April 9 | 12-1:30 p.m. | Stephens Hall, 10 (ISAS Conference Room)
Speaker: Rani D. Mullen, Associate Professor of Government at the College of William and Mary
Moderator: Lowell Dittmer, Professor of Political Science, UC Berkeley
Sponsors: Institute for South Asia Studies, Institute of East Asian Studies (IEAS), Department of Political Science, Center for Chinese Studies (CCS)
Dr. Rani D. Mullen, Associate Professor of Government at the College of William and Mary
Graduate Students Seminar
Seminar | April 9 | 12-1 p.m. | 489 Minor Hall
Speakers/Performers: Stephanie Wan, UC Berkeley, Fleiszig Lab; Kathryn Bonnen, University of Texas at Austin, Huk Lab
Sponsor: Neuroscience Institute, Helen Wills
Stephanie Wan's Talk Title: Impact of contact lens wear and dry eye on the amicrobiomic status of the murine cornea
Abstract: Contrasting with the conjunctiva and other exposed body surfaces, the cornea does not host a stable bacterial population (amicrobiomic). Yet, the cornea and conjunctiva are not usually distinguished in ocular surface microbiome research. Additionally, commonly used... More >
Post-earthquake damage assessment, earthquake damage repair and seismic vulnerability assessment of the Washington Monument: Semm Seminar
Seminar | April 9 | 12-1 p.m. | 502 Davis Hall
Speaker/Performer: Terrence Paret, Wiss, Janey, Elstner Assoc. Inc.
Sponsor: Civil and Environmental Engineering (CEE)
On August 23, 2011, the Washington Monument was subjected to ground shaking from the Magnitude 5.8 Mineral, Virginia earthquake, whose epicenter was roughly 80 miles from the National Mall in Washington, D.C. Shaking of the 555-foot tall unreinforced stone masonry structure resulted in damage, most significantly to the pyramidion, the construction comprising its upper 55 feet.
Edible Book Festival
Special Event | April 9 | 12-1:30 p.m. | 405 Moffitt Undergraduate Library
Sponsor: Library
The Berkeley University Library is pleased to host its second Edible Book Festival!
Edible Book Festivals feature creative food projects that draw their inspiration from books and stories. Edible books might physically resemble books, or they might refer to an aspect of a story, or they might incorporate text.
All members of the UC Berkeley community are encouraged to participate! The... More >
Attendance restrictions: The Edible Books Festival is open to anyone with a CalID.
From Congress to a University Presidency - Notes on Leading a Liberal Arts Institution
Seminar | April 9 | 12-1:30 p.m. | Moses Hall, Harris Room (119 Moses Hall)
Speaker: Stephanie Herseth Sandlin, President, Augustana University
Sponsors: Center for Studies in Higher Education , Institute of Governmental Studies, Robert T. Matsui Center for Politics and Public Service
Augustana University President Stephanie Herseth Sandlin, who also served in the Congress for seven years, discusses her experiences as a higher education leader, and as a member of Congress setting national education policy. Augustana University, located in Sioux Falls, South Dakota, serves more than 2,000 students from 33 states and 32 countries, offering more than 100 majors, minors and... More >
Registration recommended
Registration info: Register online
Trans Memoir/Memory: Migrations and Territories of Racial Gender Becoming
Lecture | April 9 | 12-2 p.m. | 602 Barrows Hall
Speaker: Jian Chen, Assistant Professor of English, The Ohio State University
Sponsor: Haas Institute for a Fair and Inclusive Society
Janet Mock's coming-of-age stories as a Black and Native Hawaiian trans woman in Redefining Realness (2014) create points of transmission between cis-heterosexual civil society and emergent transgender, especially trans of color, communities in the second decade of the twenty-first century.
Combinatorics Seminar: Combinatorics of X-variables in finite type cluster algebras
Seminar | April 9 | 12-1 p.m. | 939 Evans Hall
Speaker: Melissa Sherman-Bennett, UC Berkeley
Sponsor: Department of Mathematics
A cluster algebra is a commutative ring determined by an initial "seed," which consists of A-variables, X-variables, and some additional data. Given a seed, one can produce new seeds via a combinatorial process called mutation. The cluster algebra is generated by the variables obtained from all possible sequences of mutations. In this talk, we will focus on cluster algebras of finite type, which... More >
What is Torture and How Did We Get Here?
Panel Discussion | April 9 | 12:45-2 p.m. | 170 Boalt Hall, School of Law
Sponsors: Berkeley Law Committee Against Torture, Human Rights Center
What is torture and how did we get here? Torture has been prevalent both domestically, within the prison industrial complex, and as a part of the "war on terror." This discussion will focus on the origins of modern forms of torture and they ways in which torture has been employed by U.S. officials both within and outside of the U.S.
Joined by Prof. Laurel Fletcher, Prof. Jonathan Simon & Brad... More >
Mapping the History of Aesthetic Concepts
Lecture | April 9 | 2-5 p.m. | Doe Library, Visual REsource Center
Featured Speaker: Pete de Bolla, Professor of Cultural History and Aesthetics, University of Cambridge, the Faculty of English
Speaker: Ewan Jones, University Lecturer in the Nineteenth Century, University of Cambridge, the Faculty of English
Sponsors: Department of English, Townsend Center for the Humanities
A presentation and discussion of the Concept Lab's work on the structure and data of social/intellectual "concepts."
The Concept Lab studies the architectures of conceptual forms. It is committed to the view that concepts are not equivalent to the meanings of the words which express them. The Lab considers conceptual architectures as generating structured environments for sensing that one has... More >
Seminar 211, Economic History: Time for Growth: The Public Mechanical Clock
Seminar | April 9 | 2-3:30 p.m. | 639 Evans Hall
Featured Speaker: Battista Severgini, Copenhagen Business School
Sponsor: Department of Economics
Cognitive Neurosciences Seminar
Seminar | April 9 | 3-4:30 p.m. | 5101 Tolman Hall
Speaker/Performer: Dr. Zhaoping Li, Computer Sciences. UCL
Sponsor: Department of Psychology
Abstract: Investigations in the recent years have revealed an important functional role of the primary visual cortex (V1): it creates a bottom-up saliency map to guide attentional shifts exogenously. I will review these findings to motivate a new path to understanding vision. This new path views vision as made of three stages: encoding, selection, and decoding; the selection and decoding stages... More >
Tracking the Concept of Government, 1700-1800: University of Cambridge Concept Lab
Lecture | April 9 | 3-5 p.m. | Doe Library, Doe 308A, Visual Resource Center
Sponsor: Digital Humanities at Berkeley
In the final event for the 2018 DH Faire, Peter de Bolla and Ewan Jones from the University of Cambridge Concept Lab will showcase a range of techniques that build upon and refine procedures common to corpus linguistics, such as pointwise mutual information. We will also chart a number of specific case studies, using the large dataset of Eighteenth Century Collections Online so as to demonstrate... More >
STROBE Seminar Series: 3-Minute Thesis Graduate Student Talks
Seminar | April 9 | 3-4 p.m. | 433 Latimer Hall
Sponsor: College of Chemistry
Please join us for two special STROBE Seminars on April 9 & 16 at 3 PM PT/4 PM MT. The graduate students will be presenting their engaging 3-Minute Thesis Talks. See GoToMeeting and flyer information below.
A reminder that STROBE will be sending 3-4 senior graduate students to the NSF STC Professional Development workshop in early August 2018 (here's a link to last year's workshop website:... More >
Arithmetic Geometry and Number Theory RTG Seminar: Ordinary primes in Hilbert modular varieties
Seminar | April 9 | 3:10-5 p.m. | 748 Evans Hall
Speaker: Junecue Suh, UCSC
A well-known conjecture (often attributed to Serre) asserts that any motive over any number field has infinitely many ordinary primes, in the sense of the Newton Polygon coinciding with the Hodge Polygon. We will present a few methods for producing more ordinary primes in the case of modular Jacobians — and more generally the part of the (intersection) cohomology of Hilbert modular varieties... More >
Genetics and education: Recent developments in the context of an ugly history and an uncertain future
Colloquium | April 9 | 4-5:30 p.m. | 2515 Tolman Hall
Speaker/Performer: Ben Domingue, Stanford Graduate School of Education, Stanford University; Institute of Behavioral Science, University of Colorado Boulder
Sponsor: Graduate School of Education
Driven by our recent mapping of the human genome, genetics research is increasingly prominent and is likely to re-intersect with education research. I begin by giving background on the current state of the art regarding methods for linking genotype to phenotype, focusing specifically on molecular genetics and genome-wide association studies. I emphasize both what genetic studies of educational... More >
Complicity and Dissent: Literature in the Cold War
Lecture | April 9 | 4-6 p.m. | 300 Wheeler Hall
Featured Speaker: Duncan White, Lecturer on History and Literature, Harvard University
Sponsors: Department of English, Department of Slavic Languages and Literatures
At the outbreak of the Second World War Vladimir Nabokov stood on the brink of losing everything all over again. The reputation he had built as the pre-eminent Russian novelist in exile was imperilled. In Nabokov and his Books, Duncan White shows how Nabokov went to America and not only reinvented himself as an American writer but also used the success of Lolita to rescue those Russian books that... More >
Transport and biosynthesis of a novel copper-chelating natural product
Seminar | April 9 | 4-5 p.m. | 106 Stanley Hall
Speaker: Amy Rosenzweig, Northwestern University
Seminar 208, Microeconomic Theory: "Additive-Belief-Based Preferences"
Featured Speaker: David Dillenberger, University of Pennsylvania
Scheiber Lecture: What Lies Ahead for the Ocean
Lecture | April 9 | 4-6 p.m. | Boalt Hall, School of Law, Room 100
Speaker/Performer: Ronán Long, World Maritime University
Sponsor: Law of the Sea Institute
Please join us for the first annual Harry and Jane Scheiber Lecture in Ocean Law and Policy. Professor Ronán Long, Director of the WMU-Sasakawa Global Ocean Institute at the World Maritime University, will provide the inaugural lecture and explore the future of global ocean governance amidst increasing ecological and political challenges.
IB Finishing Talk: Range-wide Population Dynamics in Heterogeneous Landscapes – A Case Study of a Xeric-adapted Alpine Plant
Seminar | April 9 | 4-5 p.m. | 2040 Valley Life Sciences Building
Featured Speaker: Meagan Oldfather, UCB (Ackerly Lab)
Sponsor: Department of Integrative Biology
Seminar 271, Development: "Competition in Network Industries: Evidence from Mobile Telecommunications in Rwanda"
Featured Speaker: Daniel Björkegren, Brown University
Towards a Subaltern History of the Crusades?
Lecture | April 9 | 5-7 p.m. | 3335 Dwinelle Hall
Speaker: Christopher J. Tyerman, Oxford University Professor of the History of the Crusades
Sponsors: Department of History, Medieval Studies Program, Berkeley Center for the Study of Religion
Christopher J. Tyerman is Professor of the History of the Crusades at Oxford University. His research considers the cultural, religious, political and social phenomenon of crusading in medieval Western Europe between the eleventh and sixteenth centuries. He has published widely on various aspects of the crusades and on crusade historiography from the Middle Ages to the present day. Recent books... More >
Imagining The Future Of War
Lecture | April 9 | 5-6:30 p.m. | Alumni House, Toll Room
Speaker/Performer: Sir Lawrence Freedman, Emeritus Professor of War Studies, King's College London
Sponsor: Institute of International Studies
Professor Sir Lawrence Freedman was Professor of War Studies at King's College London from 1982 to 2014 and Vice-Principal from 2003 to 2013. He was educated at Whitley Bay Grammar School and the Universities of Manchester, York and Oxford. Before joining King's, he held research appointments at Nuffield College Oxford, IISS and the Royal Institute of International Affairs. Elected a Fellow of... More >
Swahili Weekly Social Hour
Social Event | January 22 – April 30, 2018 every Monday with exceptions | 5:30-6:30 p.m. | Jupiter
Location: 2181 Shattuck Avenue, Berkeley
Sponsor: Center for African Studies
Speak Swahili with your fellow Swahili students and enthusiasts over a drink at Jupiter (check for location updates). This is an informal gathering to connect with other Swahili speakers on campus and in Berkeley. Each person will support their own beverage purchases (water, soda, coffee, tea, beer, etc.), but we will provide the good company! And of course, Swahili speaking only! All skill and... More >
Proud to be "Tribeless" - Cato Institute President Peter Goettler at the Berkeley Forum
Lecture | April 9 | 6-7 p.m. | 125 Morrison Hall
Speaker/Performer: Peter Goettler, Cato Institute
Sponsor: The Berkeley Forum
In our lifetimes, we've never seen a more divisive period in American politics. According to the Pew Research Center, the partisan gap on political values is now the widest it has been in decades. But is this divide based on actual principles, or merely on differentiating ourselves from the other political "tribe"? Peter Goettler, president of the Cato Institute, will make the case that tribalism... More >
Tickets recommended
30th Annual I-House Annual Celebration and Awards Gala: Welcoming UC Berkeley Chancellor Carol T. Christ and honoring the 2018 I-House Alumni of the Year
Special Event | April 9 | 6 p.m. | International House
Sponsor: International House
6 pm Music & Mingling Dinner & Program Follow
$150 per person for Young Diplomats (35 years of years of age and under)
Chevron Auditorium International House Berkeley
2299 Piedmont Avenue, Berkeley, CA
Valet Parking Provided
Formal cocktail & international attire encouraged Catering provided by International House
Limited seating is available. RSVP by Tuesday, April 3,... More >
Registration info: Register online or by calling 510-642-4128, or by emailing [email protected]
ihouse.berkeley.edu/gala
30th Annual I-House Annual Celebration and Awards Gala
Proud to be "Tribeless": Cato Institute President Peter Goettler at the Berkeley Forum
Lecture | April 9 | 6-7:30 p.m. | 125 Morrison Hall
Tickets recommended: Free
Disability and Climate Resilience
Workshop | April 9 | 6-8 p.m. | 88 Dwinelle Hall
Sponsor: Student Environmental Resource Center
Come join SERC for an important workshop on the intersectionality of disability and climate resilience, something that often gets left out of the conversation when talking about climate change.
Leading this workshop is Alex Ghenis and Marsha Saxton from the World Institute of Disability. Alex is a Policy and Research Specialist at WID. He is currently managing the New Earth Disability (NED)... More >
Smart City of Edinburgh: Routing Enlightenment, 1660-1750
Lecture | April 9 | 6-7:30 p.m. | 300 Wheeler Hall
Featured Speaker: Murray Pittock, Professor, University of Glasgow, School of Critical Studies
Sponsor: Florence Green Bixby Chair in English
Using data, evidence and the models provided by modern innovation and urban studies theory, "Smart City of Edinburgh" identifies the particular features of Edinburgh which made the Enlightenment possible. Focused on culture, society, education and cosmopolitan networks rather than people and ideas, it identifies the special qualities of 'Enlightenment' as a term rather than the controversial... More >
LAEP Lecture: Christophe Girot
Lecture | April 9 | 6-7 p.m. | 112 Wurster Hall
Sponsor: College of Environmental Design
MON, APRIL, 6:00PM - Christophe Girot is Professor and Chair of Landscape Architecture at the Architecture Department of the ETH (Swiss Federal Institute of Technology) in Zürich since 2001.
UROC DeCal – Demystifying the Research Process: Decolonizing Methods in Academic Research (Hosted by UROC: Undergraduate Researchers of Color)
Course | January 29 – April 30, 2018 every Monday with exceptions | 6-8 p.m. | 174 Barrows Hall
Speaker/Performer: Istifaa Ahmed, UROOC
Ethnic Studies 98/198
Class Time: Mondays, 6pm-8pm, 1/22/18 - 4/30/18
Course Control Number (CCN): 24251
Units: 1-3 units
Student Instructor: Istifaa Ahmed
Welcome to our student-led organization and DeCal, Underrepresented Researchers of Color (UROC) – Demystifying the Research Process: Decolonizing Methods in Academic Research! We seek to build a community of researchers of color... More >
new art, flag art, good art, portal art
Lecture | April 9 | 6:30-8 p.m. | Berkeley Art Museum and Pacific Film Archive
Speaker/Performer: Ian Cheng, Artist
Sponsor: Arts + Design
Ian Cheng's work explores the nature of mutation and the capacity of humans to relate to change. Drawing on principles of video game design, improvisation, and cognitive science, Cheng has developed "live simulations", living virtual ecosystems that begin with basic programmed properties, but are left to self-evolve without authorial intent or end. His simulations model the dynamics of often... More >
Small-scale Gold Mining and Biocontamination
Seminar | April 10 | Barrows Hall, Radio Broadcast, ON AIR ONLY, 90.7 FM
Speakers/Performers: Jimena Diaz, PhD Student, Department of Environmental Science, Policy, and Management; Mattina Alonge, PhD Student, Department of Integrative Biology
Sponsor: KALX 90.7 FM
Tune in to The Graduates next Tuesday for a rocking interview with Jimena Diaz from the Department of Environmental Science Policy and Management at UC Berkeley. Jimena is an interdisciplinary scientist who combines insights from political ecology and ecology to better understand the complexities of society-nature interactions. In the interview, Jimena tells us all about the ways in which small... More >
Seeing and Listening in the Garden: a Painting and Drawing Workshop
Workshop | April 10 | 10 a.m.-3 p.m. | UC Botanical Garden
In this workshop you will have the opportunity to enhance your senses through looking and listening. Listen to a line of music, draw the bending gesture of a tree. We will walk in the garden to explore color and focused listening in the soundscape of the garden.
Registration required: $100, $90 members
Registration info: Register online or by calling 510-664-9841, or by emailing [email protected]
Science and Literacy Playgroup
Meeting | October 31, 2017 – May 15, 2018 every Tuesday with exceptions | 10:30 a.m.-12:30 p.m. | Berkeley Youth Alternatives (BYA)
Location: 1255 Allston Way, Berkeley, CA 94702
Sponsors: Chancellor's Community Grant, Trybe Inc.
Have fun and meet other families in West and South Berkeley.
For Children ages 05 and their caregivers.
Free, drop-in, snacks, circle time, arts and crafts and science activities.
New insights into acetylation and oncometabolism from chemoproteomics
Seminar | April 10 | 11 a.m.-12 p.m. | 120 Latimer Hall
Featured Speaker: Jordan Meier, National Cancer Institute
A paradox of modern biology is that while metabolism is known to influence epigenetic signals (including, but not limited to histone acetylation), the specific proteins that sense these metabolic cues remain uncharacterized. Here we describe the utility of chemical methods to discover novel epigenetic mechanisms and characterize their metabolic regulation. Our initial studies have led to the... More >
A Farewell to Arms: Broken Hopes and Total Departure from the Homeland, in The Heroic Battle of Aintab
Lecture | April 10 | 12-1:30 p.m. | 270 Stephens Hall
Speaker: Umit Kurt, Polonsky Fellow, The Van Leer Jerusalem Institute
Sponsors: Institute of Slavic, East European, and Eurasian Studies (ISEEES), Armenian Studies Program
Umit Kurt earned his PhD in history at the Strassler Center for Holocaust and Genocide Studies, Clark University in 2016. He is Polonsky Fellow in the Van Leer Institute in Jerusalem. Dr. Kurt is engaged in his work with examining transfer of Armenian wealth, transformation of space, elite-making process, ordinary perpetrators, collective violence, microhistories, inter-ethnic conflicts, Armenian... More >
C. Judson King: Building Research Eminence in the Physical Sciences at Berkeley
Seminar | April 10 | 12-1 p.m. | Dwinelle Hall, Academic Innovation Studio, 117 Dwinelle Hall (Level D)
Speaker: C. Judson King, Former director of the Center for Studies in Higher Education (2004 - 2014) and Provost and Senior Vice President - Academic Affairs of the University of California system (1995-2004)., University of California
Sponsor: Center for Studies in Higher Education
The physical sciences at Berkeley were built to the highest stature in the first half of the twentieth century through an ad-hoc process driven by several key intellectual leaders among the faculty. Some of the most important factors were the strong institutional interests of these
faculty leaders, enablement by the administration, the establishment of the Board of Research, chartering of formal... More >
RSVP recommended
RSVP info: RSVP online
Analyzing European Foreign Policy in a Post-Western World: Operationalizing the Decentring Agenda
Lecture | April 10 | 12-1 p.m. | 201 Moses Hall
Speaker/Performer: Stephan Keukeleire, University of Leuven, Belgium
Sponsor: Institute of European Studies
Building on Chakrabarty's "Provincializing Europe" (2000) and Fisher Onar and Nicolaïdis' "Decentring Agenda" (2013), Stephan Keukeleire presents an analytical framework to operationalize the decentring agenda and support scholars in analysing European foreign policy in an increasingly non-European and post-Western World. The framework consists of six partially overlapping decentring categories... More >
Meeting | February 20, 2018 – January 5, 2021 every Tuesday | 12:15-1 p.m. | 3110 Tang Center, University Health Services
Sponsor: Tang Center (University Health Services)
The Mindfulness Meditation Group meets every Tuesday at 12:15-1:00 pm at 3110 Tang Center on campus. All campus-affiliated people are welcome to join us on a drop-in basis, no registration or meditation experience necessary. We start with a short reading on meditation practice, followed by 30 minutes of silent sitting, and end with a brief discussion period.
Regime Type and Minister Tenure in Africa's Authoritarian Regimes
Colloquium | April 10 | 12:30-2 p.m. | 223 Moses Hall
Speaker: Alex Kroeger, Lecturer, UC Merced Department of Political Science
What explains the wide variation in the tenure of cabinet ministers in authoritarian regimes? While existing research has focused on differences in the tenure of ministers in democracies and dictatorships, I examine the influence of regime type on minister tenure in authoritarian regimes. I argue that authoritarian regime type determines both the level of dismissal risk that ministers face as... More >
Alex Kroeger
Reimagining Morocco's Cultural Heritage for the 21st Century
Lecture | April 10 | 12:30-2 p.m. | 340 Stephens Hall
Speaker/Performer: Ashley Miller, Visiting Scholar, Center for Middle Eastern Studies
Sponsor: Center for Middle Eastern Studies
In July of 2011, King Mohammed VI of Morocco (r.1999-present) endorsed a constitutional referendum that acknowledged his country's plural identities and histories in an unprecedented way, describing a Moroccan national identity "forged through the convergence of its Arab-Islamic, Amazigh, and Saharan-Hassanic components, nourished and enriched by its African, Andalusian, Hebraic, and... More >
Development Lunch: "Determinants of the Cost of Electricity Supply in India"
Seminar | April 10 | 12:30-1:30 p.m. | 648 Evans Hall
Speaker/Performer: Louis Preonas
3-Manifold Seminar: Tait colorings and instanton homology (continued)
Seminar | April 10 | 12:40-2 p.m. | 891 Evans Hall
Speaker: Ian Agol, UC Berkeley
We'll discuss Kronheimer-Mrowka's twisted instanton invariant of webs and foam cobordisms. The rank of this invariant for planar webs gives the number of Tait colorings, but the torsion can contain more information (in particular, admits a spectral sequence to their previous untwisted invariant).
Do Medical Marijuana Laws Harm Youth and Young Adults?
Colloquium | April 10 | 12:40-2 p.m. | 104 Genetics & Plant Biology Building
Speaker: Joanne Spetz, PhD, Professor, UCSF School of Medicine, Institute for Health Policy Studies
Sponsor: Public Health, School of
Medical marijuana laws have been enacted in more than half of U.S. states, and studies have found that they increase the use of illicit marijuana among adults but reduce traffic fatality rates, suggesting there may be both positive and negative consequences. Using repeated-cross section data from the restricted-use version of the National Survey of Drug Use and Health, we delve more deeply into... More >
Race and "Othering: Making torture possible
Panel Discussion | April 10 | 12:45-2 p.m. | 170 Boalt Hall, School of Law
Torture involves a fundamental act of "othering" in order for it to be possible. Who do we torture and why do we torture them? How has torture been mobilized by "benign" states and who do we conceptualize as the architects of torture beyond those in the room with the detainee? What are the ramifications of this legacy for the disparate impact torture has on people of color today?
Joined by... More >
Adaptive Traffic Control Systems
Special Event | April 3 – 12, 2018 every Tuesday & Thursday | 1-4 p.m. | Online
Instructor: Joy Bhattacharya, PE, PTOE, Principal, Stantec
Instructor: Aleksandar Stevanovic, PhD, Assistant Professor, Florida Atlantic University
Sponsor: Technology Transfer Program
This new online course offers summary of fundamental principles, operational requirements and expected benefits of some of the frequently deployed Adaptive Traffic Control Systems. The first session presents differences between adaptive and responsive traffic controls and introduces briefly three ATCS deployed in California (ACS Lite, QuicTrac, and SCOOT). The second session addresses InSync, a... More >
Registration: $145.00 CA Public Agency, $290.00 Standard Fee
Registration info: Register online or by calling 510-643-4393, or by emailing [email protected]
Seminar 211, Economic History: The Captured Economy: How the Powerful Enrich Themselves, Slow Down Growth, and Increase Inequality
Seminar | April 10 | 2-3:30 p.m. | Blum Hall, Plaza Level
Speaker: Brink Lindsey and Steven Teles, Niskanen Center
*Note change in time and location. Joint with Political Economy Seminar
ISF 110 - Free Speech in the Public Sphere: An Interdisciplinary Approach
Course | January 16 – May 3, 2018 every Tuesday & Thursday | 2-3:30 p.m. | 102 Wurster Hall
Sponsor: Division of Undergraduate Education
In this spring 2018 class, we shall take up the nature of public speech from Socrates' public dissent to social media messaging today. The course reading will combine classic philosophical statements about the value of free, subversive and offensive speech; histories of the emergence of public spheres; and sociologies of technologically-mediated speech today.
Seminar 237/281, Macro/International Seminar: Topic Forthcoming
Speaker: Wenxin Du
The Value of Space: Geopolitics, Geography and the Search for International Theory in the United States in the 1950s
Seminar | April 10 | 2-4 p.m. | 223 Moses Hall
Speaker/Performer: Or Rosenboim, Lecturer in Modern History, City, University of London
Sponsors: Institute of International Studies, Department of History
Cognitive Neuroscience Colloquium: Computational dysfunctions in anxiety: Failure to differentiate signal from noise
Colloquium | April 10 | 3:30-5 p.m. | 5101 Tolman Hall
Speaker: Martin Paulus, Scientific Director and President, Laureate Institute for Brain Research
Student Harmonic Analysis and PDE Seminar (HADES): Fourier transforms of measures and distance sets
Seminar | April 10 | 3:30-5 p.m. | 740 Evans Hall
Speaker: James Rowan, UC Berkeley
The Falconer distance problem asks what the smallest Hausdorff dimension of a compact set E in $R^d$ can be such that its distance set D(E) has positive Lebesgue measure. It is conjectured that if dim E is greater than d/2, then dim D(E) is at least 1. We will discuss the relationship between this problem and spherical averages of Fourier transforms of measures and present a result of Wolff that... More >
Beyond Diversity: Building A Culture of Inclusion in STEM Education
Special Event | April 10 | 3:30-5 p.m. | Sutardja Dai Hall, Banatao Auditorium
Featured Speaker: Tracy L. Johnson, Ph.D., Department of Molecular, Cell and Developmental Biology, UCLA
Panelist/Discussant: Paul Barber, Ph.D., Department of Ecology and Evolutionary Biology, UCLA
Moderator: Tyrone B. Hayes, Ph.D., Department of Integrative Biology, UCB
Sponsor: College of Division of Biological Sciences Letters & Science
A lecture and panel discussion (3:30-5:00 pm) followed by a reception (after 5 pm).
The greatest population growth in the US is happening in precisely the populations that remain profoundly underrepresented in the sciences. It is clear that the long-term vitality of the scientific enterprise in the US is dependent on preparing a broader population of young people to be the scientific leaders... More >
Commutative Algebra and Algebraic Geometry: The Fellowship of the Ring: Random monomial ideals and their homological properties
Speaker: Lily Silverstein, UC Davis
Randomness is an important tool in algebra, especially from an algorithmic perspective. I will discuss our recent work looking at the random behavior of monomial ideals. We describe several random models, inspired by earlier models for random graphs and random simplicial complexes, and give results on properties such as Hilbert function and Krull dimension. We also prove "threshold behavior" in... More >
Managing Marginalization: Poverty Politics in Post-Unification Germany
Lecture | April 10 | 4-5 p.m. | 201 Moses Hall
Speaker/Performer: Alexander Graser, University of Regensburg.
Poverty has become an issue in Germany. Whereas the phenomenon has been present for quite a while, public attention has grown only recently. The talk will review and contextualize poverty-related legislation since the early nineties, highlight trends, identify seeming paradoxies, and discuss potential explanations: Among the candidates are exogenous ones like the fashions of policy diffusion or... More >
The Security of the Korean Peninsula after the Olympics: Perspectives on South Korea, North Korea, China Trilateral Relations
Panel Discussion | April 10 | 4-6 p.m. | 180 Doe Library
Speakers: Soojin Park, Wilson Center; Yun Sun, Stimson Center; Mark Tokola, Korea Economic Institute of America
Moderator: T.J. Pempel, Political Science, UC Berkeley
Sponsors: Institute of East Asian Studies (IEAS), Korea Economic Institute of America, Center for Korean Studies (CKS)
The 2018 Winter Olympics presented an opportunity for reduced tensions on the Korean Peninsula, but can it help lead to a better outcome for the North Korea nuclear crisis or is it just a one-off event? At this time of heightened uncertainty in Northeast Asia, please join us for a panel co-sponsored by the Korea Economic Institute of America to discuss the increasingly complex relations among... More >
Popular Neoliberalism: Readers' and Viewers' Reactions to Milton Friedman
Colloquium | April 10 | 4-5:30 p.m. | 2538 Channing (Inst. for the Study of Societal Issues), Wildavsky Conference Room
Speaker: Dr. Maurice Cottier, Visiting Fellow, History Department, Harvard University
Sponsor: Center for Right-Wing Studies
Milton Friedman was not only a leading neoliberal economist in the second half of the 20th century but, due to his popular books and appearances on TV, also a well-known public intellectual. Focusing on the reactions by viewers and readers of his book Capitalism and Freedom (1962) and book and TV series Free to Choose (1980), Maurice Cottier's paper discusses how the broader public received... More >
Design Field Notes: Melissa Cefkin
Seminar | April 10 | 4-5 p.m. | 220 Jacobs Hall
Sponsor: Jacobs Institute for Design Innovation
Melissa Cefkin, a design anthropologist who works as principal scientist for Nissan Research, will speak at Jacobs Hall.
Symmetry, degeneracy, and strong correlation
Seminar | April 10 | 4-5 p.m. | 120 Latimer Hall
Featured Speaker: Gustavo Scuseria, Department of Chemistry, Rice University
Schrodinger's equation has been known for more than 90 years, yet many pressing questions in electronic structure theory remain unanswered. Quantum Chemistry is a successful field: the weak correlation problem has been solved; we can get "the right answer for the right reason" at reasonably low polynomial computational cost instead of the combinatorial expense of brute force approaches. Despite... More >
Seminar 218, Psychology and Economics: "What Do Consumers Consider Before They Choose?"
Featured Speaker: Jason Abaluck, Yale School of Management
Joint with Industrial Organization Seminar. Please note change in time due to joint event.
Seminar 221, Industrial Organization: "What Do Consumers Consider Before They Choose?" (Joint with 218)
Joint with Psychology and Economics Seminar. Please note change in location due to joint event.
Sarah Baker (Berkeley Center for the Study of Religion): How to Sing with Syriac Christians (and Why): Kinship, Politics, Liturgy, and Sound in the Dutch-Syriac Diaspora
Colloquium | April 10 | 5 p.m. | 3401 Dwinelle Hall
Sponsors: Department of Music, Berkeley Center for the Study of Religion
Sarah Baker (Berkeley Center for the Study of Religion)
"How to Sing with Syriac Christians (and Why): Kinship, Politics, Liturgy, and Sound in the Dutch-Syriac Diaspora"
How to Sing with Syriac Christians (and Why): Kinship, Politics, Liturgy, and Sound in the Dutch-Syriac Diaspora
Colloquium | April 10 | 5-7 p.m. | 3401 Dwinelle Hall
Speaker/Performer: Sarah Bakker Kellogg, Hunt Postdoctoral Fellow and Visiting Scholar at the Berkeley Center for the Study of Religion
Sponsor: Berkeley Center for the Study of Religion
To the extent that Middle Eastern Christians register in Euro-American public discourse at all, they are usually invoked either to justify military intervention in the Middle East for the sake of their "religious freedom," or they are cited as potential exemptions to policies intended to restrict asylum-seekers from Muslim-majority countries. This binary frame rests on a wide-spread assumption... More >
FinTech For Good Panel: Digital Financial Inclusion, the business opportunity of serving the underserved in a digital world
Panel Discussion | April 10 | 5-8 p.m. | Haas School of Business
Sponsor: Haas Fin Tech Club
FinTech is disrupting the financial services industry, delivering improved customer experience, faster transactions, and cheaper products to a wider audience. With the application of AI, Big Data, mobile, blockchain, and other technologies; unbanked and underserved customers who previously could not be served profitably can now be valuable new customers in a big emerging market that only in the... More >
Commutative Algebra and Algebraic Geometry: The Fellowship of the Ring: Liaison among curves in $ \mathbf P^3 $
Speaker: Ritvik Ramkumar, UC Berkeley
In this talk we will study the equivalence relation generated by linked curves in $ \mathbf P^3 $. In particular we will define the Rao module and show that (up to shifts and duals) it determines the equivalence class. Time permitting we will study curves that are cut out by three surfaces.
The Persistent Geography of the indio bárbaro: Racial Representation, Racism, and the Mexican Migrant: María Josefina Saldaña-Portillo
Lecture | April 10 | 5:30-8 p.m. | Martin Luther King Jr. Student Union, Multicultural Community Center
Sponsor: Center for Race and Gender
The Persistent Geography of the indio bárbaro: Racial Representation, Racism, and the Mexican Migrant
MARÍA JOSEFINA SALDAÑA-PORTILLO
Visiting Professor, English, UC Berkeley
Professor of Social & Cultural Analysis, New York University
Introduction by Prof. Susan Schweik, Department of English
Pura Lopez Colome with Dan Bellm
Reading - Literary | April 10 | 6:30-7:30 p.m. | Wheeler Hall, Maude Fife Room, 315 Wheeler Hall
Featured Performer: Pura Lopez Colome
Performer: Dan Bellm
Sponsor: Department of English
Poetry Reading by Pura Lopez Colome with American translator and poet Dan Bellm.
Baseball vs. San Francisco
Sport - Intercollegiate - Baseball/Softball | April 10 | 7:05 p.m. | Evans Field
Cal Baseball hosts San Francisco at Evans Diamond.
Cal Night at the SF Giants: vs. Arizona Diamondbacks
Special Event | April 10 | 7:15-10:30 p.m. | AT&T Park
Location: , San Francisco, CA
Sponsor: Cal Alumni Association
The Giants are proud to invite all students, alumni, and fans of UC Berkeley to the fourth annual University of California, Berkeley Night at AT&T Park! Your special event ticket includes a game ticket, as well as a limited-edition Cal-Giants beanie!
Ticket info: Tickets go on sale February 13. Buy tickets online
Change Without a Footprint: A Student's Role in Global Health
Panel Discussion | April 10 | 7:30-9 p.m. | 109 Dwinelle Hall
Speaker/Performer: Sangeeta Tripathi, HEAL Initiative
Sponsor: GlobeMed at Berkeley
Change Without a Footprint: A Student's Role in Global Health is a facilitated discussion on the implications of undergraduates working in global health led by the Director of Operations and Strategy at the HEAL Initiative, Sangeeta Tripathi. All students are welcome!
Sangeeta brings more than a decade of work in global health to the conversation. She has worked on the rapid acceleration of... More >
Performing Arts - Dance | April 10 | 8-10 p.m. | Zellerbach Hall
Under the direction of Robert Battle, Alvin Ailey American Dance Theater continues to nurture a new generation of choreographers steeped in the African-American experience. With repertoire that looks back to seminal works like Ailey's own Revelations, and new material that engages with vital social movements, the company creates dances with the power to transform.
This performance is part of... More >
Tickets required: $36-135 prices subject to change
|
CommonCrawl
|
Cost Functions
Marginal Cost
Average Variable Cost
Average Fixed Cost
DefinitionTypes of Cost FunctionsExample
Home Economics Cost Curves Cost Functions
A cost function is a mathematical relationship between cost and output. It tells how costs change in response to changes in output.
Even though relationship between a firm's costs and output can be studies using cost tables (which show total cost, total variable cost and marginal cost for each unit) or graphs which plot different cost curves, a cost function is the most compact and direct method of encapsulating information about a firm's costs.
Cost functions typically have cost as a dependent variable and output i.e. quantity as an independent variable. Such cost functions do not account for any changes in cost of inputs because they assume fixed input prices.
Types of Cost Functions
Typical cost functions are either linear, quadratic and cubic.
A linear cost function is such that exponent of quantity is 1. It is appropriate only for cost structures in which marginal cost is constant.
A quadratic cost function, on the other hand, has 2 as exponent of output. It represents a cost structure where average variable cost is U-shaped.
A cubic cost function allows for a U-shaped marginal cost curve. The cost function in the example below is a cubic cost function.
Total cost function is the most fundamental output-cost relationship because functions for other costs such as variable cost, average variable cost and marginal cost, etc. can be derived from the total cost function.
Imagine you work at a firm whose total cost (TC) function is as follows:
$$ \text{TC}\ =\ \text{0.1Q}^\text{3}-\ \text{2Q}^\text{2}+\text{60Q}+\text{200}\ $$
Average total cost function can be derived by dividing the total cost function by Q:
$$ \text{ATC}\ =\ \frac{\text{TC}}{\text{Q}}=\text{0.1Q}^\text{2}-\ \text{2Q}+\text{60}+\frac{\text{200}}{\text{Q}}\ $$
The constant value in a total cost function represent the total fixed cost. Function for total variable cost can be arrived at by subtracting the constant value from the total cost function:
$$ \text{VC}=\text{TC}\ -\ \text{FC}\ $$
$$ \text{VC}=\ \text{0.1Q}^\text{3}-\ \text{2Q}^\text{2}+\text{60Q} $$
Average variable cost function equals total variable cost divided by Q:
$$ \text{AVC}=\frac{\text{VC}}{\text{Q}}=\ \text{0.1Q}^\text{2}-\ \text{2Q}+\text{60} $$
Marginal cost equals the slope of the total cost curve which in turn equals the first derivative of the total cost function.
$$ {\text{MC}} _ \text{Q}=\frac{\text{dTC}}{\text{dQ}}\ =\ \text{0.3Q}^\text{2}-\ \text{4Q}+\text{60}\ $$
Cost functions can be used to create cost tables and cost curves. By plugging different quantity levels in the cost functions determined above, we can create a cost table which can be used to plot the cost curves.
The total cost and total variable cost curves represented by functions discussed above give us the following graph:
Since the total cost function is a cubic-function, the average variable cost curve and the marginal cost curve are U-shaped as shown below.
by Obaidullah Jan, ACA, CFA and last modified on Feb 11, 2019
|
CommonCrawl
|
IB Math Stuff
IB Stuff Q&A
New Q&A Site
Ask it Here
Physics Site
+ Algebra
- Hide Algebra
Arithmetic & Geometric Series
+ Functions
- Hide Functions
Composite Function
Concept And Notation
Exponentials & Logarithms
Factor Theorem
Inverse Function
Quadratic Functions
Quadratic Forms
+ Trig
- Hide Trig
Area of Sectors
Circular (Trig) Functions
Oblique Triangles
Right Angle Trig
Right Angle Trig on the Unit Circle
Tangent and Slope of Lines
Trig Identities
+ Matrices
- Hide Matrices
Determinants and Inverses
Matrix Definitions
Matrix Operations
System of Equations
+ Vectors
- Hide Vectors
2D and 3D Lines
Basic Vector Arithmetic
Intro To Vectors
More on Lines
Scalar Product
+ Stats & Probability
- Hide Stats & Prob
Continuous Variables
Discrete Variables
Statistics Representations
StatsVocabulary
Using Normal Distributions
Binomial Distribution
Probability Rules
Representations of Probability
+ Calculus
- Hide Calculus
Derivatives by First Principle
Intro to Derivatives
Derivatives Of More Complex Functions
Derivatives Part 1
Intro To Integration
Indefinite Integrals
Local Max and Min
Tangent and Normals
Volumes of Revolution
Things Seen on Past IB Exams
Find the site useful?
Give us a little social love…
Generating Bitcoin for
You and Me.
Statistics & Probability » Statistics » Continuous Variables
A continuous random variable can take on any real value. This means that even in a finite range of possible values, say from 0 to 1, that there is an infinite number of possible values.
Because there is an infinite range of possible values there is zero probability that a variable takes on an exact value…! Rather we can only describe the probability that the variable falls within a range of values. Yes, this should make your head hurt. The probability of a continuous variable is best described by a probability density function or probability function. An example would be the function shown below.
In general PDF's (for lack of a better acronym) have one or more peaks indicating the most likely values that the variable can take on. The height or value of the function indicates probability (density). The area under the curve represents the actual probability, thus the total area under the curve must be 1. In fancier math classes you may have to force the area to be 1 by "normalizing" the function.
Thus for a given probability density function f(x) the probability that the variable X will be between c and d is given by the integral:
\begin{align} \int_c ^d f(x) dx = P(c\leq X \leq d) \end{align}
The mean or expected value for a PDF is given by:
\begin{align} E(X)=\mu = \int_a ^b x f(x) dx \end{align}
Where a and b are the max and min possible values (which can be infinite).
Many continuous random variables are described by well known probability density functions. One of the most common or at least most important (certainly according to the IB) is the Normal Distribution. The Normal Distribution is symmetric around a mean value and has no limits in terms of max or min values for the variable.
The normal distribution does a good job of modeling things such as heights, weights, IB test scores, etc. With the notable exception that there is a minimum height and often a maximum score on a test… Whereas the normal distribution has no such limitations.
+ Show Connection of Normal Distribution to Binomial Distribution
- Hide Plinko
The simulation below suggests the connection between the Binomial Distribution and the normal distribution. As the number of rows increases the distribution of the balls in the bins below becomes more and more continuous. Choose the "continuous" option at the top right and press start. It's faster.
A normal distribution is given by the equation:
\begin{align} f(x)=\frac{1}{ \sigma \sqrt{2 \pi}} e^{-\frac{1}{2} \left ( \frac{x-\mu}{\sigma} \right ) ^2} \end{align}
Where $\mu$ is the mean of the distribution and $\sigma ^2$ is the variance thus making $\sigma$ the standard deviation. The value of the mean simply shifts where the peak is in the distribution and the standard deviation affects the spread or how wide the distribution is. The smaller the $\sigma$ the taller and narrower the peak.
Properties of the Normal Distribution
A normal distribution takes on a "bell curve" shape as shown in the graph at the top of the page and is symmetrical around the mean value. Its worth noting that the value of the probability density function at the mean is:
\begin{align} f(x=\mu)=\frac{1}{ \sigma \sqrt{2 \pi}} \end{align}
Thus the coordinates of peak of a normal distribution is $(\mu, \frac{1}{ \sigma \sqrt{2 \pi}})$. I would guess that info could come in handy.
The standard deviation also has a special role geometrically. It turns out that if you differentiate the probability density function twice and set it equal to zero, thus finding the inflection points, that the inflection points occurs when:
\begin{align} x=\mu \pm \sigma \end{align}
In other words, the $\sigma$ is the horizontal distance from the mean of the distribution to the inflection points!
People often refer to "how many sigma's" a value is from the mean. The diagram below shows the distribution of probability as a function of "sigma's." Notice that approximately 68% of the values fall within one sigma of the mean. While roughly only 4% fall more than 2 sigma's from the mean!
Want to add to or make a comment on these notes? Do it below.
page revision: 15, last edited: 26 Feb 2013 10:18
Edit Tags Discuss (0) History Files Print Site tools + Options
|
CommonCrawl
|
theorems:goldstones_theorem
====== Goldstone's theorem ====== <tabbox Intuitive> * For an intuitive explanation of Goldstone's theorem, see [[http://jakobschwichtenberg.com/understanding-goldstones-theorem-intuitively/|Understanding Goldstone's theorem intuitively]] by J. Schwichtenberg <tabbox Concrete> <blockquote> Long after phonons were understood, Jeffrey Goldstone started to think about broken symmetries and order parameters in the abstract. He found a rather general argument that, whenever a continuous symmetry (rotations, translations, SU(3), ...) is broken, long–wavelength modulations in the symmetry direction should have low frequencies. The fact that the lowest energy state has a broken symmetry means that the system is stiff: modulating the order parameter will cost an energy rather like that in equation 2. In crystals, the broken translational order introduces a rigidity to shear deformations, and low frequency phonons (figure 8). In magnets, the broken rotational symmetry leads to a magnetic stiff- ness and spin waves (figure 9a). In nematic liquid crystals, the broken rotational symmetry introduces an orientational elastic stiffness (it pours, but resists bending!) and rotational waves (figure 9b). In superfluids, the broken gauge symmetry leads to a stiffness which results in the superfluidity. Superfluidity and superconductivity really aren't any more amazing than the rigidity of solids. Isn't it amazing that chairs are rigid? Push on a few atoms on one side, and 109 atoms away atoms will move in lock–step. In the same way, decreasing the flow in a superfluid must involve a cooperative change in a macroscopic number of atoms, and thus never happens spontaneously any more than two parts of the chair ever drift apart. The low–frequency Goldstone modes in superfluids are heat waves! (Don't be jealous: liquid helium has rather cold heat waves.) This is often called second sound, but is really a periodic modulation of the temperature which passes through the material like sound does through a metal. O.K., now we're getting the idea. Just to round things out, what about superconductors? They've got a broken gauge symmetry, and have a stiffness to decays in the superconducting current. What is the low energy excitation? It doesn't have one. But what about Goldstone's theorem? **Well, you know about physicists and theorems** . . . That's actually quite unfair: Goldstone surely had conditions on his theorem which excluded superconductors. Actually, I believe Goldstone was studying superconductors when he came up with his theorem. It's just that everybody forgot the extra conditions, and just remembered that you always got a low frequency mode when you broke a continuous symmetry. We of course understood all along why there isn't a Goldstone mode for superconductors: it's related to the Meissner effect. The high energy physicists forgot, though, and had to rediscover it for themselves. **Now we all call the loophole in Goldstone's theorem the Higgs mechanism, because (to be truthful) Higgs and his high–energy friends found a much simpler and more elegant explanation than we had.** (11In condensed-matter language, the Goldstone mode produces a chargedensity wave, whose electric fields are independent of wavelength. This gives it a finite frequency (the plasma frequency) even at long wavelength. In high-energy language the photon eats the Goldstone boson, and gains a mass. The Meissner effect is related to the gap in the order parameter fluctuations (~ times the plasma frequency), which the high-energy physicists call the mass of the Higgs boson.) <cite>https://arxiv.org/pdf/cond-mat/9204009.pdf and http://pages.physics.cornell.edu/~sethna/StatMech/EntropyOrderParametersComplexity.pdf</cite> </blockquote> <blockquote> Goldstone's theorem shows that the existence of an observable with a nonvanishing vacuum expectation value implies the existence of states whose energy goes to zero as the momentum does; that is, as E(p) r 0 p r 0. In relativistic field theory, this implies the existence of massless particles since . **For an intuitive picture**, imagine ap- 2 24 22 E p m c p c plying the operator corresponding to the broken symmetry to the vac- Qˆ uum state F0S. The result would be a distinct vacuum state, but with the same energy since commutes with the Hamiltonian. Now consider in- Qˆ stead the operator defined over some finite region V; the states Qˆ V should have the same energy as F0S except for boundary terms. But Qˆ VF0S since this operator implements a continuous symmetry, the region V can be smoothly deformed so that the boundary terms vanish as , which V r 0 implies that the energy of the state must go to zero for short wave- Qˆ VF0S lengths. To make this (slightly) more rigorous, consider an observable whose commutator with has a nonzero vacuum expectation value, A Q ˆ ˆ . 6 Rewriting as an integral of the charge ˆˆ ˆ limVr A0F[QV V , A]F0S p c ( 0 Q density, we have . Assuming that the cur- 3 0ˆ ˆ limVr ∫V d xA0F[ j (x), A]F0S p c rent is conserved, if the boundary terms vanish then this integral will be time invariant. Manipulation of this expression shows that massive particles would, however, lead to explicit time dependence. For the left-hand side to be nonzero, there must be states FnS such that , with ˆ0 A0Fj FnS ( 0 vanishing spatial momenta; these states are the massless Goldstone modes.7 <cite>http://www.jstor.org/stable/pdf/10.1086/518324.pdf</cite> </blockquote> ---- **Examples** --> Landau phonons in Bose-Einstein condensates# "The Bose-Einstein condensation is characterized by the breaking of a global U(1) gauge group (acting on the Bose particle field as the U(1) group of Example 1), as very clearly displayed by the free Bose gas.5 The U(1) breaking leads to the existence of Goldstone modes, the so-called Landau phonons, and the existence of such excitations may in turn indicate the presence of a broken U(1) symmetry" [[https://arxiv.org/pdf/1502.06540.pdf |Source]] <-- ---- * For a nice summary see http://pages.physics.cornell.edu/~ajd268/Notes/GoldstoneBosons.pdf <tabbox Abstract> <blockquote> It was known from perturbative investigations of self-interacting scalar fields by Goldstone that the local current conservation may lead to a divergent global charge resulting from the contribution of a massless scalar ("Goldstone") boson which impedes the large distance convergence and in this way causes a situation which was appropriately referred to as spontaneous symmetry breaking (SSB). Kastler, Swieca and Robinson showed that this cannot happen in the presence of a mass gap [12], and in a follow up paper (based on the use of the Jost-Lehmann-Dyson representation) Swieca together with Ezawa [13] succeeded to prove the Goldstone theorem in a model- and perturbation- independent way. The Goldstone theorem states that a Noether symmetry in QFT is spontaneously broken precisely if a massless scalar "Goldstone boson" prevents the convergence of some of the global charge $Q= \int j_0 = \infty.$ This quasiclassical prescription leads to a model-defining first order interaction density which maintains the conservation of the symmetry currents in all orders. There are symmetry-representing unitary operators for each finite spacetime region O but the global charges $Q= \int j_0$ of same symmetry generating currents diverge. This is the definition of SSB whereas the shift in field space procedure is a way to prepare such a situation whenever SSB is possible. For the later presentation of the Higgs model it is important to be aware of a fine point about SSB whose nonobservance led to a still lingering confusion. As soon as scalar self-interacting fields are coupled to s = 1 potentials the physical interpretation of the field shift manipulation on a Mexican hat potential as a SSB is incorrect; one obtains the Higgs model for the wrong physical reasons and misses the correct reasons why there can be no self-interacting massive vectormesons without the presence of a H-field. Although this can be described correctly in the gauge theoretic formulation, a better understanding is obtained in the positivity preserving string-local setting of LQP (see section 6) <cite>https://arxiv.org/pdf/1612.00003.pdf</cite></blockquote> <tabbox Why is it interesting?> <blockquote> Goldstone's theorem states that whenever a continuous global symmetry is spontaneously broken, there exists a massless excitation about the spontaneously broken vacuum. Decomposing $\Phi(x)=|\Phi(x) |e^{i\rho(x)}$, $\rho$ transforms as $\rho(x) \to \rho(x) + \theta$. Hence the Lagrangian can depend on $\rho $ only via the derivative = $\partial_\mu \rho$; there cannot be any mass term for $\rho$, and it is a massless field. $\rho$ --- identified as the field which transforms inhomogeneously under the broken symmetry --- is referred to as the Goldstone boson. <cite>https://arxiv.org/pdf/1703.05448.pdf</cite> </blockquote> </tabbox>
Please solve the following equation to prove you're human. 205+2 = Please keep this field empty:
theorems/goldstones_theorem.txt · Last modified: 2018/05/15 05:00 by jakobadmin
|
CommonCrawl
|
Boundary-Layer Meteorology
October 2013 , Volume 149, Issue 1, pp 85–103 | Cite as
The Effect of Wind-Turbine Wakes on Summertime US Midwest Atmospheric Wind Profiles as Observed with Ground-Based Doppler Lidar
Michael E. Rhodes
Julie K. Lundquist
We examine the influence of a modern multi-megawatt wind turbine on wind and turbulence profiles three rotor diameters (\(D\)) downwind of the turbine. Light detection and ranging (lidar) wind-profile observations were collected during summer 2011 in an operating wind farm in central Iowa at 20-m vertical intervals from 40 to 220 m above the surface. After a calibration period during which two lidars were operated next to each other, one lidar was located approximately \(2D\) directly south of a wind turbine; the other lidar was moved approximately \(3D\) north of the same wind turbine. Data from the two lidars during southerly flow conditions enabled the simultaneous capture of inflow and wake conditions. The inflow wind and turbulence profiles exhibit strong variability with atmospheric stability: daytime profiles are well-mixed with little shear and strong turbulence, while nighttime profiles exhibit minimal turbulence and considerable shear across the rotor disk region and above. Consistent with the observations available from other studies and with wind-tunnel and large-eddy simulation studies, measurable reductions in wake wind-speeds occur at heights spanning the wind turbine rotor (43–117 m), and turbulent quantities increase in the wake. In generalizing these results as a function of inflow wind speed, we find the wind-speed deficit in the wake is largest at hub height or just above, and the maximum deficit occurs when wind speeds are below the rated speed for the turbine. Similarly, the maximum enhancement of turbulence kinetic energy and turbulence intensity occurs at hub height, although observations at the top of the rotor disk do not allow assessment of turbulence in that region. The wind shear below turbine hub height (quantified here with the power-law coefficient) is found to be a useful parameter to identify whether a downwind lidar observes turbine wake or free-flow conditions. These field observations provide data for validating turbine-wake models and wind-tunnel observations, and for guiding assessments of the impacts of wakes on surface turbulent fluxes or surface temperatures downwind of turbines.
Diurnal cycle Turbine wakes Wind energy Wind profiles Lidar
A global transition to renewable energy sources is possible due to abundant renewable resources and technology (Jacobson and Delucchi 2011). Wind energy is a leading renewable energy source for the United States (Milligan et al. 2009), partially due to the large wind resource that occurs in the US Midwest. The Midwest also serves as the hub of US agriculture (USDA 2012), so any potential impacts of wind turbines on agriculture could have significant economic effects. In particular, concern exists that increased turbulence in turbine wakes may alter surface temperatures (Zhou et al. 2012) or fluxes (Baidya Roy 2011) downwind. Additionally, turbulent wakes affect the energy production of turbines located downwind of other turbines (Barthelmie et al. 2007, among others). However, it is not yet known whether wind-turbine wakes have a beneficial or detrimental impact on crop growth (Rajewski et al. 2013) primarily due to the lack of detailed observations of the atmosphere and of the surface exchanges of heat, momentum, moisture, and carbon dioxide upwind and downwind of operational turbines. Such observations are essential to determine wake impact on the local environment.
A wind farm or wind plant, most commonly used for utility-scale applications, is a group of tens or hundreds of individual horizontal-axis wind turbines (typically of capacity 1.5 MW or greater) installed over an area on the order of many square kilometres. Typically, two to four turbines are found per \(\text{ km }^{2}\) in modern wind plants. Each turbine in the wind farm has blades that produce electricity by converting horizontal momentum in the airflow into rotation of a generator. The area swept by the turbine blades is referred to here as the rotor disk. A region of reduced wind speeds and increased turbulence, called the wake, exists downwind of each turbine as a result of the interaction between the flow and rotor. Wakes from upwind turbines are responsible for decreased power output and increased turbine loading at downwind turbines in the wind farm (Frandsen 2007; Barthelmie et al. 2010, among others). Because turbine wakes have removed momentum from the flow, less momentum is available for the downwind turbines to extract.
Wind-turbine wakes induce atmospheric changes over a range of scales as documented through wind-tunnel studies, computer flow simulations, and field observations. Previous observational campaigns on turbines observe that downwind of a turbine there is a discernible wake region characterized by reduced horizontal wind speed and increased turbulence (Baker and Walker 1984; Hogström et al. 1988; Magnusson and Smedman 1994; Käsler et al. 2010; Trujillo et al. 2011; Iungo et al. 2013; Smalikho et al. 2013). Chamorro and Porté-Agel (2010) used a wind tunnel to determine the effects of a model wind turbine on fluid flow. Measuring mean and turbulence values of the flow under neutral conditions, they found signatures of turbine wakes at distances up to \(20D\) downwind in the wind tunnel. Furthermore, they found the wake momentum deficit to be axially symmetric while wake-driven turbulence characteristics were concentrated above hub height. Wu and Porté-Agel (2011) compared these wind-tunnel simulations to neutrally stratified large-eddy simulations (LES) to identify optimal approaches for representing wind turbines in LES. While both LES and wind-tunnel studies provide insights into what might be expected from field campaigns, only field campaigns are able to capture real interactions between wind farms and atmosphere dynamics. Of course, test conditions such as wind speed and wind direction may not be controlled during a field campaign while they can be specified in a wind tunnel or in computational flow simulation.
Turbine-wake measurement field campaigns often involve both in situ and remote sensing instrumentation. Standard atmospheric wind measurements are made with in situ measurements such as cup or sonic anemometers, which must be mounted on towers that can interfere with the measurements. Additionally, towers pose logistical problems due to the high measurement heights required to sample turbine wakes. Tethered kites and remotely piloted vehicles outfitted with onboard hotwire anemometers (Baker and Walker 1984; Hogström et al. 1988; Frehlich et al. 2003; Kocer et al. 2011) observe details of turbulence within wakes at multiple locations on the scale of seconds to minutes without the flow disruption of towers. Remote sensing technology, including acoustic, microwave, and laser systems, have the ability to observe turbine wakes at multiple locations. Lidar systems offer the ability to measure winds high above the ground and over long distances without having to erect a large meteorological tower at each measurement location.
In the present study, we aggregate over 100 h of lidar inflow and wake observations to document turbine mean and turbulent wake characteristics for a range of inflow wind speeds. Our dataset includes several cases of wakes occurring during nocturnal low-level jet conditions (Blackadar 1957; Whiteman et al. 1997; Banta et al. 2002). Section 2 qualifies the ability of this type of lidar to observe inhomogeneous flow such as wind-turbine wakes, and describes the observational dataset. Section 3 describes the undisturbed, or "inflow" wind and turbulence profiles to quantify the Midwest atmospheric boundary layer. In Sect. 4, we summarize the dependence of wake characteristics on inflow wind speed, and Sect. 5 highlights a case study of wake variability during nocturnal stable conditions; we emphasize this case because plentiful wind resources in the Midwest arise from the nocturnal low-level jet that occurs during such conditions. In Sect. 6 we compare the present results with those obtained in previous field studies, wind-tunnel studies, and simulations, and we suggest strategies for future lidar investigations of wind-turbine wakes.
2 Observational Dataset
As part of the Crop/Wind-energy EXperiment 2011 (CWEX-11) (Rajewski et al. 2013), two vertically profiling Doppler wind lidars (Windcube V1, described in Courtney et al. 2008) were deployed within an operating wind farm in the agricultural fields of central Iowa, USA (Fig. 1). Historical data indicate that this region often experiences strong southerly winds (Fig. 2), and so the lidars were sited north and south of a turbine to intentionally sample turbine inflow and wakes during southerly flow. Except during a brief intercomparison period, one lidar (CU1) was located approximately 165 m south (\(2.2D\)) of a row of six modern multi-megawatt wind turbine generators (WTG) placed in a line running from west to east; the second lidar (CU2) was located 250 m north (\(3.4D\)) of the WTG row. In addition to the lidars, other equipment interrogated the effects of turbine wakes on the agricultural crops in the vicinity, including an array of two surface-flux stations south and north of the wind-turbine row, and an Integrated Surface Flux System (ISFS) south of the turbine row and an additional three ISFSs north of the turbines (NCAR ISFS 2012). Surface-flux data were recorded for the duration of the lidar operational period; these data are discussed in Rajewski et al. (2013). To focus on the turbine wakes specifically, only the lidar data are discussed here; future work will explore the impact of the wakes on surface quantities.
A diagram of the field site shows an east–west row of wind turbines (B1–B6) indicated by circles. Square markers show the locations of each lidar system; triangles indicate the locations of NCAR surface-flux stations
A wind rose from the southern lidar measurements at 80 m a.g.l. indicates a predominantly southerly flow. Data from the entire observational period are included
The WTG observed in this study is a GE 1.5 SLE, which has an 80-m hub height and 74-m rotor diameter extending from 43 to 117 m above ground level (a.g.l.). The turbine begins to rotate at a cut-in speed of \(3\,\text{ m } \text{ s }^{-1}\), below which no power is produced. Electrical power production reaches a maximum at \(14\,\text{ m } \text{ s }^{-1}\), the rated speed for the turbines. At speeds \(>\) the rated speed, power production remains constant with increasing wind speed. At the cut-out speed of \(25\,\text{ m } \text{ s }^{-1}\), the turbine ceases rotation.
The row of WTGs pictured in Fig. 1 is located at the southern end of a larger utility-sized wind farm at an elevation of approximately 335 m above sea level. The landscape surrounding the study site consists of soybean and corn agricultural fields, with corn as the primary crop surrounding the lidars and WTGs. Small farms and homesteads interrupt the upwind fetch, with the closest homesteads approximately 600 m to the north-west and south–south-east from the lidars. A few metres north of the turbines and running parallel to the row of wind turbines, a 10-m-wide gravel access road connects the WTGs. The lidar observational period began on 30 June and concluded on 16 August 2011. Approximate sunrise occurred between 0430 and 0515 local standard time (LST) while sunset ranged from 1915 to 2000 LST.
The lidar system records the radial velocity of boundary-layer aerosols at a half opening angle \(\phi \) (approximately 30\(^{\circ }\) from vertical) in each of the four cardinal directions once per second. Line-of-sight velocities at each of the lidar's ten range gates are converted to zonal, meridional, and vertical wind speeds at each height assuming flow homogeneity throughout the volume scanned by the lidar. (The impact of flow inhomogeneities on the measurements is discussed in Sect. 2.1.) The lidars record horizontal and vertical components of wind speed every second for each specified height; these components are then averaged for a 2-min period to quantify horizontal and vertical components of the flow and the variances of those quantities. Any data that do not meet the carrier-to-noise ratio threshold of \(-\)22 dB are omitted from the recorded 2-min average. Wind shear, directional shear, horizontal turbulence intensity, vertical turbulence intensity, and a form of turbulent kinetic energy are calculated based on the 2-min data from the measured wind-speed components, wind direction, wind-speed variance, and measurement height. Periods of precipitation (as measured at the local ISFS stations) are omitted due to potential lidar signal contamination (Aitken et al. 2012).
2.1 Lidar Observations of Inhomogeneous Flow
In the CWEX domain, the lidar observations of flow into the turbine can be assumed to be homogeneous across the measurement volume. However, observations in the wake region likely incorporate inhomogeneous flow, and so the uncertainty of lidar velocity measurements in such flow must be assessed. The size of the volume sampled by the lidar varies with the height \(h\) of the measurement. For the Windcube v1 in the present campaign, \(h\) ranged from 40 to 220 m a.g.l. With a half-opening angle \(\phi \) of approximately 30\(^{\circ }\), the line-of-sight measurements at height \(h\) are collected over a horizontal extent of \(2h\) sin \(\phi \) (or approximately \(h\)). The effective probe length of the Windcube v1 is 18 m. Therefore, at an altitude of 40 m, the Windcube velocity measurements collected over a 4-s period represent a volume 40 m in the horizontal, 18 m in the vertical, and centered at 40 m above the surface. Similarly, at an altitude of 100 m, the Windcube velocity measurements collected over a 4-s period represent a volume 100 m in the horizontal, 18 m in the vertical, centered at 100 m above the surface.
At hub height (here, 80 m elevation), a turbine wake has a horizontal extent or width the size of the rotor disk (approximated here as 80 m); at either 60 or 100 m a.g.l., the wake width would be approximately 70 m. Very few measurements of wake expansion have been documented, but in the offshore wind farm Horns Rev, wakes have been observed to expand by 5–10\(^{\circ }\) as they move downwind (Barthelmie et al. 2010). With a 5\(^{\circ }\) wake expansion, the wake region at 80 m a.g.l. would expand to approximately 124 m wide by the time the wake reaches the downwind lidar located at \(3.4D\) (250 m) downwind, so that the wake will encompass the entire Windcube sampling volume at that elevation. Similarly, the wake at 100 m above the surface would expand to 114 m width at a location \(3.4D\) downwind of the turbine, and so the wake would again envelop the Windcube sampling volume. Therefore, it is reasonable to assume that for southerly flow, the downwind lidar samples turbine wake at all altitudes 100 m and below. The measurement volume of the downwind lidar may exceed that of the wake itself at 120 m a.g.l., and so information on the transverse component of the flow in the wake at the top of the rotor disk cannot be collected with certainty using the experimental design here. However, the estimates of the streamwise velocity would be collected within the wake. (We are hopeful that estimates of wake expansion will become more precise based on observational studies in the future, particularly studies using scanning Doppler lidar (Käsler et al. 2010; Iungo et al. 2013; Smalikho et al. 2013) or radar (Hirth and Schroeder 2013).)
It is useful to quantify wake characteristics throughout the rest of the rotor disk, 100 m and below, recognizing that the measurements are likely sampling inhomogeneous flow. Bingöl et al. (2008) quantified lidar measurement uncertainty due to inhomogeneous flow for the special case of the mean velocity field varying linearly across the measurement volume defined by the half-opening angle \(\phi \) (nominally 30\(^{\circ }\) for the Windcube v1). They found the lidar measurements of the zonal (\(u\)), meridional (\(v\)) and vertical (\(w\)) velocity components at measurement altitude \(h\) have an uncertainty given by a function only of the variation of the vertical velocity \(w\) as it varies in the zonal (\(x\)), meridional (\(y\)), and vertical (\(z\)) components:
$$\begin{aligned} u_\mathrm{lidar} (h)&= u(h)+h\frac{\partial w(h)}{\partial x},\end{aligned}$$
$$\begin{aligned} v_\mathrm{lidar} (h)&= v(h)+h \frac{\partial w(h)}{\partial y},\end{aligned}$$
$$\begin{aligned} w_\mathrm{lidar} (h)&= w(h)-\frac{h}{2\cos \left( \varphi \right) }\text{ tan }^{2}(\varphi )\frac{\partial w(h)}{\partial z}. \end{aligned}$$
In inhomogeneous flow, then, the uncertainty of velocity measurements is a function of the variation in vertical velocity across the horizontal extent (\(x\) or \(y\)) of the measurement volume. Note that, although variations in the horizontal velocities are permitted in this model, by continuity those terms may be replaced with variations in the vertical velocity as shown in Bingöl et al. (2008). The horizontal extent (\(x\) or \(y\)) of the measurement volume is, by virtue of \(\varphi \approx 30^{\circ }\), approximately \(h\), simplifying (1)–(3) after discretization of the partial derivatives to
$$\begin{aligned} u_\mathrm{lidar} (h)&= u(h)+\Delta w(h),\end{aligned}$$
$$\begin{aligned} v_\mathrm{lidar} (h)&= v(h)+\Delta w(h),\end{aligned}$$
$$\begin{aligned} w_\mathrm{lidar} (h)&= w(h)-\frac{h}{3\surd 3}\frac{\Delta w(h)}{\Delta z}, \end{aligned}$$
where \(\Delta z\) is the effective probe length of the Windcube, 18 m (Courtney et al. 2008). Therefore, uncertainties in the lidar estimates of the horizontal velocity components are on the order of the vertical velocity variation within the wake. Uncertainties in the vertical velocity measurements are a function of the magnitude of the vertical velocity and \(h/(3\surd 3 \Delta z)\) where the denominator \({\approx }94\,\text{ m }\). To quantify uncertainty of lidar measurements in inhomogeneous flow, then, it is important to quantify the variation of vertical velocity in a wake.
Using two Doppler lidars, Iungo et al. (2013) report observations of the horizontal and vertical velocity components in the wake of a 2.3 MW turbine, which is larger than the turbine studied here. They find "the mean vertical velocity is shown to be roughly negligible for all the tested downstream locations," consistent with the modeling studies of Porte-Agel et al. (2011) and Wu and Porté-Agel (2011). Close inspection of the figures of Iungo et al. (2013) suggests a variation of vertical velocity in the wake of less than \(0.7\,\text{ m } \text{ s }^{-1}\). Therefore, according to Eqs. 4 and 5, we expect an error of \({<}1\,\text{ m } \text{ s }^{-1}\) in the horizontal wind-speed measurements within the wake reported herein, for measurements at heights 100 m and below. More detailed quantification of lidar uncertainty, perhaps using detailed computational fluid dynamics simulations, may be possible.
2.2 Lidar Intercomparison
The two lidars were co-located during the first two days of the 2011 observational campaign to quantify any bias in wind-speed measurement between the two lidar units. Both units were sited at the CU1 site (Fig. 1) separated by approximately 3 m. During the intercomparison, wind speeds were less than \(20\,\text{ m } \text{ s }^{-1}\) and wind direction was mostly from the south–south-west, unaffected by any turbines. 10-min averaged wind speeds from both units compare well (Fig. 3): the slope is near unity and the y-intercept of the ordinary least-squares best fit is close to zero. A high coefficient of determination, 0.998, shows a strong correlation between the data from each lidar. \(R^{2}\) values between the lidar units exceeded 0.997 when comparing wind speed and direction at all heights within the rotor disk. Both wind-speed and wind-direction data are highly correlated between the two lidars at all heights during the intercomparison. We conclude there was no detectable bias in wind speed between the two instruments. After the intercomparison period, the lidars were located at their respective locations shown in Fig. 1 to capture both upwind and wind-turbine wake data.
Hub-height comparison of 10-min average horizontal wind speed for the two lidar units at the upwind site during the 2-day intercomparison period
2.3 Wake Definition
During the CWEX observational campaign, wind direction was frequently (but not always) from the south (Fig. 2). Using the hub-height wind speed and wind direction as measured by the upwind lidar (CU1), we define periods when the northerly lidar (CU2) likely sampled the turbine wake. Requirements included a CU1 wind speed \({>}3\,\text{ m } \text{ s }^{-1}\) (to ensure turbine operation). A wind direction requirement based on the wake expansion of 5–10\(^{\circ }\) observed by Barthelmie et al. (2010) was calculated: wind directions between \(167^{\circ }\) and \(195^{\circ }\) should place a wake from turbine B3 (Fig. 1) directly over the lidar at CU2 so that all four beams from the lidar sample wake. Other wind directions produce wakes from other turbines; the analysis here incorporates only wakes from turbine B3, located directly between the two lidars.
In excess of 4,000 10-min time periods are available for analysis; more than 600 time periods (6,000 min) meet the criteria based on wind speed, wind direction, and precipitation for a wind-turbine wake detected by the lidar. Approximately half of the data presented here (55 h) were collected between the hours of 0700 and 1900 LST, providing an even distribution of daytime and nighttime conditions.
2.4 Quantities Observed
From the 2-min estimates of the two horizontal components (\(u, v\)) and the vertical component (\(w\)) of wind velocity, as well as the variances of these quantities over the 2-min period, several useful quantities can be calculated. 10-min averages of these quantities are presented in the figures below.
Previous investigations have observed enhanced turbulence intensity in the wake. Turbulence intensity is calculated from the variances (\(\sigma ^{2}\)) of the \(u\) and \(v\) components of the flow as in Eq. 7,
$$\begin{aligned} I=\frac{\sqrt{\sigma _u^2 +\sigma _v^2 }}{U}, \end{aligned}$$
where \(U\) is the mean horizontal wind speed (Stull 1988) at the level at which the velocities are observed. (Note that some investigators focusing on wind-tunnel studies, such as Chamorro and Porté-Agel (2009), normalize turbulence intensity with hub-height wind speed rather than the wind speed at the altitude of the measurement.) In the wind energy industry, turbulence intensity is usually calculated over a 10-min period, although it is likely that other averaging times are more appropriate for capturing all the energetic length scales of turbulent fluctuations (Mahrt 1998).
These variances may also be used to calculate a quantity approximating turbulent kinetic energy (TKE), a measure of turbulence in the atmosphere (Eq. 8). TKE incorporates both horizontal and the vertical components of flow variability (Stull 1988), and so an estimate of lidar TKE (here \(E)\) can be calculated as,
$$\begin{aligned} E=0.5\left( {\sigma _u^2 +\sigma _v^2 +\sigma _w^2 } \right) . \end{aligned}$$
In a detailed comparison of tower-based sonic anemometry to ground-based lidar, Sathe et al. (2011) suggest that ground-based lidar systems fail to accurately calculate horizontal and vertical variances as compared to sonic anemometry. Without tower data for comparison here, we do not suggest that the lidar can directly calculate TKE as it would be measured by a sonic anemometer. Rather, we provide a comparison between two similar lidar systems as they quantify fluctuations in the wind as represented by \(E\).
The wind profile power law (Eq. 9) compares wind-speed measurements between two heights \(z_{i}\) and \(z_{i+1}\). The power law is used in the wind energy industry not because it is an accurate portrayal of the true wind profile, but because the resulting coefficient \(\alpha \) captures information about wind shear that is easily comparable between multiple geographic locations (Schwartz and Elliott 2006),
$$\begin{aligned} \frac{U_{i+1} }{U_i }=\left( {\frac{z_{i+1} }{z_i }} \right) ^{\alpha }. \end{aligned}$$
In neutral stability, \(\alpha = 1/7\) is commonly used for approximations (Brower 2012). This assumption does not hold true for strongly stable or strongly unstable atmospheric conditions or under high wind shear conditions (Walter et al. 2009; Wharton and Lundquist 2012a, b, among others). Large positive values of \(\alpha \) indicate a large increase of wind speed with height; negative values indicate a decrease of wind speed with height. Changes of wind direction with height are not captured by the power-law coefficient. As discussed in the references above, the use of \(\alpha \) to quantify wind shear is not optimal, as it implicitly assumes a logarithmic wind profile that may only be expected in the surface layer or under neutral stratification. Herein it is used only to facilitate comparisons with previous work. Wherever possible, we recommend quantification of the complete wind-speed and wind-direction profiles rather than analysis of only a power-law coefficient \(\alpha \) calculated between two levels.
3 Atmospheric Boundary-Layer Properties as Observed with Lidar
Before examining the properties of the waked or disturbed wind profile, it is useful to characterize the upwind or unwaked wind profile to assess averaged atmospheric boundary-layer characteristics of the summertime Midwest boundary layer. We focus on time periods with southerly flow (wind directions between \(150^{\circ }\) and \(210^{\circ }\)); this span of wind directions is slightly larger than that used by the wake characterization study in Sect. 4. Because of data loss due to variations in power-generator performance, precipitation events, wind-direction shifts, and air quality, data availability varies with time of day and of height (Fig. 4a): however, at least 30 data points contribute to information at each height and time of day, with over 50 data points in many cases.
10-min averages of a data availability, b median wind speed, c mean wind direction, d turbulence intensity, e power-law coefficient, and f wind-direction difference for times with hub-height wind directions ranging between \(150^{\circ }\) and \(210^{\circ }\)
An increase of wind speed with height at all times of day is observed (Fig. 4b), as would be expected near the surface. Between 0800 and 1600 local time, the increase of wind speed with height is subtle, between 2 and \(3\,\text{ m } \text{ s }^{-1}\) between 40 and 220 m above the surface: convective mixing during the daytime ensures only slight variations of wind speed with height. During nocturnal conditions, on the other hand, strong variations of wind speed with height occur, with wind-speed differences up to \(9\,\text{ m } \text{ s }^{-1}\) between 40 and 220 m above the surface. The nocturnal wind-speed profiles are consistent with those expected when nocturnal low-level jets are present (Banta et al. 2002, among others). Similarly, the wind direction (Fig. 4c) exhibits subtle variations (approximately 5\(^{\circ }\)) during well-mixed daytime conditions. However, the nocturnal decoupling of the boundary layer is accompanied by strong wind-direction shifts in excess of 35\(^{\circ }\) between the top and bottom heights sampled by the lidar.
Turbulence intensity, as measured by the upwind lidar, exhibits a classic pattern of strong turbulence during the day and reduced turbulence at night (Fig. 4d). Note that because wind speeds are reduced during the day (Fig. 4b), the turbulence intensity is enhanced during the day. When \(E\) is considered, daytime values also exceed nocturnal values (figure not shown). This regular diurnal pattern is also evident in the time series of the change of wind direction with height (Fig. 4e), presented here as calculated across the turbine rotor disk (between 40 and 120 m). These data are presented for the contrast with observations of minimal veer documented in northern European sites (Cariou et al. 2010). The lidar data suggest strong daytime convective turbulence couples flow at lower levels with that at upper levels, so little variation in wind speed or direction occurs during the day. Nighttime conditions, with minimal turbulence and no mixing, enable the decoupling and resulting acceleration and veering of winds at upper levels.
The power-law coefficient, which measures the variation of wind speed with height, also exhibits strong diurnal variability (Fig. 4f) when calculated either across the turbine rotor disk (between 40 and 120 m) or across the extent sampled by the lidar (between 40 and 220 m). Note that the lowest measurement available from the lidar is at 40 m above the surface, and so these \(\alpha \) values are lower than would be expected for comparison to other studies that calculate shear between a surface-layer measurement (10 m, for example) and altitudes at the turbine rotor disk (Walter et al. 2009). The marked diurnal variations in atmospheric stability are an important trait of atmospheric behaviour in the US Midwest region.
4 Wake Properties Vary with Inflow Wind Speed
Wake characteristics are expected to depend on inflow wind speed and elevation within the rotor disk. 10-min mean differences of horizontal wind speed and turbulence intensity data were categorized based on the upwind wind speed at the time of the difference measurement. Upwind wind-speed bins are defined in \(0.5\,\text{ m } \text{ s }^{-1}\) increments from \(2\) to \(16\,\text{ m } \text{ s }^{-1}\), the available range of wind speeds observed throughout the 7-week study period during southerly flow conditions. At least twenty 10-min data points contribute to each unique wind-speed and height bin presented here.
4.1 Wake Wind-Speed Deficit
The magnitude of the wind-speed deficit in the wake has been found to vary with altitude within the wake in field measurements (Baker and Walker 1984; Hogström et al. 1988; Chamorro and Porté-Agel 2010; Käsler et al. 2010). The wind-tunnel observations of Cal et al. (2010) and LES of Wu and Porté-Agel (2011) find a maximum deficit at hub height, as do the lidar observations of Iungo et al. (2013). We also find the average wake deficit exhibits largest values at or just above hub height (80–100 m) (Fig. 5a). The magnitude of the deficit depends on the upwind wind speed, with the maximum deficit occurring at wind speeds just below rated speed, on the order of \(10\,\text{ m } \text{ s }^{-1}\). These observations are consistent with expectations for pitch-control turbines such as those studied here: although the turbine will extract the maximum amount of energy at wind speeds below rated, at wind speeds above the rated speed the blades pitch so that no additional power is generated as the wind speed increases. At wind speeds above the rated speed, blades are pitched to extract less momentum from the flow and the wind-speed deficit can be smaller than at lower wind speeds.
Upwind wind speed-height contours of a mean wind speed and b mean turbulence intensity differences due to the turbine wake (\(2D\) upwind–\(3D\) downwind) for all stability conditions. D is the rotor diameter, or 74 m for this study. A minimum of 20 2-min data points were required in each bin
4.2 Wake Turbulence Intensity Enhancement
Previous investigations have observed enhanced turbulence intensity in the wake. This region of enhanced turbulence increases damaging loads on downwind turbines. Further, if this enhanced turbulence penetrates to the surface, it may modify surface-atmosphere exchanges of heat, momentum, moisture, and carbon dioxide. We observe a distinct region of turbulence enhancement at turbine hub height (Fig. 5b). As with the wind-speed deficit, the largest enhancement of turbulence occurs at wind speeds below rated. Other observations have noted turbulence enhancement also at the top of the rotor disk (Iungo et al. 2013). Given that our measurement volume at the top of the rotor disk is possibly sampling non-wake flow as discussed in Sect. 2.1, the measurements here do not support or deny the existence of enhanced turbulence at the top of the rotor disk.
5 Stable Nighttime Case Study
Stable conditions are of particular interest for wake studies, as wakes are expected to retain their integrity and propagate for longer distances during stable conditions, as compared to convective conditions in which strong background turbulence may erode a wake rapidly (Fitch et al. 2013). A night with consistent southerly flow during the CWEX campaign allows close examination of wake impacts during stable conditions. The nocturnal case study began at 2100 LST on 16 July 2011 and concluded at 0700 LST on 17 July 2011; sunset occurred at 1947 LST on 16 July, and sunrise followed at 0452 LST on 17 July. Weather conditions during the event consisted of southerly flow at wind-turbine hub height with wind-direction variations from \(170^{\circ }\) to \(195^{\circ }\) (Fig. 6). Average hub-height wind speeds were about \(9\,\text{ m } \text{ s }^{-1}\) and decreased throughout the night to \(7\,\text{ m } \text{ s }^{-1}\) with some variation. Synoptically, Iowa was situated between an anticyclone to the east and a weak low pressure system to the north-west, which resulted in southerly winds observed at hub height.
Wind directions (2-min averages) at 80 m observed by the upwind and downwind lidars over the night of 16–17 July 2011
5.1 Wake Effects on Wind Speed
The wind-speed deficit during the nighttime case study (Fig. 7) varied with both inflow wind speed and inflow wind direction. From 2230 until 0230 LST, the maximum wind-speed deficit at three rotor diameters downwind was located at 100 m above the surface, not at hub height, but rather in the top half of the rotor disk. The wake deficit was observed between 40 and 140 m, thereby extending above the top of the rotor disk at 120 m. The reduced wind speeds above the rotor disk suggest a vertically expanding turbine wake, which is unexpected during these stable conditions.
Time-height contours of a upwind, b downwind, and c upwind–downwind difference of horizontal wind speed, and d upwind, e downwind, and f upwind–downwind difference of power-law coefficient \(\alpha \) over the night of 16–17 July 2011 calculated from 2-min averages. Dotted lines indicate edge of the rotor disk and dashed lines show turbine-hub height. The solid white line indicates (a–c) the greatest difference of wind speed on a 20-min average basis. The solid black line (d–f) shows the lowest height between positive and negative wind-shear difference
Beginning at 0300 and concluding near 0500 LST, the wind direction shifted from a southerly flow to a more south–south-easterly flow. With this change in flow direction, the downwind wind-speed deficit was reduced, though still present, and because of this wind-direction shift, the downwind lidar (CU2) detected the edge of the wake.
5.2 Wake Enhancement of Turbulence Kinetic Energy
For most of the night, the upwind lidar observed TKE (as represented by \(E\)) values \({<}0.5\,\text{ m }^{2}\,\text{ s }^{-2}\) at all heights; \(E\) only began to increase in magnitude after sunrise at 0600 (Fig. 8). Throughout the night, the downwind lidar observed \(E\) to be approximately five times larger than values upwind. Larger values of \(E\) occurred frequently near the 80-m hub height but were seen throughout the rotor disc region by the downwind lidar. When the wind direction shifted to the south-east at 0245–0445 LST, the height of maximum \(E\) increased to around 100 m, although the overall values of \(E\) were reduced as the lidar likely sampled the edge of the wake during this time period. At both the downwind and upwind lidars, the large increase of \(E\) beginning shortly after 0600 was due to development of daytime convective conditions after sunrise at 0452 LST.
Time-height contours of a upwind, b downwind, and c upwind–downwind difference of the lidar-determined \(E\), and d upwind, e downwind, and f upwind–downwind difference of horizontal turbulence intensity over the night of 16–17 July 2011 calculated from 2-min averages. Dotted lines indicate edge of the rotor disk and dashed lines show turbine-hub height. The solid white lines indicate the greatest difference upwind versus downwind on a 20-min average basis
5.3 Wake Enhancement of Turbulence Intensity
Much like TKE, turbulence intensity values increase in the lee of a wind-turbine rotor due to increased turbulent flow in the wake. Turbulence intensity is presented here because it is commonly used in the wind industry when performing a wind resource assessment or turbine suitability studies (Brower 2012). As with \(E\), upwind turbulence intensities (Fig. 8a) were small at all heights throughout the night and only begin to increase at sunrise. However, the downwind turbulence intensity (Fig. 8b) in the rotor disc region remained large throughout the night. For wind directions between \(185^{\circ }\) and \(190^{\circ }\) (2130–2230 LST), the downwind lidar observed increased turbulence intensity in the lower half of the rotor disc between 40 and 80 m (as compared to the upwind lidar observations). Then from 2230 until 0300 LST with wind direction from \(170^{\circ }\) to \(175^{\circ }\), the largest values of turbulence intensity occurred at the 80-m level, with high values throughout the entire rotor. During the wind shift to slightly easterly flow, from 0300 to 0445 LST, the downwind lidar continued to observe increased turbulence intensity as compared to the upwind lidar observations, although the magnitude of the enhancement was smaller than when the flow was more southerly. After sunrise, a further deepening of the turbulent layer was evident in both the upwind and downwind datasets, though the largest values of turbulence intensity downwind were still in the top half of the rotor disc area.
5.4 Wake Impacts on the Power-Law Coefficient \(\alpha \)
Upwind \(\alpha \) values (Fig. 7d) remained near zero for most of the heights observed, but below 70 m, \(\alpha \) increased to between 0.02 and 0.08, reflecting the strong wind shear closer to the surface as the wind speed increased with height. Downwind (Fig. 7e), 50–70 m \(\alpha \) exhibited negative values between \(-\)0.1 and \(-\)0.02 from 50 to 70 m, indicating that wind speeds in the bottom half of the rotor disk decreased with height due to the momentum extraction from the turbine. Between 70 and 100 m, \(\alpha \) ranged from 0.02 to 0.04, indicating increasing wind speeds with height. Of note, this pattern changes during the easterly wind shift around 0330 to 0400 LST where the values of \(\alpha \) resembled values observed upwind of the turbine, suggesting that during this time period, the lidar did not sample a turbine wake. Additionally, the distinct sign change of the power-law coefficient during a period when the wake was not sampled suggests the utility of \(\alpha \) (or the sign of wind shear) as a parameter for determining wake versus non-wake conditions.
6 Discussion and Conclusions
Wind turbines have a measureable effect on atmospheric flow as determined using data from wind-profiling lidars located approximately two rotor diameters (160 m) upwind and three rotor diameters (240 m) downwind of a multi-MW three-bladed horizontal axis wind-turbine generator (WTG). The "undisturbed" flow upwind of the turbine in this summertime US Midwest location is marked by a strong diurnal cycle of moderate and daytime winds with little shear and strong nocturnal low-level jets with considerable shear; the nocturnal flow shows evidence of changes in wind direction on the order of 20\(^{\circ }\) across typical turbine rotor-disk altitudes. We find reduced wind speed, enhanced TKE, and enhanced turbulence intensity within the wake, and that the characteristics of the wake vary with inflow wind speed.
After quantifying the error that can be expected from lidar measurements of inhomogeneous flow in the turbine wake, over 100 h of data were aggregated to quantify the variability of the wake as a function of inflow wind speed. At all wind speeds, the height of maximum wind-speed reduction is at hub height or the range gate immediately above the hub. In neutrally stratified wind-tunnel observations, Chamorro and Porté-Agel (2009) found the maximum wind speed reduction at \(3D\) downwind to be very close to hub height. Similarly, Cal et al. (2010) report a wake-velocity minimum at hub height. In reporting other measurements of wakes in the atmosphere, various authors have asserted that maximum velocity deficits occur below hub height (Elliott and Barnard 1990), at hub height (Kambezidis et al. 1990), "somewhat" above hub height (Magnusson and Smedman 1994; Helmis et al. 1995) or "near" hub height (Barthelmie et al. 2003). The results found here are thus consistent with the wide range reported previously. Further, we find that the maximum deficit occurs at wind speeds just below the rated wind speed for the turbine, consistent with expectations based on the variation in the tip-speed ratio and thrust coefficient of the turbine (Elliott and Barnard 1990; Magnusson and Smedman 1994; Helmis et al. 1995; Barthelmie et al. 2007).
The turbine wake is also characterized by enhanced turbulence, which can induce large loads and stresses on downwind turbines. We find the height of maximum TKE enhancement to be at hub height in both a 100-h aggregation of wake characteristics and in a stable boundary-layer case study, with the caveat that observations at the top of the rotor disk are not available due to the measurement volume of the lidar at that altitude. Using neutral boundary-layer wind-tunnel observations, Chamorro and Porté-Agel (2009) found the height of maximum turbulence enhancement to be above hub height, in the top half of the rotor disk, associated with the high turbulence levels produced by the turbine blade-tip vortices. The present observations differ in that we see the turbulence intensity and the TKE at maximum levels closer to hub height in both stable conditions and unstable conditions, in contrast to the neutral conditions used in the wind tunnel. The wind-tunnel measurements and the LES of Wu and Porté-Agel (2011) normalize turbulence intensity by the hub-height inflow value instead of the value at that height. (We have also normalized our data by hub-height inflow values (not shown), and find no meaningful difference with the data presented here.) Cal et al. (2010) found maximum Reynolds shear stresses in the wake in the top half of the rotor disk. As shown in their Fig. 15, the streamwise velocity variance is at a maximum in the centre of the top half of the disk, while the vertical velocity variance also has a maximum value in the top of the rotor disk. Other field investigations (Magnusson and Smedman 1994) have noted two distinct maxima in the turbulence profile of the wake, attributed to the tip vortices off the blades, at distances of approximately four rotor diameters downwind.
Our case study emphasizes that observations of the wake are highly dependent on the inflow wind direction: subtle changes in wind direction were sufficient to remove the wake from the sampling volume of the lidar. The sign of the wind-speed shear (often expressed as \(\alpha \), the power-law coefficient) in the lower half of the rotor disk (between 50 and 70 m) is a useful determinant of wake versus non-wake downwind conditions. Because momentum extraction in the rotor disc region produces negative shear in the lower half of the rotor, the shear is a clear indicator of the wind-turbine wake and may prove a more precise tool than upwind wind direction for definition of wake periods for fixed downwind measurements. The strong effect of the wake on the shear of the lower boundary layer has also been observed in the wind-tunnel study of Cal et al. (2010). We conclude that evaluating the sign of the wind shear is a simple identifier for wake conditions and is potentially of use for quantifying wake behaviour and propagation.
The present work has provided insight into the impact of turbine wakes on the atmosphere at one location downwind of a single turbine. To understand the spatial extent of turbine wakes and their evolution downstream, however, future field studies should incorporate scanning lidar to investigate how a wake evolves far downwind of a turbine (Käsler et al. 2010; Iungo et al. 2013; Smalikho et al. 2013). Scanning lidar captures information across a vertical profile of the wake or horizontal scans that span wake and non-wake conditions over tens of seconds. Additionally, use of a radiometer or instrumented tall tower to measure temperature and moisture profiles in the atmospheric boundary layer would allow for better understanding of stability conditions and impacts of the turbine wake on the moisture flux (Friedrich et al. 2012). In situ flux measurements of moisture, heat, or trace gases from meteorological towers that span the distance between standard 10-m meteorological stations and the 40-m lowest lidar measurement level would provide a more complete understanding of how wind-turbine wakes propagate to, and interact with, the surface. Field observations such as these provide data critical for validating turbine-wake models (Churchfield et al. 2012) and wind-tunnel observations, and for guiding assessments of the impacts of wakes on surface fluxes or surface temperatures downwind of turbines.
The authors gratefully acknowledge the efforts of our collaborators in the CWEX experiment, including the Iowa State Team of Dr. Gene Takle, Dan Rajewski, Russ Doorenbos, Kris Spoth, Jimmy Cayer, and the NCAR team including Dr. Steve Oncley and Dr. Tom Horst. We also extend appreciation to the wind farm operators and the landowners who permitted the deployment of the lidar systems, and to Dr. Branko Kosović, Ms. Alice DuVivier, Dr. Andrew Clifton, and Mr. Brian Vanderwende for useful discussions and suggestions. We express appreciation for the helpful comments of two anonymous reviewers. This work was supported by the National Renewable Energy Laboratory under APUP UGA-0-41026-22. NREL is a national laboratory of the US Department of Energy, Office of Energy Efficiency and Renewable Energy, operated by the Alliance for Sustainable Energy, LLC.
Aitken ML, Rhodes ME, Lundquist JK (2012) Performance of a wind-profiling lidar in the region of wind turbine rotor disks. J Atmos Ocean Technol 29:347–355CrossRefGoogle Scholar
Baidya Roy S (2011) Simulating impacts of wind farms on local hydrometeorology. J Wind Eng Ind Aerodyn 99:491–498CrossRefGoogle Scholar
Baker RW, Walker SN (1984) Wake measurements behind a large horizontal axis wind turbine generator. Sol Energy 33:5–12CrossRefGoogle Scholar
Banta RM, Newsom RK, Lundquist JK, Pichugina YL, Coulter RL, Mahrt L (2002) Nocturnal low-level jet characteristics over Kansas during CASES-99. Boundary-Layer Meteorol 105(2):221–252Google Scholar
Barthelmie RJ, Folkerts L, Ormel FT, Sanderhoff P, Eecen PJ, Stobbe O, Nielsen NM (2003) Offshore wind turbine wakes measured by SODAR. J Atmos Ocean Technol 20:466–477CrossRefGoogle Scholar
Barthelmie RJ, Frandsen ST, Nielsen MN, Pryor SC, Rethore P-E, Jørgensen HE (2007) Modelling and measurements of power losses and turbulence intensity in wind turbine wakes at Middelgrunden offshore wind farm. Wind Energy 10:517–528CrossRefGoogle Scholar
Barthelmie RJ, Pryor SC, Frandsen ST, Hansen KS, Schepers JG, Rados K, Schlez W, Neubert A, Jensen LE, Neckelmann S (2010) Quantifying the impact of wind turbine wakes on power output at offshore wind farms. J Atmos Ocean Technol 27:1302–1317CrossRefGoogle Scholar
Bingöl F, Mann J, Foussekis D (2008) Modeling conically scanning lidar error in complex terrain with WAsP engineering. Danmarks Tekniske Universitet, Risø Nationallaboratoriet for Bæredygtig Energi, 2008. 16 pp (Denmark. Forskningscenter Risoe. Risoe-R; No. 1664(EN)). http://orbit.dtu.dk/services/downloadRegister/3332817/ris-r-1664.pdf
Blackadar AK (1957) Boundary layer wind maxima and their significance for the growth of nocturnal inversions. Bull Am Meteorol Soc 38:283–290Google Scholar
Brower M (2012) Wind resource assessment. Wiley, New YorkCrossRefGoogle Scholar
Cal RB, Lebrón J, Castillo L, Kang HS, Meneveau C (2010) Experimental study of the horizontally averaged flow structure in a model wind-turbine array boundary layer. J Renew Sustain Energy 2:013106–1- 013106-25Google Scholar
Cariou N, Wagner R, Gottschall J (2010) Analysis of vertical wind direction and speed gradients for data from the Met. Mast at Høvsøre. Danmarks Tekniske Universitet, Risø Nationallaboratoriet for Bæredygtig Energi. 34 pp. http://www.risoe.dk/en/Knowledge_base/publications/Reports/ris-r-1733.aspx?sc_lang=da
Chamorro LP, Porté-Agel F (2009) A wind-tunnel investigation of wind-turbine wakes: boundary-layer turbulence effects. Boundary-Layer Meteorol 132:129–149CrossRefGoogle Scholar
Chamorro LP, Porté-Agel F (2010) Effects of thermal stability and incoming boundary-layer flow characteristics on wind-turbine wakes: a wind-tunnel study. Boundary-Layer Meteorol 136:515–533CrossRefGoogle Scholar
Churchfield MJ, Lee S, Michalakes J, Moriarty PJ (2012) A numerical study of the effects of atmospheric and wake turbulence on wind turbine dynamics. J Turbul 13:1–32CrossRefGoogle Scholar
Courtney M, Wagner R, Lindelöw P (2008) Testing and comparison of lidars for profile and turbulence measurements in wind energy. IOP Conf Ser Earth Environ Sci 1:012021. doi: 10.1088/1755-1315/1/1/012021
Elliott DL, Barnard JC (1990) Observations of wind turbine wakes and surface roughness effects on wind flow variability. Sol Energy 45:265–283CrossRefGoogle Scholar
Fitch A, Lundquist JK, Olson JB (2013) Mesoscale influences of wind farms throughout a diurnal cycle. Mon Weather Rev (in press). doi: 10.1175/MWR-D-12-00185.1
Frandsen ST (2007) Turbulence and turbulence-generated structural loading in wind turbine clusters. 135 pp. http://www.risoe.dtu.dk/rispubl/VEA/veapdf/ris-r-1188.pdf
Frehlich R, Meillier Y, Jensen ML, Balsley B (2003) Turbulence measurements with the CIRES tethered lifting system during CASES-99: calibration and spectral analysis of temperature and velocity. J Atmos Sci 60:2487–2495CrossRefGoogle Scholar
Friedrich K, Lundquist JK, Aitken M, Kalina EA, Marshall RF (2012) Stability and turbulence in the atmospheric boundary layer: a comparison of remote sensing and tower observations. Geophys Res Lett 39:1–6CrossRefGoogle Scholar
Helmis CG, Papadopoulos KH, Asimakopoulos DN, Papageorgas PG, Soilemes AT (1995) An experimental study of the near-wake structure of a wind turbine operating over complex terrain. Sol Energy 54:413–428CrossRefGoogle Scholar
Hirth BD, Schroeder JL (2013) Documenting wind speed and power deficits behind a utility-scale wind turbine. J Appl Meteorol Climatol 52:39–46. doi: 10.1175/JAMC-D-12-0145.1 CrossRefGoogle Scholar
Hogström DA, Kambezidis H, Helmis C, Smedman A (1988) A field study of the wake behind a 2 MW wind turbine. Atmos Environ 22:803–820CrossRefGoogle Scholar
Iungo GV, Wu Y-T, Porté-Agel F (2013) Field measurements of wind turbine wakes with lidars. J Atmos Ocean Technol 30:274–287. doi: 10.1175/JTECH-D-12-00051.1 CrossRefGoogle Scholar
Jacobson MZ, Delucchi MA (2011) Providing all global energy with wind, water, and solar power, part I: technologies, energy resources, quantities and areas of infrastructure, and materials. Energy Policy 39:1154–1169CrossRefGoogle Scholar
Kambezidis HD, Asimakopoulos DN, Helmis CG (1990) Wake measurements behind a horizontal-axis 50 kW wind turbine. Sol Wind Technol 7:177–184CrossRefGoogle Scholar
Käsler Y, Rahm S, Simmet R, Kühn M (2010) Wake measurements of a multi-MW wind turbine with coherent long-range pulsed doppler wind lidar. J Atmos Ocean Technol 27:1529–1532CrossRefGoogle Scholar
Kocer G, Mansour M, Chokani N, Abhari RS, Muller M (2011) Full-scale wind turbine near-wake measurements using an instrumented uninhabited aerial vehicle. J Sol Energy Eng 133:041011–1-041011-8Google Scholar
Magnusson M, Smedman AS (1994) Influence of atmospheric stability on wind turbine wakes. Wind Eng 18:139–151Google Scholar
Mahrt L (1998) Flux sampling errors for aircraft and towers. J Atmos Ocean Technol 15:416–429. doi: 10.1175/1520-0426(1998)0150416:FSEFAA2.0.CO;2 CrossRefGoogle Scholar
Milligan M, Lew D, Corbus D, Piwko R, Miller N, Clark K, Jordan G, Freeman L, Zavadil B, Schuerger M (2009) Large-scale wind integration studies in the United States: preliminary results. NREL/CP-550- 46527, 8 ppGoogle Scholar
Porte-Agel F, Wu Y-T, Lu H, Conzemius R (2011) Large-eddy simulation of atmospheric boundary layer flow through wind turbines and wind farms. J Wind Eng Ind Aerodyn 99:154–168CrossRefGoogle Scholar
Rajewski DA et al (2013) Crop wind energy experiment (CWEX): observations of surface-layer, boundary layer, and mesoscale interactions with a wind farm. Bull Am Meteorol Soc 94:655–672CrossRefGoogle Scholar
Sathe A, Mann J, Gottschall J, Courtney MS (2011) Can wind lidars measure turbulence? J Atmos Ocean Technol 28:853–868CrossRefGoogle Scholar
Schwartz MN, Elliott DL (2006) Wind shear characteristics at central plains tall towers. National Renewable Energy Laboratory, 13 ppGoogle Scholar
Smalikho IN, Banakh VA, Pichugina YL, Brewer WA, Banta RM, Lundquist JK, Kelley ND (2013) Lidar investigation of atmosphere effect on a wind turbine wake. J Atmos Ocean Technol. doi: 10.1175/JTECH-D-12-00108.1 Google Scholar
Stull RB (1988) An introduction to boundary-layer meteorology. Kluwer, Dordrecht, 666 ppGoogle Scholar
Trujillo J-J, Bingöl F, Larsen GC, Mann J, Kühn M (2011) Light detection and ranging measurements of wake dynamics. Part II: Two-dimensional scanning. Wind Energy 14:61–75CrossRefGoogle Scholar
USDA (2012) US county crop harvest. http://www.nass.usda.gov/Charts_and_Maps/Crops_County/index.asp
Walter K, Weiss CC, Swift AHP, Chapman J, Kelley ND (2009) Speed and direction shear in the stable nocturnal boundary layer. J Sol Energy Eng 131:011013–1-011013-7Google Scholar
Wharton S, Lundquist JK (2012a) Atmospheric stability affects wind turbine power collection. Environ Res Lett 7:014005–1-014005-9Google Scholar
Wharton S, Lundquist JK (2012b) Assessing atmospheric stability and its impacts on rotor-disk wind characteristics at an onshore wind farm. Wind Energy 15:525–546CrossRefGoogle Scholar
Whiteman CD, Bian X, Zhong S (1997) Low-level jet climatology from enhanced rawindsonde observations at a site in the southern Great Plains. J Appl Meteorol 36:1363–1376Google Scholar
Wu Y-T, Porté-Agel F (2011) Large-eddy simulation of wind-turbine wakes: evaluation of turbine parametrisations. Boundary-Layer Meteorol 138:345–366CrossRefGoogle Scholar
Zhou L, Tian Y, Roy SB, Thorncroft C, Bosart LF, Hu Y (2012) Impacts of wind farms on land surface temperature. Nat Clim Chang 2:539–543Google Scholar
Open AccessThis article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
1.Department of Atmospheric and Oceanic Sciences, 311 UCBUniversity of ColoradoBoulderUSA
2.National Renewable Energy LaboratoryGoldenUSA
Rhodes, M.E. & Lundquist, J.K. Boundary-Layer Meteorol (2013) 149: 85. https://doi.org/10.1007/s10546-013-9834-x
Received 11 November 2012
DOI https://doi.org/10.1007/s10546-013-9834-x
|
CommonCrawl
|
Prediction accuracy measurements as a fitness function for software effort estimation
Tomas Urbanek ORCID: orcid.org/0000-0002-6307-28241,
Zdenka Prokopova1,
Radek Silhavy1 &
Veronika Vesela1
This paper evaluates the usage of analytical programming and different fitness functions for software effort estimation. Analytical programming and differential evolution generate regression functions. These functions are evaluated by the fitness function which is part of differential evolution. The differential evolution requires a proper fitness function for effective optimization. The problem is in proper selection of the fitness function. Analytical programming and different fitness functions were tested to assess insight to this problem. Mean magnitude of relative error, prediction 25 %, mean squared error (MSE) and other metrics were as possible candidates for proper fitness function. The experimental results shows that means squared error performs best and therefore is recommended as a fitness function. Moreover, this work shows that analytical programming method is viable method for calibrating use case points method. All results were evaluated by standard approach: visual inspection and statistical significance.
Effort estimation is defined as the activity of predicting the amount of effort required to complete a development of software project (Keung 2008). It is necessary to predict the effort estimation in the early stages of the software development cycle. In the best case, estimates should be calculated after a requirement analysis (Karner 1993).
Effort estimation methods can be divided into two major groups algorithmic methods and non-algorithmic methods. Algorithmic methods carries mathematical formula, which is regression model of historical data. The most famous methods are COCOMO (Boehm 1984), FP (Atkinson and Shepperd 1994) and UCP (Karner 1993). But there is a lot of algorithmic methods. To the second category belong methods like expert judgement and analogy based methods. The most famous methods is Delphi (Rowe and Wright 1999).
The use of artificial intelligence may be a promising way to improve the accuracy of effort estimations. Accurate and consistent estimates are crucial in software project management. These estimates are used for the effective planning, monitoring and controlling of a software development cycle. Project managers may use these estimates to arrive at better management decisions. Software engineering is a complicated process because there are a lot of factors—for example, the size of the development team, the actual requirements, the programming language used, as well as other factors. These factors may have a considerably impact on the accuracy of the effort estimation process.
In this research study, the analytical programming method was used to improve the use case points method. The Use Case Points method is widely used for effort estimation in software engineering. The main benefit of this method is that it provides effort estimates at a relatively early stage in the software development cycle. Nevertheless, this method is fully dependent on the human factor since the project manager has to estimate the project parameters and set the weights. There is a low probability that two project managers will perform these estimates exactly alike. Therefore, this research uses artificial intelligence to account for this dependency on the human factor. At the same time, this method is based on straightforward computation and allows a wide range of calibration—which can be achieved by setting the weights. The combination of analytical programming and the use case points method is used to derive early and more accurate effort estimation results. Analytical programming—as a symbolic regression technique, could be used to create a new model for the use case points method. An appropriate fitness function is vital for this task (Harman and Jones 2001). The fitness function evaluates solutions and decides whether the solution is acceptable—or not, for further processing. There are a large number of prediction accuracy measurement methods for assessing the accuracy of a predictive model. Thus, in this field, one of the main obstacles is to report the accuracy correctly. MMRE or Pred(25) are mainly used for the evaluation of the statistical properties of predictive models in the software engineering field. Currently, the MMRE method is being criticised by some experts in this field—e.g., in Myrtveit et al. (2003), Shepperd et al. (2000) or Kitchenham et al. (2001); however, the method is de-facto considered as a standard for reporting the suitability of a proposed model. In this study, prediction accuracy measurements will be used as fitness functions for the analytical programming.
The Sect. "Related work" of this paper summarise the related work in this field. Section "Problem statement" present the research questions for this work. Section "Experiment planning" describes the methodology used for this study. Section "Results" is devoted to the results of this work. In the next section you can see the limitations of this study. And finally, Sect. "Discussion" present discussion and conclude this paper.
The use case points method: short description
This effort estimation method was presented in 1993 by Karner (1993). It is based on a similar principle to the function point method. Project managers have to estimate the project parameters to four tables. These tables are as follows:
Unadjusted use case weight (UUCW)
Unadjusted actor weight (UAW)
Technical complexity factor (TCF)
Environmental complexity factor (ECF)
Unadjusted use case weight
The UCP method includes three categories for use case classification, which concern the use case complexity of the developed system. All the categories with weights are presented in Table 1. The influence of actor classification (UCW) are assessed by summing the number of use case with corresponding weights, see the Eq. 1.
Table 1 UCP table for estimation unadjusted use case weight
$$\begin{aligned} UUCW = \sum \limits _{i\in C}uClassification(c)*uWeight(c), \end{aligned}$$
where \(C\in \{simple,average,complex\}\) as can be seen in Table 1.
Unadjusted actor weight
The UCP method includes three categories for actor classification, which concern the actor complexity of the developed system. All the categories with weights are presented in Table 2. The influence of actor classification (UAW) are assessed by summing the number of actors with corresponding weights, see the Eq. 2.
Table 2 UCP table for actor classification
$$\begin{aligned} UAW = \sum \limits _{i\in C}aClassification(c)*aWeight(c), \end{aligned}$$
Technical complexity factor
The UCP method includes 13 technical factors, which concern the technical complexity of the developed system. All the technical factors are presented in Table 3. The influence of technical complexity factors (TCF) are assessed by assigning a value from 0 to 5 to each of them. This value is multiplied by a weight of a factor and totaled, see the Eq. 3.
Table 3 UCP table for technical factor specification
$$\begin{aligned} TCF = 0.6 + \left( 0.01 * \sum \limits _{i=1}^{13}Value_i*Weight_i\right) \end{aligned}$$
Environmental complexity factor
The UCP method includes 8 environmental factors, which concern the environmental complexity of the developed system. All the environmental factors are presented in Table 4. The influence of environmental complexity factors (ECF) are assessed by assigning a value from 0 to 5 to each of them. This value is multiplied by a weight of a factor and totaled, see the Eq. 4.
Table 4 UCP table for environmental factor specification
$$\begin{aligned} ECF = 1.4 + \left( -0.03 * \sum \limits _{i=1}^{8}Value_i*Weight_i\right) \end{aligned}$$
Final equations
The Eq. 5, is used for the calculation of the number of use case points. This number of use case points then has to be multiplied by productivity factor in order to obtain the effort estimation result, i.e., Eq. 6. This productivity factor was chosen by Karner (1993), and was set to default value 20 h per UCP. The calibration of use case points will be performed by replacing the Karner's equation for new model. This new model will be built by analytical programming method.
$$\begin{aligned} UCP = (UUCW + UAW) * TCF * ECF \end{aligned}$$
$$\begin{aligned} EE = UCP * PF \end{aligned}$$
Optimization tools
In this research, we use analytical programming method with differential evolution algorithm to calibrate use case points method.
Analytical programming
Analytical programming (AP), is a symbolic regression method. The core of analytical programming is a set of functions and operands. These mathematical objects are used for the synthesis of a new function. Every function in the analytical programming set core has its own varying number of parameters. The functions are sorted according to these parameters into general function sets (GFS). For example, \(GFS_{1par}\) contains functions that have only 1 parameter—e.g., sin(), cos(), or other functions. AP must be used with any evolutionary algorithm that consists of a population of individuals for its run (Zelinka et al. 2011; Oplatkova et al. 2013). In this paper, Differential evolution (DE) is used as an analytical programming evolutionary algorithm.
Scheme of analytical programming with differential evolution algorithm
The function of analytical programming can be seen in Fig. 1. In this case, the evolutionary algorithm is a differential evolution. The initial population is generated using differential evolution. This population, which must consist of natural numbers, is used for analytical programming purposes. The analytical programming then constructs the function on the basis of this population. This function is evaluated by its fitness function. If the termination condition is met, then the algorithm ends. If the condition is not met, then differential evolution creates a new population through the mutation and recombination processes. The whole process continues with the new population. At the end of the analytical programming process, it is assumed that one has a function that is the optimal solution for the given task.
Differential evolution
Differential evolution is an optimisation algorithm introduced by Storn and Price (1995). This optimisation method is an evolutionary algorithm based on population, mutation and recombination. Differential evolution is easy to implement and has only four parameters which need to be set. The parameters are: generations, NP, F and Cr. The generations parameter determines the number of generations; the NP parameter is the population size; the F parameter is the weighting factor; and the Cr parameter is the crossover probability (Storn 1996). In this research, the differential evolution is used as an analytical programming engine.
The fitness function
The fitness function is a mathematical formula that assesses the appropriateness of the solution of a given task. The selection of the appropriate fitness function is one of the most important tasks in designing an evolutionary process (Harman and Jones 2001). In the case of this study, the prediction accuracy measurements are used as fitness functions. These measurements are commonly used for the evaluation of the predictive model. It is assumed that this use of predictive accuracy measurements allows one to determine the behaviour of different fitness functions. These knowledge will be important for future research.
Some work has been done to enhance the effort estimation based on the use case points method. These enhancements cover the review and calibrating the productivity factor such as the work of Subriadi and Ningrum (2014). Another enhancement could be the construction investigation and simplification of the use case points method presented by Ochodek et al. (2011). The recent work of Silhavy et al. (2014) suggest a new approach "automatic complexity estimation based on requirements", which is partly based on use case points method. Or using fuzzy inference system approach to improve accuracy of the use case points method (Nassif et al. 2011). Surveys such as that conducted by Kitchenham et al. (2001), have shown that MMRE measures the spread (i.e. standard deviation). Therefore, this measurement is not suitable for accuracy predictions. The same study also showed that Pred(25) is a measurement of Kurtosis. Thus far, several studies such as Burgess et al. (2001), Chavoya et al. (2013) and Chavoya et al. (2013) have tested the efficiency of using the genetic programming method for more accurate effort estimation. In 2010, Ferrucci et al. (2010) published a paper in which they used a similar principle to assess accuracy by using different fitness functions. The authors used genetic programming and the function point method for their research. Genetic programming can suffer on bloat effect and constant resolving. In this research study on the other hand, a combination of analytical programming and the use case points method were used. There is no bloat effect in analytical programming because model is built by giving the length of the model. The problem of constant resolving can be solve by meta-evolution or non-linear fitting, e.g., Levenberg-Marquardt algorithm.
The overall research question to be answered within the study is whether there is a possibility to outperformed the Karner's equation by analytical programming method and is there a fitness function which outperforms the other fitness functions. This section presents the design of the research questions we carried out to get an insight in the use of analytical programming for effort estimation. The research questions of our study can be outlined as follows:
RQ-1 Comparing the estimates achieved by applying analytical programming with the estimates obtained by standard use case points method equation.
RQ-2 Analysing the impact of different fitness functions on the accuracy of the estimation models built with analytical programming.
The first research question (RQ-1) aims to get an insight on the estimation accuracy of analytical programming and understand the actual effectiveness of this technique with respect to the estimates by standard use case points method. For this reason, we first calibrate the UCP equation to produce the best estimates. Then, we try to outperformed this estimates by the method of analytical programming. The same process was carried out for standard calibration of UCP method. To address research question (RQ-2) we experimented with ten different fitness functions as reported and discussed in experiment planning section. To asses the performance of fitness function we used descriptive statistics and Wilcox signed rank test.
Experiment planning
Diagram of proposed experiment
The proposed experiment can be seen in the Fig. 2. The process begins with a cycle that loops through the number of used fitness functions. In this case, there are ten fitness functions. Ten different seeds were used to assess the reliability of the proposed experiment. In the data preparation loop, the seed was used to split the dataset into to two distinct sets. The dataset was split into the ratio of 66 % (i.e., training set) and 33 % (i.e. testing set). The dataset is depicted in Table 5. Then, there is a third loop that runs 10 times. In this loop, the differential evolution process starts to generate an initial population. Analytical programming then uses this initial population to synthesise a new function. After that, the new function is evaluated by the one of the selected fitness functions. If the termination condition is met, one can assume that one has an optimal predictive model, and this model is then evaluated by the calculation of the least absolute deviation (LAD) on the testing set. Then, the results are saved to file for further analysis. It is necessary to note that 10 different seeds are used for every of 10 models, as well as one of the 10 fitness functions. Thus, we have a total of 10 \(\times\) 10 \(\times\) 10 solutions.
The data for this study was collected using document reviews. The use case points method dataset was obtained from Poznan University of Technology (Ochodek et al. 2011) and from Subriadi's paper (Subriadi and Ningrum 2014).
Table 5 Data used for effort estimation
Table 5 displays the use case points method data from 24 projects. Only the use case points method data with transitions were utilized in this paper in the case of the Poznan University of Technology dataset. There are 5 values for each software project: UUCW, UAW, TCF, ECF and actual effort. Software projects 1–14 are from Poznan University of Technology. The rest are from Subriadi's paper. As can be seen Subriadi's data are quite consistent in actual effort. The possible reason is that these projects are related to one context, respectively linked to the web development software projects. The distribution of actual effort of this dataset can be seen on Fig. 3.
The distribution of actual efforts
Table 6 Set-up of analytical programming
Table 6 shows the analytical programming set-up. The number of leafs (functions built by analytical programming can be seen as trees) was set at 30, which can be recognized as a relatively high value. However, one needs to find the model that will be more accurate than the Karner's model. There is no need to generate short and easily memorable model, but rather, model that will be more accurate.
Table 7 Set-up of differential evolution
Table 7 shows the set-up of differential evolution. The best set-up of differential evolution is the subject of further research.
Fitness functions
The new model built by the analytical programming method contains the following parameters: UUCW, UAW, TCF and ECF. There is no force applied to the analytical programming that the models built by the analytical programming method have to contain all of these parameters. Ten different fitness functions (i.e., prediction accuracy measurements) were applied in this research.
Table 8 Used prediction accuracy measures
Table 8 shows the prediction accuracy measurements used. These equations were used for the learning algorithm. Standard accuracy measurements in the software engineering field—like MMRE or Pred(25) were chosen. Moreover, accuracy measurements used for general purposes—like the LAD or MSE methods were also chosen. For equations from 1 to 8; when the equation result is closer to zero, then the accuracy of the proposed model is higher. On the other hand, this condition does not apply for Eqs. 9 and 10—namely, the R squared (\(R^2\)) method and the prediction within 25 % Pred(25) method. The result of the \(R^2\) method ranges from 0 to 1, and the accuracy of the proposed model is higher when \(R^2\) is closer to 1. Likewise, the same conditions apply for Pred(25).
In this section, we present the result of our study. Exploratory statistical analysis and hypothesis testing were utilized to describe research results. All the calculations was performed on testing dataset, which consist of 8 randomly chosen data from dataset. To obtain the average error for one project one need to divide the error by value 8.
Statistics for each fitness function (one box is calculated from 100 equations and for testing dataset)
Figure 4 provide Statistics for each fitness function. As can be seen on this graph, nearly all fitness functions have a median value about 4000 man/h. On this figure could be also seen a considerably worse statistical properties for MdEMRE, Pred25 and MdMRE. As can be notice nearly all fitness functions have a minimum value about 2500 man/h. The exact values can be seen in Table 9.
Table 9 Summary statistics for each prediction accuracy measure
Table 9 provides the summary statistics for each fitness function. The minimum value of the minimum was calculated by MMRE, which is considerably lower then minim values of other fitness function. The most surprising aspect of the data is in the calculation of maximum value for MdMRE, MSE, LAD and MAE. These fitness functions does not reach the penalisation maximum. The penalisation maximum was set to 1,000,000 and in calculations for almost each equation was reached in about 1–2 %. The median value for every cost function is about 4000 man/h.
Median statistics of prediction error for standard UCP equation on testing dataset
Figure 5 shows the median of predicted error on testing data for Eq. 5. As can be seen the optimal productivity factor for testing dataset is between 11 and 14. The productivity factor value of 20, which is widely used, produce median error of 7469 man/h. Minimum value is 3227 man/h, if the error was set to 11.8. The median value of 3227 man/h was used as a value which need to be outperformed to have better results than from standard UCP Eq. 5.
Optimal productivity factor
The optimal productivity factor was set according to Fig. 5. The minimum value is 3227 man/h, if the productivity factor was set to 11.8. The Wilcox signed rank test for one sample was used to determine which fitness function have a location shift lower than 3227 man/h. All calculation was performed on 95 % significance level.
Table 10 Hypothesis testing for optimal productivity factor
Table 10 provides the results of Wilcox signed rank test for one sample. Every fitness function was tested on NULL hypothesis that this fitness function have lower true location than 3227 man/h. The value of "True" means that NULL hypothesis was accepted. The value of "False" means that alternative hypothesis was accepted. None of the proposed fitness functions have true location lower than 3227 man/h.
Table 11 The probability that fitness function calculate equation which is below the optimal standard UCP equation median
Table 11 show the probability that fitness function calculate equation which is below the standard UCP equation median. As can be seen on this table, the best probability is provided by RMSE fitness function. The Pred(25) fitness function show the worst result only 9 equations from 100 equations are below 3227 man/h.
Standard productivity factor
The standard productivity factor was set to 20. The median value for this productivity factor is 7469 man/h according to Fig. 5. The Wilcox signed rank test for one sample was used to determine which fitness function have a location shift lower than 7469 man/h. All calculation was performed on 95 % significance level.
Table 12 Hypothesis testing for standard productivity factor
Table 12 provides the results of Wilcox signed rank test for one sample. Every fitness function was tested on NULL hypothesis that this fitness function have lower true location than 7469 man/h. The value of "True" means that NULL hypothesis was accepted. The value of "False" means that alternative hypothesis was accepted. All fitness functions have true location lower than 7469 man/h.
Table 13 The probability that fitness function calculate equation which is below the standard UCP equation median
Table 13 show the probability that fitness function calculate equation which is below the standard UCP equation median. As can be seen on this table, the best probability is provided by MSE, MAE and MMRE fitness functions. The MdMRE fitness function show the worst result 81 equations from 100 equations are below 7469 man/h.
It is widely recognised that several factors can bias the validity of empirical studies. Therefore, our results are not devoid of validity threats.
External validity
External validity questions whether the results can be generalized outside the specifications of a study (Milicic and Wohlin 2004). Specific measures were taken to support external validity; for example, a random sampling technique was used to draw samples from the population in order to conduct experiments. Likewise, the statistical tests used in this paper, they are also quite standard. We note that the Wilcoxon method used in this paper features prominently. We used a relatively small size dataset, which could be a significant threat to external validity. Also the employed dataset contains projects related to one context that might be characterised by some specific properties. Similarly, we do not see how a smaller or larger dataset size should yield reliable results. It is widely recognised that, SEE datasets are neither easy to find nor easy to collect. This represents an important external validity threat that can be mitigated only replicating the study on another datasets. Another validity issue to mention is that either analytical programming nor differential evolution has been exhausted via fine-tuning. Therefore, future work is required to exhaust all the parameters of these methods to use their best versions. Threat to external validity could be also the implementation of the analytical programming and differential evolution algorithms. Although we used standard implementations, there is considerable amount of code, which could be the threat to validity.
Internal validity
Internal validity questions to what extent the cause-effect relationship between dependent and independent variables hold (Alpaydın 2014). This paper used random sampling technique to assess methods. An alternate experimental condition would be to use N-way cross-validation. In theory, not using cross-validation is a threat to the validity of our results since we did not check if our results were stable across both random sampling technique and cross-validation.
The study started out with a goals of answering the overall research questions (RQ-1) of whether analytical programming technique outperformed the standard UCP equation. This question is answered in the result section. If the UCP method is optimized, via calibrating weight or via production factor, the analytical programming method is not efficient enough to outperform standard UCP equation. The evidence can be seen in result section in Table 10. As can be seen in this table, there is no fitness function with less median value then the standard UCP equation has on the significance level 95 %. On the other hand, if the productivity factor and the whole UCP is set to default value, there is a possibility, that model built by analytical programming outperform the standard UCP equation with any of proposed fitness functions.
There is also a another question (RQ-2), which must be answered. The results for answering this question is not as conclusive as we wanted to. For answering this question we need to study Tables 9, 11 and 13 from result section very carefully. From Table 9, can be seen that, MSE have the lowest median value as well as mean value and 3rd. quartile from the all of fitness functions. The maximum values, which can be seen in this table are caused by penalisation process of the evolution. With this in mind, we used median for comparison between fitness functions. The overall worst measurement result, measured by the median (MdMRE, MdEMRE), could be in its sensitivity to extreme values. The median is considered as an insensitive measure of centrality on data containing extreme values. Therefore, these measurements could be less suitable for the fitness functions. As can be seen in Tables 11 and 13, the MSE have a higher probability, that this fitness function built a model, which outperformed the standard UCP equation. If the standard productivity factor is used there is a 97 % probability, that MSE built a model more accurate then standard UCP equation. If the productivity factor is optimized there is a 23 % probability that MSE fitness function built a model, which outperformed standard UCP equation. The minimum from the whole study was calculated by MMRE fitness function. Nevertheless this minimum value was marked as a outlier as can be seen in Fig. 4.
The current study found that the prediction accuracy measurement, which measures the median, performs worse than those that measure the mean or total value. Surprisingly, the MMRE measurement, which has raised a lot of controversy in the effort estimation field, could be considered as an average suitable fitness function. The results also revealed that fitness functions have a reasonably influence on the calculated predictions. Analytical programming method can be seen as a viable method for effort estimation. However, this is true if and only if the UCP method is not optimized. The MSE fitness function could be seen as the best fitness function due to her statistical properties. The findings of this study have a number of important implications for future research of the using of analytical programming as an effort estimation technique. More research is required to determine the efficiency of analytical programming for this task. It would be interesting to compare Karner's model with one of the model built by analytical programming.
Alpaydın E (2014) Introduction to Machine Learning 1107:105–128. doi:10.1007/978-1-62703-748-8-7. 0904.3664v1
Atkinson K, Shepperd M (1994) Using function points to find cost analogies. In: 5th European Software Cost Modelling Meeting. Ivrea, Italy, pp 1–5
Boehm WB (1984) Software Engineering Economics. IEEE Trans Softw Eng SE 10(1):4–21. doi:10.1109/TSE.1984.5010193
Burgess CJ, Lefley M, Le M (2001) Can genetic programming improve software effort estimation? A comparative evaluation. Inf Softw Technol 43(14):863–873. doi:10.1016/S0950-5849(01)00192-6
Chavoya A, Lopez-Martin C, Meda-Campaña ME (2013) Software development effort estimation by means of genetic programming. Int J Adv Comput Sci Appl 4(11)
Chavoya A, Lopez-Martin C, Meda-Campaña ME (2013) Software development effort estimation by means of genetic programming. Int J Adv Comput Sci Appl 4
Ferrucci F, Gravino C, Oliveto R, Sarro F (2010) Genetic programming for effort estimation: an analysis of the impact of different fitness functions. In: Search Based Software Engineering (SSBSE), 2010 Second International Symposium on, vol. 25. doi:10.1109/SSBSE.2010.20
Harman M, Jones BF (2001) Search-based software engineering. Inf Softw Technol 43:833–839. doi:10.1016/S0950-5849(01)00189-6
Karner G (1993) Resource estimation for objectory projects. Object Syst SF AB:1–9
Keung JW (2008) Theoretical maximum prediction accuracy for analogy-based software cost estimation. Software Engineering Conference. In: APSEC '08. 15th Asia-Pacific, pp 495–502. doi:10.1109/APSEC.2008.43
Kitchenham BA, MacDonell SG, Pickard L, Shepperd MJ (2001) What accuracy statistics really measure. IEE Proc Softw Eng 148:81–85. doi:10.1049/ip-sen:20010506
Milicic D, Wohlin C (2004) Distribution patterns of effort estimations. In: IEEE Conference Proceedings of Euromicro 2004, Track on software process and product improvement, pp 422–429
Myrtveit TF, Stensrud E, Kitchenham B (2003) Ingunn: a simulation study of the model evaluation criterion MMRE. IEEE Trans Softw Eng 29:1–30. doi:10.1109/TSE.2003.1245300
Nassif AB, Capretz LF, Ho D (2011) Estimating software effort based on use case point model using sugeno fuzzy inference system. In: Tools with Artificial Intelligence (ICTAI), 23rd IEEE International Conference on, pp 393–398. doi:10.1109/ICTAI.2011.64
Ochodek M, Nawrocki J, Kwarciak K (2011) Simplifying effort estimation based on Use Case Points. Inf Softw Technol 53(3):200–213. doi:10.1016/j.infsof.2010.10.005
Oplatkova ZK, Senkerik R, Zelinka I, Pluhacek M (2013) Analytic programming in the task of evolutionary synthesis of a controller for high order oscillations stabilization of discrete chaotic systems. Comput Math Appl 66(2):177–189. doi:10.1016/j.camwa.2013.02.008
Rowe G, Wright G (1999) The Delphi technique as a forecasting tool: issues and analysis. Int J Forecast 15(4):353–375. doi:10.1016/S0169-2070(99)00018-7
Shepperd M, Cartwright M, Kadoda G (2000) On building prediction systems for software engineers. Empir Softw Eng 5:175–182. doi:10.1023/A:1026582314146
Silhavy R, Silhavy P, Prokopova Z (2014) Automatic complexity estimation based on requirements. In: Latest trends on systems, vol. II. Santorini, Greece
Storn R, Price K (1995) Differential evolution—a simple and efficient adaptive scheme for global optimization over continuous spaces. Technical Report TR-95-012, vol. 11, pp. 1–15. doi:10.1023/A:1008202821328. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.67.5398&rep=rep1&type=pdf
Storn R (1996) On the usage of differential evolution for function optimization. Proc North Am Fuzzy Inf Process:519–523. doi:10.1109/NAFIPS.1996.534789
Subriadi AP, Ningrum PA (2014) Critical review of the effort rate value in use case point method for estimating software development effort. J Theroretical Appl Inf Technol 59(3):735–744
Zelinka I, Davendra D, Senkerik R, Jasek R, Oplatkova Z (2011) Analytical programming—a novel approach for evolutionary synthesis of symbolic structures. In: Ethem Alpaydin. 2004. Introduction to Machine Learning (Adaptive Computation and Machine Learning). The MIT Press. Rijeka, InTech, p 584
TU carried out the use of prediction accuracy measurement studies, performed the statistical analysis and drafted the manuscript.RS and ZP suggest this study, helped with design and continuously reviewing this manuscript.VV helped to draft the manuscript. All authors read and approved the final manuscript.
This study was supported by the internal Grant of Tomas Bata University in Zlin No. IGA/CebiaTech/2015/034 funded from the resources of specific university research. We are also immensely grateful to my colleagues Ales Kuncar and Andras Chernel for their comments on the earlier version of the manuscript, although any errors are our own and should not tarnish the reputations of these esteemed persons.
Department of Computer and Comunication systems, Tomas Bata University in Zlin, Nad Stranemi 4511, Zlin, Czech Republic
Tomas Urbanek, Zdenka Prokopova, Radek Silhavy & Veronika Vesela
Tomas Urbanek
Zdenka Prokopova
Radek Silhavy
Veronika Vesela
Correspondence to Tomas Urbanek.
Urbanek, T., Prokopova, Z., Silhavy, R. et al. Prediction accuracy measurements as a fitness function for software effort estimation. SpringerPlus 4, 778 (2015). https://doi.org/10.1186/s40064-015-1555-9
Effort estimation
Use case points
Prediction accuracy measures
|
CommonCrawl
|
What can't I buy?
A certain chain store sells chocolate bars in packets of 17 and 9 only. Clearly, you could not get 8 or 25 bars. Find all quantities of bars that you cannot buy.
KenshinKenshin
$\begingroup$ this is clearly an ad for chocolate. I'll be back. $\endgroup$
– Timmerz
We can make $17 \times 8 = 136$ and any number above this by replacing a $17$ with $2 \times 9$, to increment the number by $1$. By the time we run out of $17$s to replace ($9 \times 16$), we continue with $17 \times 8 + 9 = 145$ and repeat the trick, until we arrive at $9 \times 17$, at which point we swap that with $17 \times 9$ and start replacing $17$s again.
Looking at the numbers below $136$, we can make any number $n \times 17$ and
the following $n$ integers by replacement of $17$ with $2 \times 9$,
anything $m\times9$ above this e.g.
$17$ gives us $18$ by replacement and $26$, $35$, $44$, $53$, $62$, $71$, $80$, $89$, $98$, $107$, $116$, $125$, $134$ by addition of $9$s
Adding all of these we find that the highest number we cannot make is
$127$
and there are
$63$
possible numbers below this.
The ones we cannot form are:
1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 14, 15, 16, 19, 20, 21, 22, 23, 24, 25, 28, 29, 30, 31, 32, 33, 37, 38, 39, 40, 41, 42, 46, 47, 48, 49, 50, 55, 56, 57, 58, 59, 64, 65, 66, 67, 73, 74, 75, 76, 82, 83, 84, 91, 92, 93, 100, 101, 109, 110, 118, 127
SQB
frodoskywalkerfrodoskywalker
$\begingroup$ so what quantities can you not buy :p $\endgroup$
– d'alar'cop
$\begingroup$ I know the question is not answered - I had a PEBKAC error and posted before I had finished writing. Finishing it now. $\endgroup$
– frodoskywalker
$\begingroup$ +1 anyway of course. It's essentially solved. nice job :) $\endgroup$
$\begingroup$ I can't find a simple argument why these numbers below 128 cannot be gotten and all others can. Or do I miss this? $\endgroup$
– miracle173
$\begingroup$ @miracle You can get 1) Any multiple of 9, 2) Any multiple of 17 (call it $n*17$) and 3) the next $n$ numbers after (2). Once (2) and (3) start to overlap, you can form all higher numbers. $\endgroup$
The numbers we can make are of the form $m \times 9 + n \times 17 \,| \, m, n \in \mathbb{Z}_{\ge0}$.
As we can see (and as has been said before), the largest number we can't make is $127$.
$$ \begin{array}{r|rr} \text{target} & m & n \\ \hline 0 & 0 & 0 \\ 1 & - & - \\ 2 & - & - \\ 3 & - & - \\ 4 & - & - \\ 5 & - & - \\ 6 & - & - \\ 7 & - & - \\ 8 & - & - \\ 9 & 1 & 0 \\ 10 & - & - \\ 11 & - & - \\ 12 & - & - \\ 13 & - & - \\ 14 & - & - \\ 15 & - & - \\ 16 & - & - \\ 17 & 0 & 1 \\ 18 & 2 & 0 \\ 19 & - & - \\ 20 & - & - \\ 21 & - & - \\ 22 & - & - \\ 23 & - & - \\ 24 & - & - \\ 25 & - & - \\ 26 & 1 & 1 \\ 27 & 3 & 0 \\ 28 & - & - \\ 29 & - & - \\ 30 & - & - \\ 31 & - & - \\ 32 & - & - \\ 33 & - & - \\ 34 & 0 & 2 \\ 35 & 2 & 1 \\ 36 & 4 & 0 \\ 37 & - & - \\ 38 & - & - \\ 39 & - & - \\ 40 & - & - \\ 41 & - & - \\ 42 & - & - \\ 43 & 1 & 2 \\ 44 & 3 & 1 \\ 45 & 5 & 0 \\ 46 & - & - \\ 47 & - & - \\ 48 & - & - \\ 49 & - & - \\ 50 & - & - \\ 51 & 0 & 3 \\ 52 & 2 & 2 \\ 53 & 4 & 1 \\ 54 & 6 & 0 \\ 55 & - & - \\ 56 & - & - \\ 57 & - & - \\ 58 & - & - \\ 59 & - & - \\ 60 & 1 & 3 \\ 61 & 3 & 2 \\ 62 & 5 & 1 \\ 63 & 7 & 0 \\ 64 & - & - \\ 65 & - & - \\ 66 & - & - \\ 67 & - & - \\ 68 & 0 & 4 \\ 69 & 2 & 3 \\ 70 & 4 & 2 \\ 71 & 6 & 1 \\ 72 & 8 & 0 \\ 73 & - & - \\ 74 & - & - \\ 75 & - & - \\ 76 & - & - \\ 77 & 1 & 4 \\ 78 & 3 & 3 \\ 79 & 5 & 2 \\ 80 & 7 & 1 \\ 81 & 9 & 0 \\ 82 & - & - \\ 83 & - & - \\ 84 & - & - \\ 85 & 0 & 5 \\ 86 & 2 & 4 \\ 87 & 4 & 3 \\ 88 & 6 & 2 \\ 89 & 8 & 1 \\ 90 & 10 & 0 \\ 91 & - & - \\ 92 & - & - \\ 93 & - & - \\ 94 & 1 & 5 \\ 95 & 3 & 4 \\ 96 & 5 & 3 \\ 97 & 7 & 2 \\ 98 & 9 & 1 \\ 99 & 11 & 0 \\ 100 & - & - \\ 101 & - & - \\ 102 & 0 & 6 \\ 103 & 2 & 5 \\ 104 & 4 & 4 \\ 105 & 6 & 3 \\ 106 & 8 & 2 \\ 107 & 10 & 1 \\ 108 & 12 & 0 \\ 109 & - & - \\ 110 & - & - \\ 111 & 1 & 6 \\ 112 & 3 & 5 \\ 113 & 5 & 4 \\ 114 & 7 & 3 \\ 115 & 9 & 2 \\ 116 & 11 & 1 \\ 117 & 13 & 0 \\ 118 & - & - \\ 119 & 0 & 7 \\ 120 & 2 & 6 \\ 121 & 4 & 5 \\ 122 & 6 & 4 \\ 123 & 8 & 3 \\ 124 & 10 & 2 \\ 125 & 12 & 1 \\ 126 & 14 & 0 \\ 127 & - & - \\ \hline 128 & 1 & 7 \\ 129 & 3 & 6 \\ 130 & 5 & 5 \\ 131 & 7 & 4 \\ 132 & 9 & 3 \\ 133 & 11 & 2 \\ 134 & 13 & 1 \\ 135 & 15 & 0 \\ 136 & 0 & 8 \\ 137 & 2 & 7 \\ 138 & 4 & 6 \\ 139 & 6 & 5 \\ 140 & 8 & 4 \\ 141 & 10 & 3 \\ 142 & 12 & 2 \\ 143 & 14 & 1 \\ 144 & 16 & 0 \\ 145 & 1 & 8 \\ 146 & 3 & 7 \\ 147 & 5 & 6 \\ 148 & 7 & 5 \\ 149 & 9 & 4 \\ 150 & 11 & 3 \\ 151 & 13 & 2 \\ 152 & 15 & 1 \\ 153 & 17 & 0 \\ 154 & 2 & 8 \\ 155 & 4 & 7 \\ 156 & 6 & 6 \\ 157 & 8 & 5 \\ 158 & 10 & 4 \\ 159 & 12 & 3 \\ 160 & 14 & 2 \\ 161 & 16 & 1 \\ 162 & 18 & 0 \\ 163 & 3 & 8 \\ 164 & 5 & 7 \\ 165 & 7 & 6 \\ 166 & 9 & 5 \\ 167 & 11 & 4 \\ 168 & 13 & 3 \\ 169 & 15 & 2 \\ 170 & 17 & 1 \\ 171 & 19 & 0 \\ \end{array} $$
SQBSQB
Not the answer you're looking for? Browse other questions tagged mathematics or ask your own question.
Chocolate challenge
Non-Greedy Chocolate Eaters
Hackers have created a new scheme to send passwords
The Thieves and the Gold Bars
The Thieves and the Gold Bars II: Greed and Distrust
Handbag question
|
CommonCrawl
|
How to intuitively understand formula for estimate of pooled variance when testing differences between group means?
Suppose I want to compare the difference between means of samples selected from two populations (the treatment and control). Assume both groups have normally distributed observations. Then $$Z = \frac{(\bar{X}_{t}- \bar{X}_{c})-(\mu_{t}-\mu_{c})}{\sqrt{\left(\frac{\sigma^{2}_{t}}{n_t}+ \frac{\sigma^{2}_{c}}{n_c} \right)}}$$
Suppose that $\sigma_{t}^{2}$ and $\sigma_{c}^{2}$ are unknown but can be assumed equal to $\sigma^2$. Why is the pooled estimate $S_{p}^{2}$ for $\sigma^2$ equal to $$S_{p}^{2} = \frac{S_{t}^{2}(n_{t}-1)+ S_{c}^{2}(n_{c}-1)}{[n_t+n_c-2]}$$ where $S_{t}^2$ and $S_{c}^2$ are the sample estimates of the treatment and control groups. I know this has something to do with degrees of freedom. But I never could really "grok" its definition.
In short, how do we get the pooled estimate and what are degrees of freedom intuitively?
variance mean degrees-of-freedom
Jeromy Anglim
DamienDamien
$\begingroup$ Here you go, I am also Heinlein's fan :-) $\endgroup$ – yeveee Aug 2 '11 at 5:22
There are really 2 questions here, one about pooling and one about degrees of freedom.
Let's look at degrees of freedom first. To get the concept consider if we know that $x+y+z=10$ Then $x$ can be anything we want, and $y$ can be anything we want, but once we set those 2 there is only one value that $z$ can be, so we have 2 degrees of freedom. When we calculate $S^2$ if we subtract the population mean from each $x_i$ then square and sum, then we would divide by $n$ taking the average squared difference. But we generally don't know the population mean so we subtract the sample mean as an estimate of the population mean. But subtracting the sample mean that is estimated from the same data as we are using to find $S^2$ guarentees the lowest possible sum of squares, so it will tend to be too small. But if we divide by $n-1$ instead then it is unbiased because we have taken into account that we already used the same data to compute one piece of information (the mean is just the sum divided by a constant). In regression models the degrees of freedom are equal to $n$ minus the number of parameters we estimate. Each time you estimate a parameter (mean, intercept, slope) you are spending 1 degree of freedom.
For the pooled variance function, $S^2_c$ and $S^2_t$ are already divided by $n_c-1$ and $n_t-1$, so the multiplying just gives the sums of squares, then we add the 2 sums of squares and divide by the total degrees of freedom (we subtract 2 because we estimated 2 sample means to get the sums of squares). The pooled variance is just a weighted average of the 2 variances.
Greg SnowGreg Snow
The pooled variance is a weighted average of the two independent unbiased estimators: $S^2_c$ and $S^2_t$. Why those weights and what is the relation to the degrees of freedom? Those weights are such that the weighted average is unbiased.
The degrees of freedom-
Accounting version: since you are summing differences from the mean, which always sum to zero, knowing $n-1$ of them will disclose the last. This suggests that you actually have only $n-1$ independent random variables.
Geometry version: The data can be orthogonally decomposed into two components: the mean and the distance from the mean. The mean vector spans a one dimensional linear space. It's orthogonal complement, should thus be a linear space of dimension $n-1$. So the degrees of freedom can be seen (and should!) as the dimension of $(x_i-\bar x)_{i=1}^n$, i.e., the linear space in which the distances from the mean reside.
JohnRosJohnRos
Not the answer you're looking for? Browse other questions tagged variance mean degrees-of-freedom or ask your own question.
What does pooled variance "actually" mean?
Confidence intervals over mean difference with unknown but equal variance
Degrees of freedom for a weighted average
How to combine sample means and sample variances?
Help Beginner Q: Explanation on pooled variance and when it is used
R's t.test() unequal variance degrees of freedom
Why is $\frac{\sum^n_{i=1}(X_i-\bar{X})^2}{\sigma^2}$chi-square distributed with $n-1$ degrees of freedom?
|
CommonCrawl
|
Mortality, material deprivation and urbanization: exploring the social patterns of a metropolitan area
Paula Santana1,
Claudia Costa1,
Marc Marí-Dell'Olmo2,3,4,
Mercè Gotsens2,3,4 &
Carme Borrell2,3,4,5
International Journal for Equity in Health volume 14, Article number: 55 (2015) Cite this article
Socioeconomic inequalities affecting health are of major importance in Europe. The literature enhances the role of social determinants of health, such as socioeconomic characteristics and urbanization, to achieve health equity. Yet, there is still much to know, mainly concerning the association between cause-specific mortality and several social determinants, especially in metropolitan areas.
This study aims to describe the geographical pattern of cause-specific mortality in the Lisbon Metropolitan Area (LMA), at small area level (parishes), and analyses the statistical association between mortality risk and health determinants (material deprivation and urbanization level). Fourteen causes have been selected, representing almost 60 % of total mortality between 1995 and 2008, particularly those associated with urbanization and material deprivation.
A cross-sectional ecological study was carried out. Using a hierarchical Bayesian spatial model, we estimated sex–specific smoothed Standardized Mortality Ratios (sSMR) and measured the relative risks (RR), and 95 % credible intervals, for cause-specific mortality relative to 1. urbanization level, 2. material deprivation and 3. material deprivation adjusted by urbanization.
The statistical association between mortality and material deprivation and between mortality and urbanization changes by cause of death and sex. Dementia and MN larynx, trachea, bronchus and lung are the causes of death showing higher relative risk associated with urbanization. Infectious and parasitic diseases, Chronic liver disease and Diabetes are the causes of death presenting higher relative risk associated with material deprivation. Ischemic heart disease was the only cause with a statistical association with both determinants, and MN female breast was the only without any statistical association. Urbanization level reduces the impact of material deprivation for most of the causes of death. Men face a higher impact of material deprivation and urbanization level, than women, in most cause-specific mortality, even when considering the adjusted model.
Our findings explore the specific pattern of fourteen causes of death in LMA and reveals small areas with an excess risk of mortality associated with material deprivation, thereby identifying problematic areas that could potentially benefit from public policies effecting social inequalities.
Health, and the socioeconomic inequalities affecting it, are of major importance in Europe [1], so taking action to reduce health inequalities should be a high priority at all levels of governance [2, 3]. Although Europe has a tradition of studies that analyse the association between material deprivation and increased mortality [4, 5] and other indicators of ill health [6], most of them have analysed individual data at country level [1]. Hence, their results may not be relevant for municipal policymaking [7]. Fewer studies have been able to identify small area level territories within urban areas [8], specifically in metropolitan areas, where interventions can effectively target the structural determinants of health inequalities.
In recent years, area of residence has been recognised as a social determinant of health [9, 10] and, accordingly, the use of spatial analysis of health outcomes and their predictors has increased. Likewise, the development of spatial methods has rapidly improved [11]. By analysing spatial health-related data, researchers were able to identify the association between health determinants and health outcomes at the level of the municipality [12], city [13] and also at small area level [3, 8, 14]. The small area level is considered the best one to avoid the ecological bias component (the Modifiable Areal Unit Problem) created by heterogeneity and to detect geographical patterns in mortality which would not be evident with larger geographical areas [15].
Material deprivation is one of most well-established health determinants [4, 16]: areas with higher socioeconomic deprivation present a higher mortality risk [17]. This association has already been found for Total mortality [8]; Avoidable mortality amenable to healthcare [3]; Diabetes [12]; Infectious diseases [18]; Cancer [19]; Dementia [20]; Suicide [21]; Ischemic heart disease [16]; Cerebrovascular disease [16]; Chronic liver disease [16] and Traffic injuries [22]. According to Testi and Ivaldi [23], who distinguished between material and social forms of deprivation based on Townsend's approach [5], the material index is the most suitable measure to explain variations in mortality within an urban area.
Today, the rural–urban gradient is also one of the major influential factors in spatial issues [24]. Moreover, urban areas have important health advantages, particularly in the developing world [13, 25]. However, urbanization amplifies the adverse impacts of material deprivation on mortality [26, 27]. As Diez-Roux et al. [28] point out, an important feature of urban areas is the great heterogeneity in socioeconomic circumstances and resources, resulting in enormous inequality in environmental conditions within cities. This means that the consequences of urbanization are not the same for all.
The literature contains several studies that relate urbanization and mortality: higher urbanization has been associated with Ischemic heart disease [29], Infectious disease [30], Chronic liver disease and Cirrhosis [27] and some cancers [31, 32] and lower levels of urbanization have been associated with Suicide [33], Stomach cancer [32], Diabetes [12] and Dementias [34].
Borrell et al. [8] have shown that socioeconomic inequalities in health tend to be more pronounced in more urbanized areas (where disadvantaged and poor populations are concentrated in marginalized neighbourhoods) and that urban areas have certain special characteristics which can influence the population's health and can be the targets of specific policies. Therefore, given the growth in the urban population, public health challenges must be concentrated in urban areas and policies must be adopted to this context [35].
According to Singh et al. [36], material deprivation and urbanization indices can serve as important surveillance tools for monitoring health inequalities. However the relationship between material deprivation, urban/rural status, and mortality is complex; hence, careful study is required of the way in which urban–rural differences in disease risk are heterogeneous and often context-specific [25]. In fact, material deprivation and urbanization often co-occur in the same places, which mean that it is important to study the mutual influence of these health determinants upon each other.
Some authors have already identified premature mortality inequalities within Lisbon Metropolitan Area (LMA), due to material deprivation [37]. The persistence of poverty, and social and health inequalities in the LMA, despite the general improvement in all health and social indicators [38], proceeds from previous social and political conditions that, at different levels, are also present in other metropolitan areas or cities in European countries; mainly in those that have had delayed industrialization and urbanization, like Portugal. Thus, particular attention should be given to the consequences of material deprivation on urban health at small area level in this region [39].
Studying mortality in small areas, and associating this with material deprivation and with urbanization levels, allows us to identify factors that drive inequalities, and establish how these determinants contribute to inequalities. The information yielded is critical for implementing and tailoring policies to reduce health inequalities and, considering these results, important lessons can be adduced regarding similar contextual factors (urbanization and material deprivation). The results can also be compared internationally with other metropolitan areas with similar characteristics of urbanization and material deprivation.
As far as we know, this paper is the first in Europe to use small-area data to address mortality inequalities associated with material deprivation adjusted to the urbanization level.
The aims of this paper are to describe the geographical mortality pattern in the parishes of the Lisbon Metropolitan Area (LMA) by cause of death, and to analyse the statistical association between mortality and 1. urbanization, 2. material deprivation, and 3. material deprivation adjusted by urbanization, in the period 1995–2008.
The LMA is the main metropolitan area of Portugal in which over ¼ of the Portuguese population lives. In accordance with other Southern European cities, Lisbon's population is steadily ageing, particularly within the city centre: the population aged 65 or over in the LMA increased by 43.7 % (1991: 12.8 %; 2011: 18.4 %, according to the Portuguese National Statistics Office–INE). Nowadays, the older population in the Lisbon municipality accounts for 24 % of total population (INE, 2011).
Geographically, the LMA is divided into two main areas by the River Tagus: the northern and southern banks. The centre is the city of Lisbon, surrounded by a highly urbanized urban ring (in north) and a less urbanized urban ring (in south and northern border). Between the 1970s and 90s, the population in the north urban belt has grown very fast, mainly due to migrants from other Portuguese regions and former African colonies. Yet, this growth has not always been accompanied by public services, infrastructures, land-use mix concerns, etc., with consequences that are important to study, mainly related with social exclusion and health inequities [40].
Design, source of information and indicators
This study follows an ecological design, as defined within the INEQ-CITIES project [8]. The sources of information were mortality registers (aggregated for the period 1995-2008), the 2001 census for population data and socioeconomic indicators, and the 1998 Urban Areas Classification for urbanization data, all from the INE.
The area of analysis was the parish, the lowest administrative level in Portugal. In the LMA there are 207 parishes belonging to 18 municipalities. The parish borders were stable for a long time and during the study period, decreasing the probability of misregistration of death certificates.
Based on an exploratory analysis of sixty causes of death (INEQ-CITIES list), 14 causes of death were selected, representing almost 60 % of total mortality in LMA between 1995 and 2008 (Table 1). The selection was restricted to causes of death previously associated with material deprivation or urbanization level [3, 8, 12, 16, 18–20, 22, 28, 35, 41] and for which numbers can be expected to be large enough to allow small area analysis.
Table 1 ICD Codes (for the 9th and 10th Revision) of the causes of death considered in the study
The mortality data by cause of death was aggregated for the period 1995–2008 (N = 355,363), disaggregated by age (<15; 15–24; 25–44; 45–64; > = 65), sex (total, male, female) and parish (small area) of LMA. For reasons of confidentiality and lack of information, our database includes 97.7 % of total deaths from the selected causes of death in LMA, meaning that 13,319 deaths have not been considered (e.g., age, sex and parish are not mentioned on death certificates). The study population consisted of residents of LMA in 2001, stratified by the same sex and age groups as the mortality data.
To evaluate the social and economic conditions of the area of residence, a material deprivation index was built. This is a composite indicator that takes into account three dimensions: education, employment and housing conditions. The chosen indicators (from the 2001 Census) were: 1. Illiteracy rate (population with more than 10 years that does not know how to write and read); 2. Unemployment rate (unemployment among the population between 14 and 65 years), and 3. Substandard housing rate (houses without toilet). The material deprivation index was constructed in accordance with the method used by Carstairs and Morris [4]. The variables were standardised (using the z-score method) so that each variable exerted the same influence upon the final result. This index, and the indicators used to build it, have been already applied in other studies about material deprivation in LMA [12, 40]. The material deprivation index was analysed in terciles (t1: lowest level of deprivation; t3: highest level of deprivation).
Finally, the Classification of urban areas, produced by INE, was used to determine the urbanization level. This indicator takes into account population density and urban land use to categorise the Portuguese parishes in three groups: 1. Predominantly rural area, 2. Medium urban area and 3. Predominantly urban area. For the last one, the criteria was population density higher than 500 inhabitants/km2 and more than half the territory classified as urban. Since the focus is only on the parishes of a metropolitan area, the first two have been aggregated to represent the "less urbanized" parishes and the last one representing the "most urbanized" parishes. This has been analysed as a dichotomic variable.
The mortality indicator used for this analysis is the Standardized Mortality Ratio (SMR) for total population of LMA. This variable is dependent on population size since its variance is inversely proportional to the expected values. Thus, areas with low population tend to present estimates with a high variance. When analysing aggregated data from small areas, it is important to consider two sources of variability: first, the spatial dependence between geographical areas, which means that neighbouring areas are more likely to have a similar mortality level than distant areas, according to Tobler's first law of Geography [42]; second, the non-spatial variability (random variation). In order to take into account this variability we used the hierarchical bayesian model proposed by Besag, York and Mollié obtaining smoothed SMR (sSMR) [43]. This method allows us to produce smoothed estimates, minimizing potential bias while still presenting a valid spatial pattern [3, 44].
The sSMR were estimated for each cause of death and sex with the following model:
$$ {O}_i\sim Poisson\left({E}_i{\theta}_i\right) $$
$$ \mathrm{l}og\left({\theta}_i\right) = \alpha + {S}_i + {H}_i $$
where, for each small area i, O i denoted the observed cases of deaths for a particular cause and gender in the small area i, E i was the expected number of deaths (of each cause and gender) in the small area i and θ i , the relative risk for each specific area and specific cause of death. ∝ represents the intercept, S i the spatial random effects and H i the heterogeneous (non-spatial) effects. The expected numbers of deaths in each area were calculated by indirect standardisation, using the population in 2001 (multiplied by the number of years in the study period: 14 years) and taking as reference mortality rates by gender, age (<15; 15–24; 25–44; 45–64; > = 65) and cause of death in the LMA.
Based on sSMR we measured the probability of excess risk (sSMR > 100), which should also be taken into account when evaluating the statistical evidence provided by estimates of sSMR in each small area.
The geographical distribution of sSMR, calculated through Model 1, was represented using maps of septiles: the dark blue areas have the lowest sSMR and the dark brown ones have the highest. The probability of excess risk was represented using five fixed categories: [0–0.1] (lowest probability sSMR > 100), ]0.1-0.2], ]0.2-0.8], ]0.8-0.9] and ]0.9-1.0] (highest probability sSMR > 100).
The statistical association with the contextual-level variables (material deprivation and urbanization) has been obtained through the application of an ecological regression model that introduces those indicators as explanatory variables.
To evaluate the statistical association between mortality by cause of death and urbanization (Xi) (a dichotomous variable), the regression was formulated as follows:
$$ \log \left({\uptheta}_{\mathrm{i}}\right) = {\upbeta}_1 + {\upbeta}_2{\mathrm{X}}_{\mathrm{i}} + {\mathrm{S}}_{\mathrm{i}} + {\mathrm{H}}_{\mathrm{i}} $$
Where exp(β 2 ) denotes the relative risk of mortality in the more urbanized areas with respect to the less urbanized.
To analyse the relationship between mortality and material deprivation (D i ), we applied a similar model in which material deprivation terciles were introduced as dummy variables:
D 2i = 1 if the small area i is in the second tercile group.
D 2i = 0 otherwise
D 3i = 1 if the small area i is in the third tercile group.
For this model, called "based", the regression was formulated as follows:
$$ \log \left({\theta}_i\right) = {\beta}_1 + {\beta}_2{D}_{2i}+{\beta}_3{D}_{3i} + {S}_i + {H}_i $$
where exp(β 2 ) (respectively exp(β 3 )) denotes the relative risk of mortality in the areas included in the second tercile (respectively third tercile) group with respect to the included in the first tercile deprivation group.
Finally, we estimated the statistical association between material deprivation and mortality adjusted by urbanization level. For this model, called "adjusted", the regression was formulated as follows:
$$ \log \left({\theta}_i\right) = {\beta}_1 + {\beta}_2{D}_{2i}+{\beta}_3{D}_{3i}+{\beta}_4{X}_i + {S}_i + {H}_i $$
Where exp(β 2 ) (respectively exp(β 3 )) are adjusted by urbanization level (dichotomic variable) and denotes the relative risk of mortality in the areas included in the second tercile (respectively third tercile) group with respect to those included in the first tercile deprivation group.
For these three models, the relative risk (RR) estimates were obtained based on their posterior means, along with the corresponding 95 % credible intervals (95%CI). A RR was considered significantly higher or lower than 1 if its 95%CI did not include 1. The posterior distributions were obtained with the "Integrated nested Laplace approximation" (INLA) method.
For all models, an intrinsic conditional autoregressive prior distribution (ICAR) was assigned to the spatial effect, which assumes that the expected value of each area coincides with the mean of the spatial effect of the adjacent areas and has variance of σ s 2 , while the heterogeneous effect is represented using independent normal distributions with mean 0 and variance σ h 2 [43]. A half-normal distribution with mean 0 and precision 0.0001 was assigned to the standard deviations σ s and σ h . A vague prior distribution was assigned to the parameters β1, β2 and β3 [45].
These models were developed using the INLA library (version 3.0.1) and the R statistical package (version R.2.15.2) [46].
The LMA has small areas with different levels of urbanization, material deprivation and mortality (Table 2 and Fig. 1). The geography of material deprivation reveals high levels in the southern river bank and in some parishes of the city centre and periurban areas (in red), and low material deprivation in the west and north of LMA (in green). The most urbanized parishes are in the northern bank. The statistical association between urbanization and material deprivation (chi-square test) was not found.
Table 2 Descriptive analysis of the data of the study area: quartile distribution of the number of inhabitants and deaths (by the 14 selected causes) by sex and level of urbanization, 2001
Geographic distribution of the urbanization level (based on Classification of urban areas, 1998) and material deprivation (2001) (green: lower material deprivation; red: higher material deprivation)
Table 3 presents the number of deaths and crude mortality rate by cause of death and sex. Cerebrovascular disease and Ischemic heart disease are the most common causes of death in both sexes (15.9 % and 12.6 % of total deaths in LMA, respectively): among men, is Ischemic heart disease; among women is Cerebrovascular disease. In the majority of causes of death, the number is higher in men, but for Diabetes mellitus, Dementia and Cerebrovascular disease, women have higher values.
Table 3 Descriptive analysis: total number of Deaths and crude death rates by sex and cause of death in Lisbon Metropolitan Area (aggregated period, 1995–2008)
The geographical distribution shows that the highest sSMR due to the total selected causes of death, for both genders, are found in some urban areas, located especially in Lisbon city centre. From here, excess mortality risk continues southwards to the southern river bank. The highest deficit in mortality risk for both genders is evident in the areas surrounding the Lisbon city centre, where low mortality risk is concentrated (Fig. 2).
Geographic distribution of the smoothed Standardized Mortality Ratios (sSMR) for the total of the 14 selected causes of death in LMA (blue: low sSMR; brown: high sSMR) and the probability that the sSMR is higher than 100 (green: low risk; red: high risk). Note: the maps for each cause of death are present in the Additional file
Most of the causes of death follow this centre (highly urbanized small areas) to periphery (lower urbanized small areas) configuration. This is the case for Infectious and parasitic disease; MN of the colon, rectal, anus and anal canal; MN larynx, trachea, bronchus and lung; MN of the female breast; Chronic liver disease and Dementia. In the opposite direction we found Transport injuries and, for men, MN of the stomach and Suicide and intentional self-harm. Additionally, some causes of death present a North/South pattern: MN stomach; Diabetes mellitus and Symptoms, signs and abnormal clinical and laboratory findings show a southern river bank with high risk of mortality (see Additional file 1 ).
Table 4 shows the results of the ecological regression that identified the statistical association between the urbanization and cause-specific mortality: the more urbanized areas present a higher risk (1.35–95 % CI: 1.18-1.53) than the less urbanized ones, especially for men (1.50–95 % CI: 1.26-1.80). The exception occurs in the case of Suicide and intentional self-harm and Transport injuries for men (Table 4). Dementia is the cause of death that presents higher mortality risk for the population living in more urbanized areas (1.94–95 % CI: 1.20-3.01), especially for men (2.31–95 % CI: 1.19-4.30). Although the statistical association in most causes of death is similar between sex, there are some cases where it is only significant for men. That is the case for MN larynx, trachea, bronchus and lung; Infectious and parasitic disease, Diabetes mellitus and Transport injuries.
Table 4 Relative Risk (RR) and 95 % credible intervals between urbanization (less urbanized parishes compared with more urbanized parishes) and mortality by cause of death in the Lisbon Metropolitan Area (1995–2008)
Table 5 presents the results of the ecological regression that identified the statistical association between the index of material deprivation, in terciles, and cause-specific mortality before and after adjustment by urbanization level. Most causes of death show significant association between cause-specific mortality and material deprivation, mainly for total deaths and for men: more deprived areas have a higher risk than the less deprived ones. The results of both models are very similar: Infectious and parasitic disease; Chronic liver disease; Diabetes mellitus and MN stomach are the main causes of death associated with deprivation, disregarding sex. Yet, according to the base model, people that live in the most deprived tercile have an 18 % higher risk (95 % IC: 1.06-1.32) of dying from one of the fourteen selected causes than the population living in the lowest deprived tercile. With the adjusted model the relative risk is 21 % higher (95 % IC: 1.09-1.35). This reveals that urbanization reduces the effect of material deprivation on mortality in 3 %. For men the figure is 5 %.
Table 5 Relative Risk (RR) and 95 % credible intervals between material deprivation (the 2nd and 3rdtercile (most deprived) compared with the 1sttercile (less deprived)) and mortality by cause of death in Lisbon Metropolitan Area (1995–2008)
Figure 3 and Table 6 presents the causes of deaths that show statistical association with deprivation and urbanization. The only cause of death presenting a statistical association with both health determinants for men and women is Ischemic heart disease and, in addition, the total studied causes. The statistical association with both health determinants was found for men on Infectious and parasitic disease, MN larynx, trachea, bronchus and lung, Diabetes mellitus, Transport injuries and Suicide and intentional self-harm. The causes of death that do not show any statistical association with material deprivation and urbanization level are MN female breast, MN larynx and Symptoms, signs and abnormal findings, all for women (Table 6).
Association between cause-specific mortality by sex and material deprivation (T3: highest material deprivation) and urbanization level. Note: When the associations have only been found for one gender, there is an indication about it: M = only found for men; W = only found for women)
Table 6 Causes of death showing (or not) a significant association with urbanization level and/or material deprivation (most deprived compared with less deprived)
Our results show that: 1. There is a similar geographical pattern of material deprivation and risk of mortality 2. There is a statistical association between mortality and material deprivation, mainly for Infectious and parasitic diseases, Chronic liver disease and Diabetes; 3. There is a statistical association between mortality and urbanization, mainly for Dementia and MN larynx, trachea, bronchus and lung; 4. The urbanization level reduces the impact of material deprivation on mortality for most of the causes; and 5. Socioeconomic inequalities in mortality associated with urbanization level and material deprivation were more pronounced for men than for women.
Firstly, our results indicate that there is substantial intra-urban variation in risk by cause of death, presenting two geographic patterns of mortality across the LMA: city centre versus periphery, and northern river bank versus southern river bank. These two patterns are due to the degree of urbanization, older population rate (particularly in Lisbon city centre) and social and economic contrast between the two river banks: the northern municipalities near the city of Lisbon have experienced a long-term urbanization and suburbanization process, and have the capacity to attract investment and highly-qualified services and human resources; the southern municipalities, on the other hand, have low levels of urbanization and higher unemployment and unqualified workers [47]. The geography of material deprivation also reveals high levels on the southern river bank. The city centre shows high and low levels of material deprivation, which could be one of the reasons for heterogeneity in the city centre (high and low levels of risk of mortality by 14 causes of death). The city centre versus periphery has also been found by other authors in Europe [3, 8]. A similar pattern of intra-urban variability and area effects on mortality, indicating unequal chances of health between different areas, was also revealed by Diez-Roux [28] for Buenos Aires.
Secondly, there is a statistical association between mortality and material deprivation and between mortality and urbanization. The main causes of death in LMA (representing 40.1 % of total mortality) are associated with both material deprivation and/or urbanization level. Nevertheless, some causes show a statistical association with only one health determinant and for one gender. For instance, Ischemic heart disease was the only cause with statistical association with both determinants for both genders.
The association we found between cause-specific mortality and urbanization level confirms the results of other authors [12, 27, 29, 31, 32, 34, 36]. However, we did not find association for MN female breast and Chronic liver disease as other authors [14, 35]. Moreover, in contrast to other authors we found that Dementia, Diabetes mellitus and Stomach cancer have a statistical association with urbanization level. Suicide and intentional self-harm and Transport injuries are the causes of death that show a reverse association with urbanization (higher in less urbanized areas). For Suicide, other authors achieved the same evidence for Portugal [48, 49] and other countries [33]. This may be related with social and economic factors, namely social isolation, stigma towards mental disorders (especially in men) and easy access to highly toxic pesticides [21, 33]. Regarding association between cause specific mortality and material deprivation, as in other studies we also found that mortality increases alongside material deprivation [3, 17, 26, 50, 51]. However, in contrast with other authors, we did not find an association between material deprivation and Dementia. Previous studies related with infectious causes of death (Tuberculosis and AIDS) have already indicated high mortality rates in LMA [52, 53]. Compared with other authors [3, 14, 35] that analysed the association between causes of death and material deprivation in European cities, including the LMA, we also found a clearer association between both. Nevertheless, there are some differences: 1. Mari Dell'Olmo et al. [35] found an inversely significant association between material deprivation and MN female breast, while we did not found any; 2. In our study we did not find an association for Suicide and intentional self-harm for women, as Gotsens et al. [14] and 3. we found a significant association for Ischemic heart disease that Marí-Dell'Olmo et al. [35] only found for women.
Thirdly, urbanization level has the ability to reduce the association of material deprivation with mortality. Some authors argue that urbanization level may be a confounding variable in the association between material deprivation and mortality [26]; others already state (for chronic liver disease and cirrhosis) that the effect is not significant enough to change the association [27]. In our study, although most of the causes of death show a higher (although slighter) relative risk with the adjusted model, especially for men, the causes of death that present a significant statistical association in the based and adjusted model are the same. In the adjusted model, the total causes of death only show a 3 % higher relative risk. For men is 5 %. Infectious and parasitic disease and MN Larynx, trachea, bronchus and lung are the causes of death that reveal higher discrepancy between the based and adjusted models. In fact, as higher the relative risk that we found between mortality and urbanization, the greater the difference between the based and adjusted models. Nevertheless, Dementia was the cause of death showing the highest relative risk for urbanization but in the adjusted model this cause of death continued to present no statistical association with material deprivation. Unlike Dolk et al. [26], we conclude that mortality has a stronger relationship with material deprivation, and the putative excess risk due to urbanization within metropolitan areas is small.
Finally, socioeconomic inequalities in mortality were more pronounced for men than for women with both models, for association with urbanization level and material deprivation. Besides, there are more causes of death with a significant statistical association for men than for women. These sex inequalities have also been described in other studies [3, 8, 22, 27, 35]. In the based model (association between mortality and material deprivation, without adjustment for urbanization) women only have higher statistical association for Diabetes mellitus, Ischemic heart disease and Chronic liver disease. The same occurs with the adjusted model (association between mortality and material deprivation, adjusted for urbanization). Furthermore, the statistical association with both health determinants was only found for Ischemic heart disease and, in men, for Infectious and parasitic disease, MN larynx, trachea, bronchus and lung, Diabetes mellitus, Transport injuries and Suicide and intentional self-harm. This shows us that women are not so influenced by material deprivation and urbanization as men.
As other authors claim [54], people with lower socioeconomic status are more likely to live in metropolitan areas that are more detrimental to health [52, 53]. Improving the socioeconomic determinants of health in those neighbourhoods is crucial to improve the health of the population and to reduce inequalities, because interventions have the greatest potential impact, as stated in the "health impact pyramid" [9, 55]. The conclusions highlight that parishes should be targeted by interventions designed to tackle health inequalities [9]. They also reveal the need for considering the urban territory as a diverse and complex system where health determinants must be analysed through a systematic approach that requires the articulation of mediating mechanisms and analysis of confounding variables [56], such as urbanization level [26].
As far as we know, this paper is the first one that aimed at measuring and identifying the association between material deprivation, urbanization and mortality within a metropolitan area. Further, it is the pioneer research in Portugal that uses small area data to present mortality inequalities.
However, there are a number of limitations which may impact on the findings presented. First of all, mortality due to Symptoms, signs and abnormal clinical and laboratory findings cause of death is high, representing 5.8 % of total mortality. As a consequence, the other causes here studied may be under-represented, especially Suicide, Diabetes mellitus [57] and Cancer. Second, due to statistical confidentiality, the National Statistics Office only gave access to aggregated mortality data from fourteen years. This time-aggregation was imposed to have access to space-disaggregation data and did not allow to apply time-series cross-sectional analysis. Third, cause-specific mortality maps can only be used to indicate potential problems in material deprivation and urbanization level at small area level, which then have to be studied with more specific information and better local data on the relation between health determinants and health outcomes. The fourth limitation is related with population mobility. As we only have access to the deceased person's last place of residence, we do not know how long s/he had been living there and how long s/he had been exposed to material deprivation. Furthermore, material deprivation is defined in the period 1995–2008 in the same way as in 2001. However, there were changes over these fourteen years in unemployment and in the number of substandard housing, although the geographical pattern had not changed. Fifth, we were not able to explore if there was an interaction effect between urbanization and material deprivation. Although statistically this could be done, the small number of less urbanized areas does not give us enough sample power to estimate these interactions. Finally, in terms of methodology, there are two main issues: (i) the standardization of mortality data took into account a structure of four age groups, which does not entirely remove the confounding effect of age; and (ii) the existence of statistical associations between the characteristics of place of residence and mortality patterns may be carefully interpreted in terms of causality [58].
Our findings can extend current knowledge by showing spatial patterns of cause-specific mortality in the LMA, identifying small areas with an excess risk of mortality associated with material deprivation and thereby pointing at problematic areas that could potentially benefit from public policies addressing specific causes of death and the effect of social inequalities. These results highlight the need to implement effective policies to reduce inequalities, namely through the intervention of government institutions (local and regional) on specific areas within the metropolitan area [59, 60]. Physical and social environments in neighbourhoods can be overtly hazardous. For instance, evidence about local risk factors (unemployment, illiteracy and poor housing conditions) associated with the Infectious and parasitic diseases, Chronic liver disease and Diabetes mellitus, within LMA, will potentially support the development of local interventions addressing those social and material conditions. Local governments are in a better position to tackle some of these health determinants, by implementing social programmes and built environment interventions aiming to reduce poverty and to improve constructed features that encourage healthy behaviours. Policy measures tackling unemployment and poverty include strategies reducing supply-side unemployment (e.g., education and training schemes, self-employment assistance), the number of families at risk of poverty (e.g., social benefits for low-income individuals and families, local council taxes reduction, affordable housing, access of disadvantaged population groups to health services, lifelong learning actions). Interventions targeting the built environment, urban design and planning in economically disadvantaged small-areas includes various interventions related with physical surroundings (e.g., buildings, green urban spaces, schools, road systems and other infrastructures), housing conditions (e.g., rehousing, refurbishment and community regeneration) and food environment (e.g., increasing the availability of healthy food choices, activities to encourage families to purchase healthier food options) [40].
As so, our results must be transferred to the local stakeholders, especially from sectors such as urban planning, culture, leisure, education, environment, social services and housing, due to their ability to exacerbate or reduce intra-urban health inequalities [61, 62].
ICD9:
International Statistical Classification of Diseases and Related Health Problems 9th Revision
ICD10:
International Statistical Classification of Diseases and Related Health Problems 10th Revision
INE:
Portuguese National Statistics Office
LMA:
Lisbon Metropolitan Area
MN:
Malignant neoplasm
relative risk
sSMR:
smoothed Standardized Mortality Ratio
Mackenbach JP, Stirbu I, Roskam A-JJR, Schaap MM, Menvielle G, Leinsalu M, et al. Socioeconomic inequalities in health in 22 European countries. N Engl J Med. 2008;358:2468–81.
Diez E, Morrison J, Pons-Vigués M, Borrell C, Corman D, Burström B, et al. Municipal interventions against inequalities in health: The view of their managers. Scand J Public Health. 2014;42(6):476–87. 1403494814529850.
Hoffmann R, Borsboom G, Saez M, Marí-Dell'Olmo M, Burström B, Corman D, et al. Social differences in avoidable mortality between small areas of 15 European cities: an ecological study. Int J Health Geogr. 2014;13:8.
Carstairs V, Morris R. Deprivation and health in Scotland. Health Bull (Raleigh). 1990;48:162–75.
Townsend P. Deprivation. J Soc Policy. 1987;16:125–46.
Shaw M, Gordon D, Dorling D, Davey-Smith G. The Widening Gap: Health Inequalities and Policy in Britain. 1999.
Corburn J, Cohen AK. Why we need urban health equity indicators: Integrating science, policy, and community. PLoS Med. 2012;9, e1001285.
Borrell C, Marí-Dell'Olmo M, Palència L, Gotsens M, Burström B, Domínguez-Berjón F, et al. Socioeconomic inequalities in mortality in 16 European cities. Scand J Public Health. 2014;42(3):245–54.
Diez Roux AV. Investigating neighborhood and area effects on health. Am J Public Health. 2001;91:1783–9.
Marmot M, Friel S, Bell R, Houweling TTAJ, Taylor S, Hlt CSD. Closing the gap in a generation: health equity through action on the social determinants of health. Lancet. 2008;372:1661–9.
Auchincloss AH, Gebreab SY, Mair C, Diez Roux AV. A review of spatial methods in epidemiology, 2000–2010. Annu Rev Public Health. 2012;33:107–22.
Santana P, Costa C, Loureiro A, Raposo J, Boavida JM. Geografias da diabetes em Portugal. como as condições do contexto influenciam o risco de morrer. Acta Med Port. 2014;27:309–17.
Vlahov D, Freudenberg N, Proietti F, Ompad D, Quinn A, Nandi V, et al. Urban as a determinant of health. J Urban Heal. 2007;84(3 Suppl):i16–26.
Gotsens M, Marí-Dell'Olmo M, Pérez K, Palência L, Martinez-Beneito M-A, Rodríguez-Sanz M, et al. Socioeconomic inequalities in injury mortality in small areas of 15 European cities. Health Place. 2013;24:165–72.
Richardson S, Thomson A, Best N, Elliott P. Interpreting posterior relative risk estimates in disease-mapping studies. Environ Health Perspect. 2004;112:1016–25.
Benach J, Yasui Y, Borrell C, Sáez M, Pasarin MI. Material deprivation and leading causes of death by gender: evidence from a nationwide small area study. J Epidemiol Community Health. 2001;55:239–45.
Fukuda Y, Nakamura K, Takano T. Municipal socioeconomic status and mortality in Japan: sex and age differences, and trends in 1973–1998. Soc Sci Med. 2004;59:2435–45.
WHO. Removing obstacles to healthy development : report on infectious diseases. Geneva: WHO; 1999.
Saurina C, Saez M, Marcos-Gragera R, Barceló MA, Renart G, Martos C. Effects of deprivation on the geographical variability of larynx cancer incidence in men, Girona (Spain) 1994–2004. Cancer Epidemiol. 2010;34:109–15.
Scazufca M, Menezes PR, Araya R, Di Rienzo VD, Almeida OP, Gunnell D, et al. Risk factors across the life course and dementia in a Brazilian population: results from the Sao Paulo Ageing & Health Study (SPAH). Int J Epidemiol. 2008;37:879–90.
Stark C, Hopkins P, Gibbs D, Belbin A, Hay A. Population density and suicide in Scotland. Rural Remote Health. 2007;7:672.
Gotsens M, Mari-Dell'Olmo M, Martinez-Beneito MA, Perez K, Pasarin MI, Daponte A, et al. Socio-economic inequalities in mortality due to injuries in small areas of ten cities in Spain (MEDEA Project). Accid Anal Prev. 2011;43:1802–10.
Testi A, Ivaldi E. Material versus social deprivation and health: a case study of an urban area. Eur J Health Econ. 2009;10:323–8.
Rey G, Jougla E, Fouillet A, Hémon D. Ecological association between a deprivation index and mortality in France over the period 1997–2001: variations with spatial scale, degree of urbanicity, age, gender and cause of death. BMC Public Health. 2009;9:33.
Leon DA. Cities, urbanization and health. Int J Epidemiol. 2008;37:4–8.
Dolk H, Mertens B, Kleinschmidt I, Walls P, Shaddick G, Elliott P. A standardisation approach to the control of socioeconomic confounding in small area studies of environment and health. J Epidemiol Community Health. 1995;49 Suppl 2:S9–14.
Erskine S, Maheswaran R, Pearson T, Gleeson D. Socioeconomic deprivation, urban–rural location and alcohol-related mortality in England and Wales. BMC Public Health. 2010;10:99.
Diez Roux AV, Green Franklin T, Alazraqui M, Spinelli H. Intraurban variations in adult mortality in a large Latin American city. J Urban Health. 2007;84:319–33.
Yusuf S, Reddy S, Ounpuu S, Anand S. Global burden of cardiovascular diseases: Part I: General considerations, the epidemiologic transition, risk factors, and impact of urbanization. Circulation. 2001;104:2746–53.
Hay SI, Guerra CA, Tatem AJ, Atkinson PM, Snow RW. Urbanization, malaria transmission and disease burden in Africa. Nat Rev Microbiol. 2005;3:81–90.
Greenberg MR. Urbanization and cancer: changing mortality patterns? Int Reg Sci Rev. 1983;8:127–45.
Bidoli E, Franceschi S, Dal Maso L, Guarneri S, Barbone F. Cancer mortality by urbanization and altitude in a limited area in Northeastern Italy. Rev Epidemiol Sante Publique. 1993;41:374–82.
Pesonen TM, Hintikka J, Karkola KO, Saarinen PI, Antikainen M, Lehtonen J. Male suicide mortality in eastern Finland–urban–rural changes during a 10-year period between 1988 and 1997. Scand J Public Health. 2001;29:189–93.
Arslantaş D, Ozbabalik D, Metintaş S, Ozkan S, Kalyoncu C, Ozdemir G, et al. Prevalence of dementia and associated risk factors in Middle Anatolia, Turkey. J Clin Neurosci. 2009;16:1455–9.
Marí-Dell'Olmo M, Gotsens M, Palència L, Burström B, Corman D, Costa G, et al. Socioeconomic inequalities in cause-specific mortality in fifteen European Cities. J Epidemiol Community Health. 2015;69(5):432–41.
Singh GK, Azuine RE, Siahpush M, Kogan MD. All-cause and cause-specific mortality among US youth: socioeconomic and rural–urban disparities and international patterns. J Urban Health. 2013;90:388–405.
Nogueira H, Santana P. Geographies of Health and Deprivation: Relationship between them. In: Palagiano C, De Santis G, editors. Geogr dell'Alimentazione. Perugia: Edizioni Rux; 2005. p. 539–46.
Santana P. Poverty, social exclusion and health in Portugal. Soc Sci Med. 2002;55:33–45.
Borrell C, Pasarín MI. Inequalities in health and urban areas. Gac Sanit. 2004;18:1–4.
Santana P, Santos R, Nogueira H. The link between local environment and obesity: A multilevel analysis in the Lisbon Metropolitan Area, Portugal. Soc Sci Med. 2009;68:601–9.
Chang S-S, Sterne JAC, Wheeler BW, Lu T-H, Lin J-J, Gunnell D. Geography of suicide in Taiwan: spatial patterning and socioeconomic correlates. Health Place. 2011;17:641–50.
Tobler WR. A computer movie simulating urban growth in the detroit region. Econ Geogr. 1970;46:234.
Besag J, York J, Mollié A. Bayesian image restoration, with two applications in spatial statistics. Ann Inst Stat Math. 1991;43:1–20.
Graham P. Intelligent smoothing using hierarchical Bayesian models. Epidemiology. 2008;19:493–5.
Gelman A. Prior distributions for variance parameters in hierarchical models. 2006.
Rue H, Martino S, Chopin N. Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations. J R Stat Soc Ser B Stat Methodol. 2009;71:319–92.
Fonseca ML, McGarrigle J, Esteves A, Malheiros J. Lisbon - City Report. Lisbon. 2008.
Santana P, Costa C, Cardoso G,Loureiro A, Ferrão J. Suicide in Portugal: spatial determinants in a context of economic crisis. Health & Place; 2015 (accepted).
Gusmão R, Quintão S. Suicide and death resulting from events of undetermined intent register in Portugal. Revisiting "The truth about suicide", 20 years later. Dir Gen Heal J. 2013;1:80–95.
McLoone P, Boddy FA. Deprivation and mortality in Scotland, 1981 and 1991. BMJ. 1994;309:1465–70.
Ecob R, Jones K. Mortality variations in England and Wales between types of place: an analysis of the ONS longitudinal study. Office of National Statistics Soc Sci Med. 1998;47:2055–66.
Couceiro L, Santana P, Nunes C. Pulmonary tuberculosis and risk factors in Portugal: a spatial analysis. Int J Tuberc Lung Dis. 2011;15:1445–54.
Williamson LM, Rosato M, Teyhan A, Santana P, Harding S. AIDS mortality in African migrants living in Portugal: evidence of large social inequalities. Sex Transm Infect. 2009;85:427–31.
Marmot M, Allen J, Bell R, Bloomer E, Goldblatt P. WHO European review of social determinants of health and the health divide. Lancet. 2012;380:1011–29.
Frieden TR. A framework for public health action: the health impact pyramid. Am J Public Health. 2010;100:590–5.
Diez Roux AV. Conceptual approaches to the study of health disparities. Annu Rev Public Health. 2012;33:41–58.
José Manuel B, Pereira M, Ayala M. A mortalidade por diabetes em Portugal. Acta Med Port. 2013;26(4):315–7.
Jokela M. Are neighborhood health associations causal? A 10-year prospective cohort study with repeated measurements. Am J Epidemiol. 2014;180:776–84.
Borrell C, Pons-Vigués M, Morrison J, Díez E. Factors and processes influencing health inequalities in urban areas. J Epidemiol Community Health. 2013;67:389–91.
Pons-Vigués M, Diez È, Morrison J, Salas-Nicás S, Hoffmann R, Burstrom B, et al. Social and health policies or interventions to tackle health inequalities in European cities: a scoping review. BMC Public Health. 2014;14:198.
Rydin Y, Bleahu A, Davies M, Dávila JD, Friel S, De Grandis G, et al. Shaping cities for health: Complexity and the planning of urban environments in the 21st century. Lancet. 2012;379:2079–108.
Galea S, Freudenberg N, Vlahov D. Cities and population health. Soc Sci Med. 2005;60:1017–33.
The authors would like to thank to Adriana Loureiro for helping to collect and manage the mortality data and Karen Bennett for the language review. This research was partially supported by two projects: INEQ-CITIES, project funded by the Executive Agency for Health and Consumers (Commission of the European Union), project n°2008 12 13 and SMAILE–Study on Mental Health. Assessment of the Impact of Local and Economic conditioners (PTDC/ATP-GEO/4101/2012) project funded by FEDER funds through the Operational Competitiveness Programme–COMPETE and National funds through the Foundation for Science and Technology (FCT).
INE - Portugal (2003) Census 2001, Lisbon: INE.
INE - Portugal. Mortality data for 1995–2008. National Statistics Institute. Data not published.
INE - Portugal (1998). Classification of urban areas. Lisbon: INE.
Departamento de Geografia, Centro de Estudos de Geografia e Ordenamento do Território, Universidade de Coimbra, Colégio S. Jerónimo, Largo D. Dinis, 3000-043, Coimbra, Portugal
Paula Santana
& Claudia Costa
CIBER Epidemiología y Salud Pública (CIBERESP), 3-5, Pabellón 11. Planta 0, Monforte de Lemos, 28029, Madrid, Spain
Marc Marí-Dell'Olmo
, Mercè Gotsens
& Carme Borrell
Agència de Salut Pública de Barcelona, Plaça Lesseps, 1, 08023, Barcelona, Spain
Institut d'Investigació Biomèdica (IIB Sant Pau), Sant Antoni Maria Claret, 167, 08025, Barcelona, Spain
Universitat Pompeu Fabra, Doctor Aiguader, 80, 08003, Barcelona, Spain
Carme Borrell
Search for Paula Santana in:
Search for Claudia Costa in:
Search for Marc Marí-Dell'Olmo in:
Search for Mercè Gotsens in:
Search for Carme Borrell in:
Correspondence to Paula Santana.
PS is responsible for the concept and design of the manuscript, participated in the interpretation of data and drafted the manuscript. CC made the analysis and interpretation of data and performed the statistical analysis. MM and MG participated in the statistical analysis and helped to draft the manuscript. CB gave final approval of the version to be published and ensured that questions related to the accuracy or integrity of the work is appropriately investigated and resolved. All authors read and approved the final manuscript.
All the authors are researchers but from different areas of expertise. PS and CC are both geographers working for long in the field of health geography. MM and MG are both statisticians. CB is Doctor in Medicine, specialized in Epidemiology.
Additional file
Maps of smoothed Standardized Mortality Ratios(sSMR) for specific cause of death in LMA and the probability that the sSMR is higher than 100 Description of data: Figures shows mortality maps for specific causes of death in the Lisbon Metropolitan Area for men and women separately. The colours represent smoothed Standardized Mortality Ratios (sSMR): the dark blue areas have the lowest sSMR and the dark brown ones have the highest. Next to each map showing the level of mortality for each small area, there is a map giving the probability that the shown sSMRs are above 100. This is the credibility level and represents the Bayesian correspondent to confidence intervals. On this credibility map, red indicates a probability of 90-100 % that an sSMR is higher than 1 and green colour indicates with the same probability that it is lower than 1.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Santana, P., Costa, C., Marí-Dell'Olmo, M. et al. Mortality, material deprivation and urbanization: exploring the social patterns of a metropolitan area. Int J Equity Health 14, 55 (2015). https://doi.org/10.1186/s12939-015-0182-y
Material Deprivation
Bayesian Model
Social/Spatial determinants
|
CommonCrawl
|
Help us improve our products. Sign up to take part.
A Nature Research Journal
Search E-alert Submit My Account Login
Article | Open | Published: 21 February 2018
Microsecond dark-exciton valley polarization memory in two-dimensional heterostructures
Chongyun Jiang1,
Weigao Xu ORCID: orcid.org/0000-0002-3014-756X1,
Abdullah Rasmita1,
Zumeng Huang1,
Ke Li ORCID: orcid.org/0000-0003-1329-91391,
Qihua Xiong1,2,3 &
Wei-bo Gao1,3,4
Nature Communicationsvolume 9, Article number: 753 (2018) | Download Citation
Electronic properties and materials
Optical physics
Two-dimensional materials
Transition metal dichalcogenides have valley degree of freedom, which features optical selection rule and spin-valley locking, making them promising for valleytronics devices and quantum computation. For either application, a long valley polarization lifetime is crucial. Previous results showed that it is around picosecond in monolayer excitons, nanosecond for local excitons and tens of nanosecond for interlayer excitons. Here we show that the dark excitons in two-dimensional heterostructures provide a microsecond valley polarization memory thanks to the magnetic field induced suppression of valley mixing. The lifetime of the dark excitons shows magnetic field and temperature dependence. The long lifetime and valley polarization lifetime of the dark exciton in two-dimensional heterostructures make them promising for long-distance exciton transport and macroscopic quantum state generations.
Reassembled layered van der Waals heterostructures have revealed new phenomena beyond single material layers1. In particular, when two different monolayer transition metal dichalcogenides (TMDs) are properly aligned, the electrons can be confined in one layer while the holes are confined in the other layer1,2,3,4,5,6,7,8. Because the electron and hole wavefunctions in the two layers only have small overlap, excitons can have a longer lifetime of several nanoseconds, as compared to around picoseconds for excitons in monolayer. Such exciton is known as indirect exciton or interlayer exciton2,8. A long exciton lifetime is crucial for the generation of high-temperature macroscopically ordered exciton state, which forms the basis of a series of fundamental physics phenomena such as superfluidity9 and Bose–Einstein condensation10,11,12. Moreover, longer survival of excitons means longer distance of exciton transport, which is useful for excitonic devices13,14.
In another perspective, similar to monolayer TMDs, the properly aligned two-dimensional (2D) heterostructure has indirect exciton with valley degree of freedom. Valleys (K and K′) are located at the band edges in the corners of the hexagonal Brillouin zone15,16. The spins show opposite signs in the two valleys at the same energy which corresponds to spin-valley locking15,16. Moreover, these two valleys have opposite Berry curvature leading to different optical selection rules in each valley. By using a circularly polarized optical pumping and observing the locking between the output photon chirality and the valleys, the phenomena of valley polarization has been observed17,18,19,20,21. These unique properties put the valley degree of freedom as a possible candidate for opto-electronics and quantum computation with 2D material22,23,24. Leading to its application, a long valley polarization lifetime is a prerequisite and extensive efforts have been put towards particles and quasi-particles with longer lifetime in these ultra-thin systems. Previous lifetime measurement reveals that direct exciton lifetime is in the order of picosecond25,26,27, limiting its application to some extent. Localized excitons can have a lifetime of around nanosecond28,29 and indirect exciton can have ~100 ns lifetime30,31 and tens of nanosecond valley polarization lifetime8. We note that recent work also shows single charge carrier can have microsecond lifetime32. On the other hand, experimental evidence shows the existence of dark excitons, lying tens of millielectronvolts below the bright exciton in WSe233. Their decay time is measured to be nanoseconds in monolayer TMD34.
Here we report that, the dark excitons in 2D heterostructures can survive for microsecond timescale. With magnetic field suppressed valley mixing, they serve as a microsecond valley polarization memory for indirect excitons. This is two orders longer than the case without an applied magnetic field.
Experimental observation of interlayer excitons
The schematics and the optical spectroscopy result of the MoSe2/WSe2 heterostructure on SiO2/Si substrate are shown in Fig. 1. In such heterostructures, electrons tend to go to the conduction band of MoSe2 and holes are confined in the valence band of WSe2, forming the indirect excitons (Fig. 1a). Our samples are prepared via a mechanical exfoliation and aligned-transfer method7. In this sample, the MoSe2 monolayer is stacked on top of the WSe2 monolayer in AA-stacking style (see Supplementary Note 4 and Supplementary Fig. 5 for the second harmonic generation experiment). The detail of the sample preparation can be found in the Methods section. A fluorescence image of the heterostructure was taken with a color camera under white light excitation (Fig. 1d). It is found that the heterostructure consists of two areas: dark area with low intensity luminescence (labeled as H1) and bright area with high intensity luminescence (labeled as H2). The photoluminescence (PL) of these two areas as well as the MoSe2 region under 633 nm continuous-wave (CW) laser excitation is shown in Fig. 1e. As can be seen from this figure, the interlayer exciton emission of ~1.34 eV emerges for the dark area H2. The interlayer exciton emission intensity is comparable to the intralayer exciton and the trion emission of MoSe2. The interlayer emission is missing for the H1 region where higher intensity of intralayer emission is observed. This can be attributed to the weak coupling between the two layers in this H1 region7. In the following measurements, we focused on the interlayer exciton emission in H2 area. An 850-nm long pass filter is used in the PL collection to filter out the contribution of other emission type.
Sample characterization. a MoSe2 and WSe2 form a 2D heterostructures, where electrons are confined in one layer and holes are confined in the other layer. b Schematic of the interlayer exciton and dark exciton. The interlayer excitons are illustrated as solid black ellipses. The dark excitons are represented by the dashed ellipses. Red (blue) curves denote spin-up (spin-down) in the conduction and valence bands while the gray arrowed curves denote the dark exciton valley scattering. c Optical microscope image of the MoSe2/WSe2 heterostructure. Blue dashed line shows the area of MoSe2. White dashed lines show the region of heterostructure, which is separated into two areas labeled as H1 and H2 in d. d Fluorescence image taken with a color camera under white light excitation. It shows a bright (H1) and dark (H2) state in the two different place of heterostructure. e Photoluminescence in the monolayer MoSe2, heterostructure H1 region, and heterostructure H2 region under a 633 nm laser excitation. Peaks on MoSe2 are attributed to exciton (XMo) and trion (TMo) of MoSe2. In H1 area, another two weak peaks appear and are labeled as exciton (XW) and trion (TW) from WSe2. In H2 area, another peak around 1.34 eV emerges, which is attributed to interlayer exciton and labeled as Int
Valley polarization with CW laser
We firstly carried out the measurement of valley polarization with a CW pump laser. The results are shown in Fig. 2. In order to increase the count of the interlayer exciton, an excitation laser with wavelength of 1.708 eV is used. This corresponds to the resonant excitation of the WSe2 charged exciton. The polarization states of the excitation and detection are set to the circular polarization σ+ or σ−, and the degree of circular polarization is extracted from these four polarization combinations. Please note that the emission polarization will have a small distortion away from circular polarization due to the existence of the Moire pattern35,36,37,38. However, circular polarization still acts as a good approximation for studying the dynamics of valley polarization here. The PL emission of different configurations at 0 T are shown in Fig. 2a, b. It is observed that the PL intensity of the co-polarization is always larger than that of the cross-polarization, corresponding to valley polarization. Next, we apply a magnetic field of −7 T perpendicular to the sample surface (out-of-plane direction, B z ). The results are shown in Fig. 2c, d. As can be seen from these figures, the emission difference between the case with co-polarization and cross-polarization excitation gets larger at B z = −7 T compared to 0 T.
Valley polarization with CW laser excitation. a, b Valley polarization at 0 T. Right and left circularly polarized light are labeled as σ+ and σ−. Under σ+ laser excitation, σ+ PL output component is more than σ− and vice versa for σ− excitation. This shows evidence of valley polarization. c, d Valley polarization at −7 T. Valley polarization is enhanced by applying magnetic field perpendicular to the sample surface. e The valley polarization degree as a function of applied magnetic field in the z direction. The solid line is the fitting result following equation \(P^j = P_0^j \pm P_1^j\left( {1 - \frac{1}{{r^2 + r\sqrt {1 + r^2} + 1}}} \right)\), \(r = \left| B \right|{\mathrm{/}}\alpha\) where j indicates the excitation polarization, \(P_0^j\) is the residual degree of polarization at 0 T due to the valley polarization, \(P_1^j\) is the saturation level of degree of polarization, and α represents the intervalley scattering between the dark exciton. f, The valley polarization degree as a function of applied magnetic field in the y direction with B z = 0T
To quantify this difference, we measured the degree of polarization as a function of magnetic field in both B z (Faraday geometry) and B y directions (parallel to sample surface, Voigt geometry). Here we define the degree of polarization as \(P^j = \frac{{I_{\sigma _ + }^j - I_{\sigma _ - }^j}}{{I_{\sigma _ + }^j + I_{\sigma _ - }^j}}\), where \(I_{\sigma _ + }^j\) \(\left( {I_{\sigma _ - }^j} \right)\) is the σ+ (σ−) polarized PL intensity when excited with j polarization. The degree of polarization pumped by σ+ and σ− excitation versus magnetic field in the Faraday and Voigt geometry are shown in Fig. 2e, f. We can observe a dip of the degree of polarization at low magnetic field around 0 T in Faraday geometry. The valley polarization is around 17% at 0 T and quickly increases to ~35% at ~1 T. For Voigt geometry, the degree of polarization does not show any dependence on the magnetic field.
Regarding the Faraday geometry, our observation of increased valley polarization near 0 T is in line with the report in ref. 39 where it is attributed to the suppression of intervalley electron–hole exchange interaction. Similar with traditional semiconductor quantum wells and quantum dots, the valley depolarization in 2D material is caused by electron–hole exchange interaction40,41,42. The larger binding energy of excitons in monolayer TMDs further enhances such interaction, leading to valley depolarization and short valley polarization lifetime. The intervalley scattering can be understood in term of in-plane depolarizing field40. Hence, by increasing the magnitude of out-of-plane magnetic field, the valley depolarization can be suppressed28,41. This model can also explain why the degree of polarization saturates at high magnetic field. This saturation has also been observed in WSe2 exciton system43. The difference is that, unlike the bright exciton case reported there, the interlayer exciton shows a non-linear magnetic-dependence of valley polarization. Following this, we fit the degree of polarization with equation \(P^j = P_0^j \pm P_1^j\left( {1 - \frac{1}{{r^2 + r\sqrt {1 + r^2} + 1}}} \right)\), \(r = \left| B \right|{\mathrm{/}}\alpha\) where j indicates the excitation polarization, \(P_0^j\) is the residual degree of polarization at 0 T, \(P_1^j\) is the saturation level of degree of polarization, and α represents the intervalley scattering term between the dark excitons where the dark exciton refers to the WSe2 dark exciton as illustrated in Fig. 1b. The experimental data fits very well with this model within the magnetic field range of our experiment.
The exchange interaction-based argument can also be used to explain why the degree of polarization does not show any magnetic field dependence under the Voigt geometry. In order to discuss this, it should be noted that the 2D material can be seen as an atomic-thin quantum well. It is known that the effect of in-plane magnetic field to the exchange interaction is proportional to the quantum well thickness44. Hence, the exchange interaction is practically independent of the magnetic field under the Voigt geometry which result in the magnetic field-independent degree of polarization.
Time-resolved experiment with pulsed laser excitation
To understand the dynamics of the interlayer exciton emission, we carried out time-resolved PL experiment with pulsed laser excitation. The laser has a repetition period of 8 μs and has the same wavelength as the one used in the CW experiment. The left panels of the Fig. 3a, b show the decay of the PL emission pumped by σ+ excitation at 0 T and −3 T, respectively. The middle panels show the calculated degree of polarization. We can see that, the degree of polarization \(P^{\sigma _ + }\) decays quickly to zero at 0 T, while it has an extra slow decay component and remains above 0.2 for up to 2.5 μs at −3 T. To quantify it, here we address two different types of degree of polarization: valley polarization and PL polarization. The former depends on the polarization state of the excitation, i.e. copolarization and cross-polarization give different PL intensity. It can be calculated as \(P_{{\mathrm{val}}} = \frac{{P^{\sigma _ + } - P^{\sigma _ - }}}{2}\). The latter one solely depends on the polarization of the PL emission and it does not depend on the excitation. It can be calculated as the average of the individual degree of polarization pumped by σ+ or σ− excitation: \(P_{{\mathrm{PL}}} = \frac{{P^{\sigma _ + } + P^{\sigma _ - }}}{2}\). Figure 3e, f shows the decay of valley polarization Pval at 0 and −3 T. valley polarization at 0 T has a decay time of 15 ± 0.3 ns, while it has a decay time of 1.745 ± 0.007 μs at −3 T. More detailed PL data, valley polarization and PL polarization at −3, 0 and 3 T are provided in Supplementary Figs. 1 and 2. More analysis can be found in Supplementary Note 1.
Time-resolved investigation of polarization in magnetic field. a, b Time-resolved PL of σ+-output and σ−-output pumped by a σ+-polarized pulsed laser in Faraday geometry at B z = 0 T (a) and B z = −3 T (b). c, d Degree of polarization extracted from σ+ excitation PL at B z = 0 T (c) and B z = −3 T (d). e, f Valley polarization calculated from σ+ excitation and σ−-excitation PL. The σ− excitation PL data is in the Supplementary Figs. 1 and 2. At B z = 0 T, the degree of polarization disappears quickly and hardly seen after 50 ns and valley polarization has a decay time of 15 ± 0.3 ns. At B z = −3 T, the difference between σ+ output and σ− output is clearly seen even at 200 ns. The valley polarization at B z = −3 T has a decay time of 1.745 ± 0.007 μs
Microscopic mechanism of long lifetime and valley polarization lifetime for interlayer excitons
Below we analyze the origin of the long exciton lifetime and valley polarization lifetime. To this end, the experimental result for B = −7, 0, 7 T at low temperature (T = 2.3 K) in the case of σ− polarization excitation and σ− polarized PL detection are shown in Fig. 4a. First we consider the case at high magnetic field. The decay has both the slow and fast decay components. The slow decay in the order of τ1 ~ 1 μs suggests the dark exciton involvements, which will be further confirmed by the magnetic field dependence measurement shown below. The fast decay part includes two parts, one of which is exponential decay and the other one is power-law decay. Hence, we fit the decay with three components as
$$I = A_1{\rm e}^{ - \frac{t}{{\tau _1}}} + A_2 {\rm e}^{ - \frac{t}{{\tau _2}}} + \frac{B}{{t + t_0}},$$
where τ1 and τ2 are related to the lifetimes of the slow and fast decay, respectively. A1, A2, B and t0 are other fitting parameters related to the initial population and rate constants. The value of τ1 is ~1 μs while τ2 has a value of ~10 ns.
Theoretical model and experimental data fitting. a Sample of the experimental data (temperature, T = 2.3 K) and the fitting to the theoretical model. The data is obtained using σ− polarized pulsed excitation and σ− polarized PL detection. Semi-log plot is used. b, c The two conversion pathways from the WSe2 dark exciton to the interlayer exciton. In the first pathway (A-B-D), the spin flip happens before the interlayer charge transfer while in the second one (A-C-D), the interlayer charge transfer happens before the spin flip. The conversion rate in the first path has an exponential magnetic field dependence while it is constant for the second path. d Scattering rate from dark exciton to interlayer exciton versus magnetic field. The dark-to-interlayer scattering rates in K valley (σ− PL) and K′ valley (σ+ PL) are plotted against magnetic field and fitted using exponential function
For the small magnetic field case, instead of Eq. (1), a more complete model has to be used. In this more complete model, each valley has one WSe2 dark exciton state and two type of interlayer exciton states. The first type of the interlayer exciton decays following power law while the second type undergoes exponential decay. The dark exciton can scatter to the interlayer exciton of the second type. Additionally the dark exciton in one valley can scatter to become dark exciton in the other valley. This model is equivalent to Eq. (1) when this dark exciton intervalley scattering rate is negligible. The complete description of this model is given in the Supplementary Note 2 and Supplementary Fig. 3.
As can be seen from Fig. 4a, both the experimental data fits well with the theoretical model. The dark-to-interlayer exciton scattering rate is found to be in the MHz regime. This suggests that the dark exciton lifetime should be in the order of μs which is exceptionally long compared to the lifetime of other exciton types. The maximum value of the dark exciton intervalley scattering rate is found to be ~100 MHz at B z = 0 T. It decreases quickly with increasing magnetic field. Note that this scattering rate is much smaller than the intralayer bright exciton intervalley scattering rate which can be attributed to the fact that the exchange interaction between dark excitons is much smaller compared to that between bright excitons. However, here we need to consider microsecond time scale. For this time scale, the valley depolarization of dark exciton need to be taken into account.
The dark-to-interlayer scattering rate has an exponential dependence on the magnetic field (Fig. 4d). This can be understood by analyzing the scattering mechanism from the dark exciton to the interlayer exciton. The dark exciton can scatter to become an interlayer exciton by following two different paths. These two paths are illustrated in Fig. 4b, c. In the first path (A-B-D path in Fig. 4b, c), the conduction band electron undergoes spin flipping before the charge transfer happens while in the second path (A-C-D path in Fig. 4b, c) the charge transfer happens before the spin flipping. Following the first path, the dark exciton will transform into an intermediate bright exciton before transforming into an interlayer exciton. According to previous measurement, charge transfer from the bright monolayer exciton to the interlayer exciton is very fast ~100 fs7. Therefore, we can safely neglect the charge transfer time. This means that the contribution of the first path to the dark-to-interlayer scattering rate will be approximately the same as the dark-to-bright exciton scattering rate. As can be seen from Fig. 4c this scattering rate at temperature T can be written as \(r_0{\rm e}^{ - ({\mathrm{\Delta }}E_0 + g\mu _{\mathrm{B}})/k_{\mathrm{B}}T}\), where ΔE0 is the energy difference between the dark exciton and bright exciton at 0 T and r0 is the bright-to-dark exciton scattering rate. This shows that this contribution from the first path has an exponential dependence on magnetic field. On the contrary, the second path does not have strong magnetic field dependence because the transition only happens from a higher energy level to a lower energy level. Hence, its contribution to the dark-to-interlayer scattering rate is constant.
Based on our explanation above, the g-factor of the conduction band electron can be obtained from the magnetic dependence of the dark-to-interlayer exciton scattering rate at a fixed temperature. This can be used to check the sanity of our model. For K valley, the value 1.07 ± 0.079 is found while it is equal to −1.11 ± 0.095 for K′ valley. These g-factor values agree well with the theoretical prediction of the conduction band g-factor for WSe2 when the out-of-plane effective spin g-factor has negative sign45. From the fitting in Fig. 4d, it can also be seen that the dark-to-interlayer exciton scattering rate saturates to a finite value of ~1 MHz at a big magnetic field as predicted by the model. Additionally, the value of the energy level difference between the bright and dark exciton at zero magnetic field (ΔE0) can also be calculated from the temperature dependence of k1. The detail of the experimental data and the theoretical fitting of the temperature dependence of k1 is shown in Supplementary Note 3 and Supplementary Fig. 4. The obtained energy level difference is in line with the value reported in ref. 46. All of these results shows the sanity of our model.
The fact that there are two types of interlayer exciton and that the time-resolved PL signal follows a multi-exponential decay has been reported before2,30,31. The component with slow exponential decay of this PL signal has been attributed to the extrinsic defect30 and also to a transition that is indirect in both real space and momentum space31. However, these two possible explanations cannot be used to explain the magnetic dependence of the slow decay rate that is observed in our data. Instead, we attribute this slow decay component to slow conversion rate from the dark exciton to the bright exciton. By doing so, the magnetic field dependence of this decay rate can be explained. Experimental demonstration of interlayer excitons on additional samples can be found in Supplementary Notes 5 and 6, Supplementary Figs. 7, 8 and 9.
In summary, we have experimentally demonstrated the long valley polarization lifetime in the order of microsecond in 2D heterostructures. This is primarily induced by magnetic field suppressed valley mixing for dark excitons. The long lifetime of the dark exciton put the dark exciton as a reservoir for the interlayer exciton in a long time scale. The long lifetime of exciton in 2D heterostructures makes 2D heterostructure to be a promising candidate for the realization of ultralong-distance exciton transport and exciton devices13,14. The possibilty to realize superfluidity9 in 2D heterostructures with a long exciton lifetime may provide future platform towards low-energy dissipation valleytronic devices.
Spectroscopy experiment setup
A homemade fiber-based confocal microscope is used for performing the polarization-resolved PL spectroscopy. Polarizers and quarter wave plates are installed on the excitation and detection arms of the confocal microscope for polarization-selective excitation and PL detection. The PL emission is directed by an multi-mode optical fiber into a spectrometer (Andor Shamrock) with a CCD detector for spectroscopic recording. The sample is loaded into a magneto cryostat and cooled down to ~2.3 K. Cryostat with vector magnet provides possibility to study dynamics in different magnetic field directions. The vector magnetic field ranges from −7 to +7 T in the out-of-plane direction (z-axis) and −1 to +1 T in the in-plane direction (x-axis and y-axis). The wavelength of the excitation is 726 nm (1.708 eV) for both the CW and pulsed laser experiment (pulse width 100 ps).
Preparation of the heterostructures
We fabricated MoSe2/WSe2 heterostructures via a mechanical exfoliation and aligned-transfer method7. Bulk WSe2 and MoSe2 crystals (from HQ graphene) were used to produce WSe2 and MoSe2 monolayer flakes and they were precisely stacked with a solvent-free aligned-transfer process. We first prepared a WSe2 monolayer on SiO2 (300 nm)/Si substrate and a MoSe2 monolayer on a transparent polydimethylsiloxane (PDMS) substrate. After careful alignment (for both relative position and stacking angle) under the optical microscope with the aid of an XYZ manipulation stage, we then stacked the two monolayer flakes together, forming a PDMS/MoSe2/WSe2–SiO2/Si structure. Finally, we removed the top PDMS layer and obtained a MoSe2/WSe2 heterostructure on SiO2/Si substrate. For the controlled alignment of stacking angle, the armchair axes were guided according to their sharp edges from optical images, e.g., a stacking angle of 0° (60°) (<±2°) can be identified from Fig. 1a.
The data that support the findings of this study are available from the corresponding author upon request.
Geim, A. K. & Grigorieva, I. V. Van der Waals heterostructures. Nature 499, 419–425 (2013).
Rivera, P. et al. Observation of long-lived interlayer excitons in monolayer MoSe2-WSe2 heterostructures. Nat. Commun. 6, 6242 (2014).
Gong, Y. et al. Vertical and in-plane heterostructures from WS2/MoS2 monolayers. Nat. Mater. 13, 1135–1142 (2014).
Ceballos, F. et al. Ultrafast charge separation and indirect exciton formation in a MoS2-MoSe2 van der Waals heterostructure. ACS Nano 8, 12717–12724 (2014).
Fang, H. et al. Strong interlayer coupling in van der Waals heterostructures built from single-layer chalcogenides. Proc. Natl. Acad. Sci. 111, 6198–6202 (2014).
Lin, Y.-C. et al. Atomically thin resonant tunnel diodes built from synthetic van der Waals heterostructures. Nat. Commun. 6, 7311 (2015).
Xu, W. et al. Correlated fluorescence blinking in two-dimensional semiconductor heterostructures. Nature 541, 62–67 (2017).
Rivera, P. et al. Valley-polarized exciton dynamics in a 2D semiconductor heterostructure. Science 351, 688–691 (2016).
Fogler, M. et al. High-temperature superfluidity with indirect excitons in van der Waals heterostructures. Nat. Commun. 5, 4555 (2014).
Butov, L. et al. Macroscopically ordered state in an exciton system. Nature 418, 751 (2002).
Butov, L. et al. Towards bose-einstein condensation of excitons in potential traps. Nature 417, 47 (2002).
High, A. et al. Spontaneous coherence in a cold exciton gas. Nature 483, 584 (2012).
High, A. et al. Control of exciton fluxes in an excitonic integrated circuit. Science 321, 229–231 (2008).
Grosso, G. et al. Excitonic switches operating at around 100 K. Nat. Photonics 3, 577–580 (2009).
Xiao, D. et al. Coupled spin and valley physics in monolayers of MoS2 and other group-VI dichalcogenides. Phys. Rev. Lett. 108, 196802 (2012).
Xu, X. et al. Spin and pseudospins in layered transition metal dichalcogenides. Nat. Phys. 10, 343–350 (2014).
Mak, K. F. et al. Control of valley polarization in monolayer MoS2 by optical helicity. Nat. Nanotech. 7, 494–498 (2012).
Zeng, H. et al. Control of valley polarization in monolayer MoS2 by optical helicity. Nat. Nanotech. 7, 490–493 (2012).
Cao, T. et al. Valley-selective circular dichroism of monolayer molybdenum disulphide. Nat. Commun. 3, 887 (2012).
Sallen, G. et al. Robust optical emission polarization in MoS2 monolayers through selective valley excitation. Phys. Rev. B 86, 081301–081304 (2012).
Jones, A. M. et al. Optical generation of excitonic valley coherence in monolayer WSe2. Nat. Nanotech. 8, 634–638 (2013).
Li, X. et al. Unconventional quantum hall effect and tunable spin all effect in dirac materials: Application to an isolated MoS2 trilayer. Phys. Rev. Lett. 110, 066803 (2013).
Mak, K. F. et al. The valley Hall effect in MoS2 transistors. Science 344, 1489–1492 (2014).
Gong, Z. et al. Magnetoelectric effects and valley-controlled spin quantum gates in transition metal dichalcogenide bilayers. Nat. Commun. 4, 2053 (2013).
Lagarde, D. et al. Carrier and polarization dynamics in monolayer MoS2. Phys. Rev. Lett. 112, 047401 (2014).
Mai, C. et al. Many-body effects in valleytronics: direct measurement of valley lifetimes in single-layer MoS2. Nano. Lett. 14, 202–206 (2014).
Wang, G. et al. Valley dynamics probed through charged and neutral exciton emission in monolayer WSe2. Phys. Rev. B 90, 075413 (2014).
Smoleński, T. et al. Tuning valley polarization in a WSe2 monolayer with a tiny magnetic field. Phys. Rev. X 6, 021024 (2016).
Srivastava, A. et al. Optically active quantum dots in monolayer WSe2. Nat. Nano. 10, 491–496 (2015).
Nagler, P. et al. Interlayer exciton dynamics in a dichalcogenide monolayer heterostructure. 2D Materi. 4, 025112 (2017).
Miller, B. et al. Long-lived direct and indirect interlayer excitons in van der waals heterostructures. Nano. Lett. 17, 5229–5237 (2017).
Kim, J. et al. Observation of ultralong valley lifetime in WSe2/MoS2 heterostructures. Sci. Adv. 3, e1700518 (2017).
Zhang, X.-X. et al. Experimental evidence for dark excitons in monolayer WSe2. Phys. Rev. Lett. 115, 257403 (2015).
Zhang, X.-X. et al. Magnetic brightening and control of dark excitons in monolayer WSe2. Nat. Nanotech. 12, 883–888 (2017).
Yu, H. et al. Anomalous light cones and valley optical selection rules of interlayer excitons in twisted heterobilayers. Phys. Rev. Lett. 115, 187002 (2015).
Wu, F. et al. Topological exciton bands in Moiré heterojunctions. Phys. Rev. Lett. 118, 147401 (2017).
Wu, F. et al. Theory of optical absorption by interlayer excitons in transition metal dichalcogenide heterobilayers. Phy. Rev. B 97, 035306 (2018).
Yu, H. et al. Moiré excitons: From programmable quantum emitter arrays to spin-orbit-coupled artificial lattices. Sci. Adv. 3, e1701696 (2017).
Srivastava, A. et al. Valley zeeman effect in elementary optical excitations of monolayer WSe2. Nat. Phys. 11, 141–147 (2015).
Maialle, M. Z. et al. Exciton spin dynamics in quantum wells. Phys. Rev. B 47, 15776–15788 (1993).
Bayer, M. et al. Fine structure of neutral and charged excitons in self-assembled In (Ga) As/(Al) GaAs quantum dots. Phys. Rev. B 65, 195315 (2002).
Yu, T. & Wu, M. Valley depolarization due to intervalley and intravalley electron-hole exchange interactions in monolayer MoS 2. Phys. Rev. B 89, 205303 (2014).
Aivazian, G. et al. Magnetic control of valley pseudospin in monolayer. Nat. Phys. 11, 148–152 (2015).
Maialle, M. Z. & Degani, M. H. Transverse magnetic field effects upon the exciton exchange interaction in quantum wells. Semicond. Sci. Technol. 16, 982–085 (2001).
Kormányos, A. et al. Spin-orbit coupling, quantum dots, and qubits in monolayer transition metal dichalcogenides. Phys. Rev. X 4, 011034 (2014).
Molas, M. R. et al. Brightening of dark excitons in monolayers of semiconducting transition metal dichalcogenides. 2D Mater. 4, 021003 (2017).
The first two authors contribute equally to this work. We acknowledges the support from the Singapore National Research Foundation through Singapore NRF fellowship grants (NRF-NRFF2015-03), Competitive Research Programme (CRP Award No. NRF-CRP14-2014-02), Astar QTE and Singapore Ministry of Education (No. MOE2016-T2-2-077, No. MOE2017-T2-1-163, and No. RG176/15) and a start-up grant (No. M4081441) from Nanyang Technological University. Q. H. Xiong acknowledges the support for this work from the Singapore National Research Foundation through an Investigatorship Award (NRF-NRFI2015-03), and Singapore Ministry of Education via two AcRF Tier 2 grants (MOE2013-T2-1-049 and MOE2015-T2-1-047).
Division of Physics and Applied Physics, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore, 637371, Singapore
Chongyun Jiang
, Weigao Xu
, Abdullah Rasmita
, Zumeng Huang
, Ke Li
, Qihua Xiong
& Wei-bo Gao
NOVITAS, Nanoelectronics Center of Excellence, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, 639798, Singapore
Qihua Xiong
MajuLab, CNRS-Université de Nice-NUS-NTU International Joint Research Unit UMI 3654, Singapore, 637371, Singapore
The Photonics Institute and Centre for Disruptive Photonic Technologies, Nanyang Technological University, 637371, Singapore, Singapore
Wei-bo Gao
Search for Chongyun Jiang in:
Search for Weigao Xu in:
Search for Abdullah Rasmita in:
Search for Zumeng Huang in:
Search for Ke Li in:
Search for Qihua Xiong in:
Search for Wei-bo Gao in:
W.-b.G. and Q.X. supervised the work. C.J. and W.-b.G. conceived the idea and designed the experiments. W.X. prepared the heterostructure samples and carried out the second harmonic generation investigation. C.J. and Z.H. carried out the steady state and time-resolved magneto-PL study of the heterostructures. C.J. and A.R. did the data analysis. A.R. contributed to the theoretical interpretation of the results. K.L. supported the PL experiment. C.J., A.R., W.X., W.-b.G., Q.X. and Z.H. co-wrote the paper. All the authors reviewed and modified the manuscript.
Corresponding authors
Correspondence to Qihua Xiong or Wei-bo Gao.
Ethics declarations
The authors declare no competing financial interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
https://doi.org/10.1038/s41467-018-03174-3
A parallel magnetic field driven confinement versus separation of charges in GaAs quantum well investigated by magneto-photovoltage and magneto-photoluminescence spectroscopy
S. Haldar
, A. Banerjee
, Geetanjali Vashisht
, S. Porwal
, T.K. Sharma
& V.K. Dixit
Journal of Luminescence (2019)
Optical initialization of a single spin-valley in charged WSe2 quantum dots
Xin Lu
, Xiaotong Chen
, Sudipta Dubey
, Qiang Yao
, Weijie Li
, Xingzhi Wang
& Ajit Srivastava
Nature Nanotechnology (2019)
Probing Exciton Complexes and Charge Distribution in Inkslab-Like WSe2 Homojunction
Taishen Li
, Mingling Li
, Yue Lin
, Hongbing Cai
, Yiming Wu
, Huaiyi Ding
, Siwen Zhao
, Nan Pan
& Xiaoping Wang
ACS Nano (2018)
Microsecond Valley Lifetime of Defect-Bound Excitons in Monolayer WSe2
Galan Moody
, Kha Tran
, Xiaobo Lu
, Travis Autry
, James M. Fraser
, Richard P. Mirin
, Li Yang
, Xiaoqin Li
& Kevin L. Silverman
Physical Review Letters (2018)
By submitting a comment you agree to abide by our Terms and Community Guidelines. If you find something abusive or that does not comply with our terms or guidelines please flag it as inappropriate.
Nature Communications menu
Editors' Highlights
Top 50 Read Articles of 2018
|
CommonCrawl
|
Emeritus ProfessorRichardVinter
Faculty of Engineering, Department of Electrical and Electronic Engineering
Emeritus Professor in Electrical and Electronic Engineering
+44 (0)20 7594 6287r.vinter
Electrical EngineeringSouth Kensington Campus
Filter from 201920182017201620152014201320122011201020092008200720062005200420032002200120001999199819971996199519941993199219911990198919881987198619851984198319821981198019791978197719761975197419731972 to Filter to 201920182017201620152014201320122011201020092008200720062005200420032002200120001999199819971996199519941993199219911990198919881987198619851984198319821981198019791978197719761975197419731972
Bettiol P, Quincampoix M, Vinter RB, 2019, Existence and Characterization of the Values of Two Player Differential Games with State Constraints, APPLIED MATHEMATICS AND OPTIMIZATION, Vol: 80, Pages: 765-799, ISSN: 0095-4616
Citations: 1
Vinter RB, 2019, Free end-time optimal control problems: Conditions for the absence of an infimum gap, Vietnam Journal of Mathematics, Vol: 47, Pages: 757-768, ISSN: 0866-7179
This paper concerns free end-time optimal control problems, in which the dynamic constraint takes the form of a controlled differential inclusion. Such problems may fail to have a minimizer. Relaxation is a procedure for enlarging the domain of an optimization problem to guarantee existence of a minimizer. In the context of problems studied here, the standard relaxation procedure involves replacing the velocity sets in the original problem by their convex hulls. It is desirable that the original and relaxed versions of the problem have the same infimum cost. For then we can obtain a sub-optimal state trajectory, by obtaining a solution to the relaxed problem and approximating it. It is important, therefore, to investigate when the infimum costs of the two problems are the same; for otherwise the above strategy for generating sub-optimal state trajectories breaks down. We explore the relation between the existence of an infimum gap and abnormality of necessary conditions for the free-time problem. Such relations can translate into verifiable hypotheses excluding the existence of an infimum gap. Links between existence of an infimum gap and normality have previously been explored for fixed end-time problems. This paper establishes, for the first time, such links for free end-time problems.
Vinter RB, 2019, OPTIMAL CONTROL PROBLEMS WITH TIME DELAYS: CONSTANCY OF THE HAMILTONIAN, SIAM JOURNAL ON CONTROL AND OPTIMIZATION, Vol: 57, Pages: 2574-2602, ISSN: 0363-0129
Vinter RB, 2018, State constrained optimal control problems with time delays, JOURNAL OF MATHEMATICAL ANALYSIS AND APPLICATIONS, Vol: 457, Pages: 1696-1712, ISSN: 0022-247X
Bettiol P, Vinter RB, 2017, THE HAMILTON JACOBI EQUATION FOR OPTIMAL CONTROL PROBLEMS WITH DISCONTINUOUS TIME DEPENDENCE, SIAM JOURNAL ON CONTROL AND OPTIMIZATION, Vol: 55, Pages: 1199-1225, ISSN: 0363-0129
Boccia A, Vinter RB, 2017, THE MAXIMUM PRINCIPLE FOR OPTIMAL CONTROL PROBLEMS WITH TIME DELAYS, SIAM JOURNAL ON CONTROL AND OPTIMIZATION, Vol: 55, Pages: 2905-2935, ISSN: 0363-0129
Vinter RB, Boccia A, Pinho M, 2016, Optimal Control Problems with Mixed and Pure State Constraints, SIAM Journal on Control and Optimization, Vol: 54, Pages: 3061-3083, ISSN: 0363-0129
This paper provides necessary conditions of optimality for optimal control problems, in whichthe pathwise constraints comprise both 'pure' constraints on the state variable and also 'mixed'constraints on control and state variables. The proofs are along the lines of earlier analysis formixed constraint problems, according to which Clarke's theory of 'stratified' necessary conditions isapplied to a modified optimal control problem resulting from absorbing the mixed constraint into thedynamics; the difference here is that necessary conditions which now take account of the presenceof pure state constraints are applied to the modified problem. Necessary conditions are given for arather general formulation of the problem containing both forms of the constraints, and then theseare specialized to apply to problems having special structure. While combined pure state and mixedcontrol/state problems have been previously treated in the literature, the necessary conditions in thispaper are proved under less restrictive hypotheses and for novel formulations of the constraints.
Bettiol P, Vinter RB, 2016, L∞ estimates on trajectories confined to a closed subset, for control systems with bounded time variation, Mathematical Programming, Vol: 168, Pages: 201-228, ISSN: 1436-4646
The term 'distance estimate' for state constrained control systems refers to an estimate on the distance of an arbitrary state trajectory from the subset of state trajectories that satisfy a given state constraint. Distance estimates have found widespread application in state constrained optimal control. They have been used to establish regularity properties of the value function, to establish the non-degeneracy of first order conditions of optimality, and to validate the characterization of the value function as a unique solution of the HJB equation. The most extensively applied estimates of this nature are so-called linear L∞L∞ distance estimates. The earliest estimates of this nature were derived under hypotheses that required the multifunctions, or controlled differential equations, describing the dynamic constraint, to be locally Lipschitz continuous w.r.t. the time variable. Recently, it has been shown that the Lipschitz continuity hypothesis can be weakened to a one-sided absolute continuity hypothesis. This paper provides new, less restrictive, hypotheses on the time-dependence of the dynamic constraint, under which linear L∞L∞ estimates are valid. Here, one-sided absolute continuity is replaced by the requirement of one-sided bounded variation. This refinement of hypotheses is significant because it makes possible the application of analytical techniques based on distance estimates to important, new classes of discontinuous systems including some hybrid control systems. A number of examples are investigated showing that, for control systems that do not have bounded variation w.r.t. time, the desired estimates are not in general valid, and thereby illustrating the important role of the bounded variation hypothesis in distance estimate analysis.
Festa A, Vinter RB, 2016, Decomposition of Differential Games with Multiple Targets, Journal of Optimization Theory and Applications, Vol: 169, Pages: 848-875, ISSN: 1573-2878
This paper provides a decomposition technique for the purpose of simplifying the solution of certain zero-sum differential games. The games considered terminate when the state reaches a target, which can be expressed as the union of a collection of target subsets considered as 'multiple targets'; the decomposition consists in replacing the original target by each of the target subsets. The value of the original game is then obtained as the lower envelope of the values of the collection of games, resulting from the decomposition, which can be much easier to solve than the original game. Criteria are given for the validity of the decomposition. The paper includes examples, illustrating the application of the technique to pursuit/evasion games and to flow control.
Bettiol P, Khalil N, Vinter RB, 2016, Normality of Generalized Euler-Lagrange Conditions for State Constrained Optimal Control Problems, Journal of Convex Analysis, Vol: 23, Pages: 291-311, ISSN: 0944-6532
We consider state constrained optimal control problems in which the cost to minimize comprises both integral and end-point terms, establishing normality of the generalized Euler-Lagrange condition. Simple examples illustrate that the validity of the Euler-Lagrange condition (and related necessary conditions), in normal form, depends crucially on the interplay between velocity sets, the left end-point constraint set and the state constraint set. We show that this is actually a common feature for general state constrained optimal control problems, in which the state constraint is represented by closed convex sets and the left end-point constraint is a closed set. In these circumstances classical constraint qualifications involving the state constraints and the velocity sets cannot be used alone to guarantee normality of the necessary conditions. A key feature of this paper is to prove that the additional information involving tangent vectors to the left end-point and the state constraint sets can be used to establish normality.
Boccia A, Vinter RB, 2016, The Maximum Principle for Optimal Control Problems with Time Delays, 10th IFAC Symposium on Nonlinear Control Systems (NOLCOS), Publisher: ELSEVIER, Pages: 951-955, ISSN: 2405-8963
Vinter RB, 2015, Multifunctions of bounded variation, Journal of Differential Equations, Vol: 260, Pages: 3350-3379, ISSN: 1090-2732
Consider control systems described by a differential equation with a control term or, more generally, by a differential inclusion with velocity set F(t,x). Certain properties of state trajectories can be derived when it is assumed that F(t,x) is merely measurable w.r.t. the time variable t . But sometimes a refined analysis requires the imposition of stronger hypotheses regarding the time dependence. Stronger forms of necessary conditions for minimizing state trajectories can be derived, for example, when F(t,x) is Lipschitz continuous w.r.t. time. It has recently become apparent that significant addition properties of state trajectories can still be derived, when the Lipschitz continuity hypothesis is replaced by the weaker requirement that F(t,x) has bounded variation w.r.t. time. This paper introduces a new concept of multifunctions F(t,x) that have bounded variation w.r.t. time near a given state trajectory, of special relevance to control. We provide an application to sensitivity analysis.
Palladino M, Vinter RB, 2015, Regularity of the Hamiltonian Along Optimal Trajectories, SIAM Journal on Control and Optimization, Vol: 53, Pages: 1892-1919, ISSN: 1095-7138
This paper concerns state constrained optimal control problems, in which the dynamic constraint takes the form of a differential inclusion. If the differential inclusion does not depend on time, then the Hamiltonian, evaluated along the optimal state trajectory and the co-state trajectory, is independent of time. If the differential inclusion is Lipschitz continuous, then the Hamiltonian, evaluated along the optimal state trajectory and the co-state trajectory, is Lipschitz continuous. These two well-known results are examples of the following principle: the Hamiltonian, evaluated along the optimal state trajectory and the co-state trajectory, inherits the regularity properties of the differential inclusion, regarding its time dependence. We show that this principle also applies to another kind of regularity: if the differential inclusion has bounded variation w.r.t. time, then the Hamiltonian, evaluated along the optimal state trajectory and the co-state trajectory, has bounded variation. Two applications of these newly found properties are demonstrated. One is to derive improved conditions which guarantee the nondegeneracy of necessary conditions of optimality in the form of a Hamiltonian inclusion. The other application is to derive new conditions under which minimizers in the calculus of variations have bounded slope. The analysis is based on a recently proposed, local concept of differential inclusions that have bounded variation w.r.t. the time variable, in which conditions are imposed on the multifunction involved, only in a neighborhood of a given state trajectory.
Palladino M, Vinter RB, 2015, When are minimizing controls also minimizing relaxed controls?, Discrete and Continuous Dynamical Systems, Vol: 35, Pages: 4573-4592, ISSN: 1553-5231
Relaxation refers to the procedure of enlarging the domain of a variational problem or the search space for the solution of a set of equations, to guarantee the existence of solutions. In optimal control theory relaxation involves replacing the set of permissible velocities in the dynamic constraint by its convex hull. Usually the infimum cost is the same for the original optimal control problem and its relaxation. But it is possible that the relaxed infimum cost is strictly less than the infimum cost. It is important to identify such situations, because then we can no longer study the infimum cost by solving the relaxed problem and evaluating the cost of the relaxed minimizer. Following on from earlier work by Warga, we explore the relation between the existence of an infimum gap and abnormality of necessary conditions (i.e. they are valid with the cost multiplier set to zero). Two kinds of theorems are proved. One asserts that a local minimizer, which is not also a relaxed minimizer, satisfies an abnormal form of the Pontryagin Maximum Principle. The other asserts that a local relaxed minimizer that is not also a minimizer satisfies an abnormal form of the relaxed Pontryagin Maximum Principle.
Palladino M, Vinter RB, 2014, Minimizers That Are Not Also Relaxed Minimizers, SIAM Journal on Control and Optimization, Vol: 52, Pages: 2164-2179, ISSN: 1095-7138
Relaxation is a widely used regularization procedure in optimal control, involving the replacement of velocity sets by their convex hulls, to ensure the existence of a minimizer. It can be an important step in the construction of suboptimal controls for the original, unrelaxed, optimal control problem (which may not have a minimizer), based on obtaining a minimizer for the relaxed problem and approximating it. In some cases the infimum cost of the unrelaxed problem is strictly greater than the infimum cost over relaxed state trajectories; we need to identify such situations because then the above procedure fails. The noncoincidence of these two infima leads also to a breakdown of the dynamic programming method because, typically, solving the Hamilton--Jacobi equation yields the minimum cost of the relaxed, not the original, optimal control problem. Following on from earlier work by Warga, we explore the relation between, on the one hand, noncoincidence of the minimum cost of the optimal control and its relaxation and, on the other, abnormality of necessary conditions (in the sense that they take a degenerate form in which the cost multiplier is set to zero). Two kinds of theorems are proved, depending on whether we focus attention on minimizers of the unrelaxed or the relaxed formulation of the optimal control problem. One kind asserts that a local minimizer which is not also a relaxed local minimizer satisfies an abnormal form of the Hamiltonian inclusion. The other asserts that a relaxed local minimizer that is not also a local minimizer also satisfies an abnormal form of Hamiltonian inclusion.
Bettiol P, Frankowska H, Vinter RB, 2014, Improved Sensitivity Relations in State Constrained Optimal Control, Applied Mathematics and Optimization, Vol: 71, Pages: 353-377, ISSN: 1432-0606
Sensitivity relations in optimal control provide an interpretation of the costate trajectory and the Hamiltonian, evaluated along an optimal trajectory, in terms of gradients of the value function. While sensitivity relations are a straightforward consequence of standard transversality conditions for state constraint free optimal control problems formulated in terms of control-dependent differential equations with smooth data, their verification for problems with either pathwise state constraints, nonsmooth data, or for problems where the dynamic constraint takes the form of a differential inclusion, requires careful analysis. In this paper we establish validity of both 'full' and 'partial' sensitivity relations for an adjoint state of the maximum principle, for optimal control problems with pathwise state constraints, where the underlying control system is described by a differential inclusion. The partial sensitivity relation interprets the costate in terms of partial Clarke subgradients of the value function with respect to the state variable, while the full sensitivity relation interprets the couple, comprising the costate and Hamiltonian, as the Clarke subgradient of the value function with respect to both time and state variables. These relations are distinct because, for nonsmooth data, the partial Clarke subdifferential does not coincide with the projection of the (full) Clarke subdifferential on the relevant coordinate space. We show for the first time (even for problems without state constraints) that a costate trajectory can be chosen to satisfy the partial and full sensitivity relations simultaneously. The partial sensitivity relation in this paper is new for state constraint problems, while the full sensitivity relation improves on earlier results in the literature (for optimal control problems formulated in terms of Lipschitz continuous multifunctions), because a less restrictive inward pointing hypothesis is invoked in the proo
Gavriel C, Vinter RB, 2014, Second Order Sufficient Conditions for Optimal Control Problems with Non-unique Minimizers: An Abstract Framework, Applied Mathematics and Optimization, Vol: 70, Pages: 411-442, ISSN: 1432-0606
Standard second order sufficient conditions in optimal control theory provide not only the information that an extremum is a weak local minimizer, but also tell us that the extremum is locally unique. It follows that such conditions will never cover problems in which the extremum is continuously embedded in a family of constant cost extrema. Such problems arise in periodic control, when the cost is invariant under time translations, in shape optimization, where the cost is invariant under Euclidean transformations (translations and rotations of the extremal shape), and other areas where the domain of the optimization problem does not really comprise elements in a linear space, but rather an equivalence class of such elements. We supply a set of sufficient conditions for minimizers that are not locally unique, tailored to problems of this nature. The sufficient conditions are in the spirit of earlier conditions for 'non-isolated' minima, in the context of general infinite dimensional nonlinear programming problems provided by Bonnans, Ioffe and Shapiro, and require coercivity of the second variation in directions orthogonal to the constant cost set. The emphasis in this paper is on the derivation of directly verifiable sufficient conditions for a narrower class of infinite dimensional optimization problems of special interest. The role of the conditions in providing easy-to-use tests of local optimality of a non-isolated minimum, obtained by numerical methods, is illustrated by an example in optimal control.
Vinter RB, 2014, The Hamiltonian Inclusion for Nonconvex Velocity Sets, SIAM Journal on Control and Optimization, Vol: 52, Pages: 1237-1250, ISSN: 1095-7138
Since Clarke's 1973 proof of the Hamiltonian inclusion for optimal control problems with convex velocity sets, there has been speculation (and, more recently, speculation relating to a stronger, partially convexified version of the Hamiltonian inclusion) as to whether these necessary conditions are valid in the absence of the convexity hypothesis. The issue was in part resolved by Clarke himself when, in 2005, he showed that $L^{\infty}$ local minimizers satisfy the Hamiltonian inclusion. In this paper it is shown, by counterexample, that the Hamiltonian inclusion (and so also the stronger partially convexified Hamiltonian inclusion) are not in general valid for nonconvex velocity sets when the local minimizer in question is merely a $W^{1,1}$ local minimizer, not an $L^{\infty}$ local minimizer. The counterexample demonstrates that the need to consider $L^{\infty}$ local minimizers, not $W^{1,1}$ local minimizers, in the proof of the Hamiltonian inclusion for nonconvex velocity sets is fundamental, not just a technical restriction imposed by currently available proof techniques. The paper also establishes the validity of the partially convexified Hamiltonian inclusion for $W^{1,1}$ local minimizers under a normality assumption, thereby correcting earlier assertions in the literature.
Bettiol P, Boccia A, Vinter RB, 2013, Stratified Necessary Conditions for Differential Inclusions with State Constraints, SIAM Journal on Control and Optimization, Vol: 51, Pages: 3903-3917, ISSN: 1095-7138
The concept of stratified necessary conditions for optimal control problems, whose dynamic constraint is formulated as a differential inclusion, was introduced by F. H. Clarke. These are conditions satisfied by a feasible state trajectory that achieves the minimum value of the cost over state trajectories whose velocities lie in a time-varying open ball of specified radius about the velocity of the state trajectory of interest. Considering different radius functions stratifies the interpretation of "minimizer." In this paper we prove stratified necessary conditions for optimal control problems involving pathwise state constraints. As was shown by Clarke in the state constraint-free case, we find that, also in our more general setting, the stratified necessary conditions yield generalizations of earlier optimality conditions for unbounded differential inclusions as simple corollaries. Some examples are provided, giving insights into the nature of the hypotheses invoked for the derivation of stratified necessary conditions and into the scope for their further refinement.
Bettiol P, Vinter RB, 2013, Estimates on trajectories in a closed set with corners for (t,x) dependent data, Mathematical Control and Related Fields, Vol: 3, Pages: 245-267, ISSN: 2156-8472
Estimates on the distance of a given process from the set of processes that satisfy a specified state constraint in terms of the state constraint violation are important analytical tools in state constrained optimal control theory; they have been employed to ensure the validity of the Maximum Principle in normal form, to establish regularity properties of the value function, to justify interpreting the value function as a unique solution of the Hamilton-Jacobi equation, and for other purposes. A range of estimates are required, which differ according the metrics used to measure the `distance' and the modulus θ(h) of state constraint violation h in terms of which the estimates are expressed. Recent research has shown that simple linear estimates are valid when the state constraint set A has smooth boundary, but do not generalize to a setting in which the boundary of A has corners. Indeed, for a velocity set F which does not depend on (t,x) and for state constraints taking the form of the intersection of two closed spaces (the simplest case of a boundary with corners), the best distance estimates we can hope for, involving the W1,1, metric on state trajectories, is a super-linear estimate expressed in terms of the h|log(h)| modulus. But, distance estimates involving the h|log(h)| modulus are not in general valid when the velocity set F(.,x) is required merely to be continuous, while not even distance estimates involving the weaker, Hölder modulus hα (with α arbitrarily small) are in general valid, when F(.,x) is allowed to be discontinuous. This paper concerns the validity of distance estimates when the velocity set F(t,x) is (t,x)-dependent and satisfy standard hypotheses on the velocity set (linear growth, Lipschitz x-dependence and an inward pointing condition). Hypotheses are identified for the validity of distance estimates, involving both the h|log(h)| and linear moduli, within the framework of control systems described by a controlled dif
Boccia A, Falugi P, Maurer H, Vinter RBet al., 2013, Free time optimal control problems with time delays, 52nd IEEE Annual Conference on Decision and Control (CDC), Publisher: IEEE, Pages: 520-525, ISSN: 0743-1546
Festa A, Vinter RB, 2013, A decomposition technique for pursuit evasion games with many pursuers, 52nd IEEE Annual Conference on Decision and Control (CDC), Publisher: IEEE, Pages: 5797-5802, ISSN: 0743-1546
Palladino M, Vinter RB, 2013, When Does Relaxation Reduce the Minimum Cost of an Optimal Control Problem?, 52nd IEEE Annual Conference on Decision and Control (CDC), Publisher: IEEE, Pages: 526-531, ISSN: 0743-1546
Falugi P, Kountouriotis P-A, Vinter RB, 2012, Differential Games Controllers That Confine a System to a Safe Region in the State Space, With Applications to Surge Tank Control, IEEE Transactions on Automatic Control, Vol: 57, Pages: 2778-2788, ISSN: 1558-2523
Surge tanks are units employed in chemical processing to regulate the flow of fluids between reactors. A notable feature of surge tank control is the need to constrain the magnitude of the Maximum Rate of Change (MROC) of the surge tank outflow, since excessive fluctuations in the rate of change of outflow can adversely affect down-stream processing (through disturbance of sediments, initiation of turbulence, etc.). Proportional + Integral controllers, traditionally employed in surge tank control, do not take direct account of the MROC. It is therefore of interest to explore alternative approaches. We show that the surge tank controller design problem naturally fits a differential games framework, proposed by Dupuis and McEneaney, for controlling a system to confine the state to a safe region of the state space. We show furthermore that the differential game arising in this way can be solved by decomposing it into a collection of (one player) optimal control problems. We discuss the implications of this decomposition technique, for the solution of other controller design problems possessing some features of the surge tank controller design problem.
Clark JMC, Vinter RB, 2012, Stochastic exit time problems arising in process control, Stochastics-An International Journal of Probability and Stochastic Processes, Vol: 84, Pages: 667-681, ISSN: 1744-2516
This paper concerns the problem of controlling a stochastic system, with small noise parameter, to prevent it leaving a safe region of the state space. Such problems arise in flow control and other areas. We consider a formulation of the problem, in which a control is sought, to maximize a cost which is related to the expected exit time, but modified to reduce the probability of an early exit, according to a specified level of risk aversion ('risk sensitive' stochastic control). Formally letting the noise parameter tend to zero, we find that the optimal control strategy for this problem coincides with the optimal feedback control strategy for a differential game. We identify a class of differential games arising in this way, the so called decomposable differential games, for which the optimal control strategy can be easily obtained and illustrate the proposed solution technique by applying it to a flow control problem arising in process systems engineering.
Bettiol P, Frankowska H, Vinter RB, 2011, L∞ estimates on trajectories confined to a closed subset, Journal of Differential Equations, Vol: 252, Pages: 1912-1933, ISSN: 1090-2732
This paper concerns the validity of estimates on the distance of an arbitrary state trajectory from the set of state trajectories which lie in a given state constraint set. These so called distance estimates have wide-spread application in state constrained optimal control, including justifying the use of the Maximum Principle in normal form and establishing regularity properties of value functions. We focus on linear, L∞ distance estimates which, of all the available estimates have, so far, been the most widely used. Such estimates are known to be valid for general, closed state constraint sets, provided the functions defining the dynamic constraint are Lipschitz continuous, with respect to the time and state variables. We ask whether linear, L∞ distance estimates remain valid when the Lipschitz continuity hypothesis governing t-dependence of the data is relaxed. We show by counter-example that these distance estimates are not valid in general if the hypothesis of Lipschitz continuity is replaced by continuity. We also provide a new hypothesis, 'absolute continuity from the left', for the validity of linear, L∞ estimates. The new hypothesis is less restrictive than Lipschitz continuity and even allows discontinuous time dependence in certain cases. It is satisfied, in particular, by differential inclusions exhibiting non-Lipschitz t-dependence at isolated points, governed, for example, by a fractional-power modulus of continuity. The relevance of distance estimates for state constrained differential inclusions permitting fractional-power time dependence is illustrated by an example in engineering design, where we encounter an isolated, square-root type singularity, concerning the t-dependence of the data.
Vinter RB, Bettiol P, 2011, Trajectories satisfying a state constraint: improved estimates and new non-degeneracy conditions, IEEE Transactions on Automatic Control, Vol: 56, Pages: 1090-1096
For a state-constrained control system described by a differential inclusion and a single functional inequality state constraint, it is known that, under an `inward pointing condition', the $W^{1,1}$ distance of an arbitrary state trajectory to the set of state trajectories, which have the same left endpoint and which satisfy the state constraint, is linearly related to the state constraint violation. In this paper we show that, in situations where the state-constrained control system is described instead by a controlled differential equation, this estimate can be improved by replacing the $W^{1,1}$ distance on state trajectories by the Ekeland metric of the distance of the control functions. A counter-example reveals that a refinement of this nature is not in general valid for state constrained differential inclusions. Finally we show how the refined estimates may be used to establish new conditions for non-degeneracy of the state constrained Maximum Principle, in circumstances when the data depends discontinuously on the control variable.
Clark JMC, Kountouriotis PA, Vinter RB, 2011, A Gaussian Mixture Filter for Range-Only Tracking, IEEE TRANSACTIONS ON AUTOMATIC CONTROL, Vol: 56, Pages: 602-613, ISSN: 0018-9286
Citations: 13
Bettiol P, Bressan A, Vinter RB, 2011, ESTIMATES FOR TRAJECTORIES CONFINED TO A CONE IN R-n, SIAM JOURNAL ON CONTROL AND OPTIMIZATION, Vol: 49, Pages: 21-41, ISSN: 0363-0129
Singh R, Pal BC, Jabr RA, Vinter RBet al., 2011, Meter Placement for Distribution System State Estimation: An Ordinal Optimization Approach, IEEE Transactions on Power Systems, Vol: 26, Pages: 2328-2335, ISSN: 0885-8950
This paper addresses the problem of meter placement for distribution system state estimation (DSSE). The approach taken is to seek a set of meter locations that minimizes the probability that the peak value of the relative errors in voltage magnitudes and angle estimates across the network exceeds a specified threshold. The proposed technique is based on ordinal optimization and employs exact calculations of the probabilities involved, rather than estimates of these probabilities as used in our earlier work. The use of ordinal optimization leads to a decrease in computational effort without compromising the quality of the solution. The benefits of the approach in terms of reduced estimation errors is illustrated by simulations involving a 95-bus UKGDS distribution network model.
|
CommonCrawl
|
Which graphs embedded in surfaces have symmetries acting transitively on vertex-edge flags?
A vertex-edge flag in a graph is a vertex together with an edge incident to that vertex. Given a graph $\Gamma$ embedded in a compact oriented surface $S$, when does the group of homeomorphisms of $S$ act transitively on vertex-edge flags of $\Gamma$?
We can start by looking a graphs embedded in the sphere, and start among those by looking at graphs that consist of the vertices and edges of a convex polyhedron. Among these, the obvious examples on which isometries act transitively on vertex-edge flags are the 5 Platonic solids. However, there are also two Archimedean solids with this property, namely the cuboctahedron:
and the icosidodecahedron:
I believe these are all the convex polyhedra on which isometries act transitively on vertex-edge flags. A polyhedron is said to be isotoxal if isometries act transitively on edges. There are 9 isotoxal convex polyhedra, and among these I see 7 on which isometries act transtively on vertex-edge flags: the 5 Platonic solids and the 2 shown above.
There are also infinitely many other connected graphs embedded in the sphere on which isometries act transitively on vertex-edge flags, namely the hosohedra, like this:
and the dihedra, like this:
I believe these are all the connected graphs embedded in the sphere on which isometries acts transitively on vertex-edge flags. If we consider graphs embedded in the sphere that are not connected, we get a host of other examples: for example, a bunch of equal-sized regular $n$-gons embedded in the sphere. If we further drop the requirement that the homeomorphisms be isometries, we get even more examples: for example, a bunch of copies of the complete graph on 4 vertices embedded in the sphere. I am happy to restrict attention to connected graphs, to avoid such clutter.
I am also happy to omit graphs that have self-loops.
So, here is a sub-question that interests me: what are all the connected graphs without self-loops embedded in the sphere on which homeomorphisms of the sphere act transitively on vertex-edge flags? Have I listed them all, or are there more?
(Whoops, here are some more: the empty graph, the graph with one vertex and no edges, and the graph with two vertices and one edge connecting them. The last might be considered a degenerate hosohedron.)
When we go to higher genus the situation becomes a lot more complicated and interesting. For example, in genus 3 we have Klein's quartic curve tiled by 56 triangles meeting 7 at each vertex:
So, I'd be perfectly happy to hear classifications that only handle surfaces of genus less than some fixed value.
graph-theory
John Baez
John BaezJohn Baez
8,98411 gold badge4848 silver badges101101 bronze badges
$\begingroup$ A very common term for a vertex-edge flag is an arc. Let me use this for simplicity. You are asking about maps with an arc-transitive group of automorphisms. It's not hard to see that the maximal amount of symmetry a map can have is to be arc-transitive with the arc-stabiliser having order $2$. This is the so-called "regular" case that Noam mentioned. In that case, the vertex-stabiliser is dihedral, of order $2k$, where $k$ is the valency. $\endgroup$ – verret Apr 29 '16 at 20:42
$\begingroup$ The only other option is that the arc-stabiliser is trivial and hence the group acts regularly on arcs. In that case, the vertex-stabiliser acts regularly on the neighbours, so it must be either cyclic of order $k$, or dihedral of order $k$ (in which case $k$ is even). These are all quite well studied objects, although not as much as the regular ones. In particular, they can be defined group-theoretically and, using this, can be enumerated up to a few thousand vertices and quite high genus, 100 say. (See math.auckland.ac.nz/~conder for example) $\endgroup$ – verret Apr 29 '16 at 20:42
$\begingroup$ Isn't the vertex-edge flag graph somehow related to the line digraph? $\endgroup$ – draks ... May 16 '17 at 15:23
It sounds like what you're asking for is close to the notion of "regular map":
Jozef Siran, "Regular Maps on a Given Surface: A Survey", Topics in Discrete Mathematics, 2006. (pdf)
A (connected) map $M$ is regular if its automorphism group acts transitively (and hence regularly) on the set of flags. On the other hand, the definition of "symmetry" you're using in the question is weaker than the standard definition of map automorphism, which in the case of maps embedded on compact oriented surfaces would restrict to orientation-preserving homeomorphisms (that also preserve the graph incidence relationships). This stronger definition rules out the cuboctahedron and icosidodecahedron, and indeed Siran asserts that the only regular maps on the sphere are the five platonic solids, the hosohedra ("$k$-dipoles") and the dihedra ("$k$-cycles") (plus the "semi-stars", if you allow graphs with dangling semi-edges).
Apparently a lot is also known about regular maps in the higher genus case! But since I'm not at all familiar with this, I'll just refer you to Siran's survey.
Noam ZeilbergerNoam Zeilberger
$\begingroup$ Thanks, that's extremely helpful. It's a very readable paper. It only gets at a special case of my question. He does consider orientation-reversing symmetries, and even maps on nonorientable surfaces, but he only studies maps whose symmetries act transitively on vertex-edge-face flags, not more general ones where the symmetries act transitively on vertex-edge flags (or 'arcs'). Nonetheless, this special case is so mathematically natural that it's very fun to read about, and it gives a big pile of examples of what I want. $\endgroup$ – John Baez Apr 30 '16 at 20:57
$\begingroup$ @JohnBaez Glad you found it useful! On v-e-f flags vs v-e flags, Siran's definition of regular map allows for nonorientable regular maps, but in the oriented case it seems natural to me to define regular maps as combinatorial maps $M = (D,v,e)$ such that $Aut(M)$ (respectively $Mon(M) = \langle v,e\rangle$) acts transitively (respectively, freely) on the set of darts $D$. Note that a dart is essentially the same thing as a v-e flag, except in the case of loops. I see that this definition of "oriented regular map" is considered by Roman Nedela here: savbb.sk/~nedela/CMbook.pdf. $\endgroup$ – Noam Zeilberger May 2 '16 at 9:56
$\begingroup$ @JohnBaez On the other hand, this definition of "oriented regular map" would still rule out the cuboctahedron and the icosidodecahedron, since, for example, any automorphism of the cuboctahedron-as-combinatorial-map could not send a dart bordering a square to its left to a dart bordering a triangle to its left. $\endgroup$ – Noam Zeilberger May 2 '16 at 9:57
Not the answer you're looking for? Browse other questions tagged graph-theory or ask your own question.
when is the independence number of a graph equals the clique cover number
Are vertex and edge-transitive graphs determined by their spectrum?
Ways to look at a polyhedral graph
regular polyhedra (and polytopes) in hyperbolic geometry, and generalisations
Convex polyhedra, combinatorial types and Symmetry
Which is the most time efficient algorithm for having a Tait Coloring (edge-3-coloring) of planar cubic graphs?
Lifting of a spherical graph
Can the graph of a symmetric polytope have more symmetries than the polytope itself?
Are there half-transitive convex polytopes?
Which cubic graphs can be orthogonally embedded in $\mathbb R^3$?
|
CommonCrawl
|
Construction of 3-designs using $(1,\sigma)$-resolution
AMC Home
A note on Erdős-Ko-Rado sets of generators in Hermitian polar spaces
August 2016, 10(3): 525-540. doi: 10.3934/amc.2016023
Construction of subspace codes through linkage
Heide Gluesing-Luerssen 1, and Carolyn Troha 2,
University of Kentucky, Department of Mathematics, Lexington, KY 40506-0027
Department of Mathematics, Viterbo University, La Crosse, WI, United States
Received May 2015 Revised September 2015 Published August 2016
A construction is discussed that allows to produce subspace codes of long length using subspace codes of shorter length in combination with a rank metric code. The subspace distance of the resulting linkage code is as good as the minimum subspace distance of the constituent codes. As a special application, the construction of the best known partial spreads is reproduced. Finally, for a special case of linkage, a decoding algorithm is presented which amounts to decoding with respect to the smaller constituent codes and which can be parallelized.
Keywords: constant dimension subspace codes, partial spreads., Random network coding.
Mathematics Subject Classification: Primary: 11T71, 94B60; Secondary: 51E2.
Citation: Heide Gluesing-Luerssen, Carolyn Troha. Construction of subspace codes through linkage. Advances in Mathematics of Communications, 2016, 10 (3) : 525-540. doi: 10.3934/amc.2016023
A. Beutelspacher, Partial spreads in finite projective spaces and partial designs,, Math. Zeit., 145 (1975), 211. Google Scholar
M. Braun, T. Etzion, P. Östergård, A. Vardy and A. Wassermann, Existence of $q$-analogs of Steiner systems,, Forum Math. PI, (). Google Scholar
M. Braun and J. Reichelt, $q$-analogs of packing designs,, J. Comb. Designs, 22 (2014), 306. doi: 10.1002/jcd.21376. Google Scholar
T. Bu, Partitions of a vector space,, Discr. Math., 31 (1980), 79. doi: 10.1016/0012-365X(80)90174-0. Google Scholar
J. de la Cruz, M. Kiermaier, A. Wassermann and W. Willems, Algebraic structures of MRD Codes,, Adv. Math. Commun., 10 (2016), 499. doi: 10.3934/amc.2016021. Google Scholar
P. Delsarte, Bilinear forms over a finite field, with applications to coding theory,, J. Combin. Theory Ser. A, 25 (1978), 226. doi: 10.1016/0097-3165(78)90015-8. Google Scholar
D. A. Drake and J. W. Freeman, Partial $t$-spreads and group constructible $(s,r,\mu)$-nets,, J. Geom., 13 (1979), 210. doi: 10.1007/BF01919756. Google Scholar
S. El-Zanati, H. Jordon, G. Seelinger, P. Sissokho and L. Spence, The maximum size of a partial $3$-spread in a finite vector space over $\mathbb F_2$, Des. Codes Cryptogr., 54 (2010), 101. doi: 10.1007/s10623-009-9311-1. Google Scholar
A. Elsenhans, A. Kohnert and A. Wassermann, Construction of codes for network coding,, in Proc. 19th Int. Symp. Math. Theory Netw. Syst., (2010), 1811. Google Scholar
T. Etzion and N. Silberstein, Error-correcting codes in projective spaces via rank-metric codes and Ferrers diagrams,, IEEE Trans. Inform. Theory, IT-55 (2009), 2909. doi: 10.1109/TIT.2009.2021376. Google Scholar
T. Etzion and N. Silberstein, Codes and designs related to lifted MRD codes,, IEEE Trans. Inform. Theory, IT-59 (2013), 1004. doi: 10.1109/TIT.2012.2220119. Google Scholar
T. Etzion and A. Vardy, Error-correcting codes in projective space,, IEEE Trans. Inform. Theory, IT-57 (2011), 1165. doi: 10.1109/TIT.2010.2095232. Google Scholar
E. M. Gabidulin, Theory of codes with maximal rank distance,, Probl. Inf. Transm., 21 (1985), 1. Google Scholar
E. M. Gabidulin and M. Bossert, Algebraic codes for network coding,, Probl. Inf. Trans. (Engl. Transl.), 45 (2009), 343. doi: 10.1134/S003294600904005X. Google Scholar
E. M. Gabidulin and N. I. Pilipchuk, Rank subcodes in multicomponent network coding,, Probl. Inf. Trans. (Engl. Transl.), 49 (2013), 40. doi: 10.1134/S0032946013010043. Google Scholar
E. M. Gabidulin, N. I. Pilipchuk and M. Bossert, Decoding of random network codes,, Probl. Inf. Trans. (Engl. Transl.), 46 (2010), 300. doi: 10.1134/S0032946010040034. Google Scholar
H. Gluesing-Luerssen, K. Morrison and C. Troha, Cyclic orbit codes and stabilizer subfields,, Adv. Math. Commun., 9 (2015), 177. doi: 10.3934/amc.2015.9.177. Google Scholar
E. Gorla, F. Manganiello and J. Rosenthal, An algebraic approach for decoding spread codes,, Adv. Math. Commun., 6 (2012), 443. doi: 10.3934/amc.2012.6.443. Google Scholar
E. Gorla and A. Ravagnani, Partial spreads in random network coding,, Finite Fields Appl., 26 (2014), 104. doi: 10.1016/j.ffa.2013.11.007. Google Scholar
B. S. Hernandez and V. P. Sison, Grassmannian codes as lifts of matrix codes derived as images of linear block codes over finite fields,, Global J. Pure Appl. Math., 12 (2016), 1801. Google Scholar
T. Honold, M. Kiermaier and S. Kurz, Optimal binary subspace codes of length $6$, constant dimension $3$ and minimum subspace distance $4$,, Contemp. Math., 632 (2015), 157. doi: 10.1090/conm/632/12627. Google Scholar
A. Khaleghi, D. Silva and F. R. Kschischang, Subspace codes,, in Proc. 12th IMA Conf. Crypt. Coding, (2009), 1. doi: 10.1007/978-3-642-10868-6_1. Google Scholar
R. Koetter and F. R. Kschischang, Coding for errors and erasures in random network coding,, IEEE Trans. Inform. Theory, IT-54 (2008), 3579. doi: 10.1109/TIT.2008.926449. Google Scholar
A. Kohnert and S. Kurz, Construction of large constant dimension codes with a prescribed minimum distance,, in Mathematical Methods in Computer Science (eds. J. Calmet, (2008), 31. doi: 10.1007/978-3-540-89994-5_4. Google Scholar
J. Rosenthal and A.-L. Trautmann, A complete characterization of irreducible cyclic orbit codes and their Plücker embedding,, Des. Codes Cryptogr., 66 (2013), 275. doi: 10.1007/s10623-012-9691-5. Google Scholar
N. Silberstein and A.-L. Trautmann, Subspace codes based on graph matchings, Ferrers diagrams, and pending blocks,, IEEE Trans. Inform. Theory, IT-61 (2015), 3937. doi: 10.1109/TIT.2015.2435743. Google Scholar
D. Silva and F. R. Kschischang, On metrics for error correction in network coding,, IEEE Trans. Inform. Theory, IT-55 (2009), 5479. doi: 10.1109/TIT.2009.2032817. Google Scholar
D. Silva, F. R. Kschischang and R. Kötter, A rank-metric approach to error control in random network coding,, IEEE Trans. Inform. Theory, IT-54 (2008), 3951. doi: 10.1109/TIT.2008.928291. Google Scholar
V. Skachek, Recursive code construction for random networks,, IEEE Trans. Inform. Theory, IT-56 (2010), 1378. doi: 10.1109/TIT.2009.2039163. Google Scholar
A.-L. Trautmann, F. Manganiello, M. Braun and J. Rosenthal, Cyclic orbit codes,, IEEE Trans. Inform. Theory, IT-59 (2013), 7386. doi: 10.1109/TIT.2013.2274266. Google Scholar
A.-L. Trautmann and J. Rosenthal, New improvements on the Echelon Ferrers construction,, in Proc. 19th Int. Symp. Math. Theory Netw. Syst., (2010), 405. Google Scholar
Thomas Honold, Michael Kiermaier, Sascha Kurz. Constructions and bounds for mixed-dimension subspace codes. Advances in Mathematics of Communications, 2016, 10 (3) : 649-682. doi: 10.3934/amc.2016033
Anna-Lena Trautmann. Isometry and automorphisms of constant dimension codes. Advances in Mathematics of Communications, 2013, 7 (2) : 147-160. doi: 10.3934/amc.2013.7.147
Natalia Silberstein, Tuvi Etzion. Large constant dimension codes and lexicodes. Advances in Mathematics of Communications, 2011, 5 (2) : 177-189. doi: 10.3934/amc.2011.5.177
Roland D. Barrolleta, Emilio Suárez-Canedo, Leo Storme, Peter Vandendriessche. On primitive constant dimension codes and a geometrical sunflower bound. Advances in Mathematics of Communications, 2017, 11 (4) : 757-765. doi: 10.3934/amc.2017055
Daniele Bartoli, Matteo Bonini, Massimo Giulietti. Constant dimension codes from Riemann-Roch spaces. Advances in Mathematics of Communications, 2017, 11 (4) : 705-713. doi: 10.3934/amc.2017051
Ernst M. Gabidulin, Pierre Loidreau. Properties of subspace subcodes of Gabidulin codes. Advances in Mathematics of Communications, 2008, 2 (2) : 147-157. doi: 10.3934/amc.2008.2.147
Keisuke Minami, Takahiro Matsuda, Tetsuya Takine, Taku Noguchi. Asynchronous multiple source network coding for wireless broadcasting. Numerical Algebra, Control & Optimization, 2011, 1 (4) : 577-592. doi: 10.3934/naco.2011.1.577
Min Ye, Alexander Barg. Polar codes for distributed hierarchical source coding. Advances in Mathematics of Communications, 2015, 9 (1) : 87-103. doi: 10.3934/amc.2015.9.87
Daniel Heinlein, Sascha Kurz. Binary subspace codes in small ambient spaces. Advances in Mathematics of Communications, 2018, 12 (4) : 817-839. doi: 10.3934/amc.2018048
Frédéric Vanhove. A geometric proof of the upper bound on the size of partial spreads in $H(4n+1,$q2$)$. Advances in Mathematics of Communications, 2011, 5 (2) : 157-160. doi: 10.3934/amc.2011.5.157
Michael Braun. On lattices, binary codes, and network codes. Advances in Mathematics of Communications, 2011, 5 (2) : 225-232. doi: 10.3934/amc.2011.5.225
Stefan Martignoli, Ruedi Stoop. Phase-locking and Arnold coding in prototypical network topologies. Discrete & Continuous Dynamical Systems - B, 2008, 9 (1) : 145-162. doi: 10.3934/dcdsb.2008.9.145
Giuseppe Bianchi, Lorenzo Bracciale, Keren Censor-Hillel, Andrea Lincoln, Muriel Médard. The one-out-of-k retrieval problem and linear network coding. Advances in Mathematics of Communications, 2016, 10 (1) : 95-112. doi: 10.3934/amc.2016.10.95
T. Jäger. Neuronal coding of pacemaker neurons -- A random dynamical systems approach. Communications on Pure & Applied Analysis, 2011, 10 (3) : 995-1009. doi: 10.3934/cpaa.2011.10.995
Washiela Fish, Jennifer D. Key, Eric Mwambene. Partial permutation decoding for simplex codes. Advances in Mathematics of Communications, 2012, 6 (4) : 505-516. doi: 10.3934/amc.2012.6.505
Al-hassem Nayam. Constant in two-dimensional $p$-compliance-network problem. Networks & Heterogeneous Media, 2014, 9 (1) : 161-168. doi: 10.3934/nhm.2014.9.161
Łukasz Struski, Jacek Tabor, Tomasz Kułaga. Cone-fields without constant orbit core dimension. Discrete & Continuous Dynamical Systems - A, 2012, 32 (10) : 3651-3664. doi: 10.3934/dcds.2012.32.3651
Deena Schmidt, Janet Best, Mark S. Blumberg. Random graph and stochastic process contributions to network dynamics. Conference Publications, 2011, 2011 (Special) : 1279-1288. doi: 10.3934/proc.2011.2011.1279
Gokhan Calis, O. Ozan Koyluoglu. Architecture-aware coding for distributed storage: Repairable block failure resilient codes. Advances in Mathematics of Communications, 2018, 12 (3) : 465-503. doi: 10.3934/amc.2018028
Heide Gluesing-Luerssen Carolyn Troha
|
CommonCrawl
|
Relation between the Glauber correlation functions and statistical correlation
As stated in Wikipedia's page:
Correlation or dependence is any statistical relationship, whether causal or not, between two random variables.
Now in "The Quantum Theory of Optical Coherence" Glauber introduces a set of functions he says quantifies correlations:
The field average (3.3) which determines the counting rate of an ideal photodetector is a particular form of a more general type of expression whose properties are of considerable interest. In the more general expression, the fields $E^{(-)}$ and $E^{(+)}$ are evaluated at different spacetime points. Statistical averages of the latter type furnish a measure of the correlations of the complex fields at separated positions and times. We shall define such a correlation function, $G^{(1)}$, for the $\mathbf{e}$ components of the complex fields as $$G^{(1)}(\mathbf{r},t;\mathbf{r}',t')=\operatorname{tr}\left\{\rho E^{(-)}(\mathbf{r},t)E^{(+)}(\mathbf{r}',t')\right\}\tag{3.6}$$
He then more generaly defines the $n$-th order correlation functions as a function of $2n$ spacetime points $x_1,\dots, x_{2n}$:
$$G^{(n)}(x_1,\dots, x_n;x_{n+1},\dots, x_{2n})=\operatorname{tr}\left\{\rho E^{(-)}(x_1)\cdots E^{(-)}(x_n)E^{(+)}(x_{n+1})\cdots E^{(+)}(x_{2n})\right\}\tag{3.8}$$
In all the above, $\rho$ is the quantum state of the electromagnetic field.
My question here is the following: why these $G^{(n)}$ are called correlation functions? How they are connected with the statistical idea of correlation?
I imagine in some sense they should quantify some sort of statistical relationship between measurements associated to photons at each of the spacetime points (detection of the photons, perhaps?), but this is an impression because of the name. I want to understand really why these $G^{(n)}$ functions quantify correlations and what are the correlations they are quantifying.
quantum-information
quantum-optics
correlation-functions
33k1616 gold badges9090 silver badges224224 bronze badges
The $G^{(n)}$ are the expectation values of the products of $2n$ of the $E(x_i)$ variables, since $\mathrm{Tr}(\rho X)$ is how one writes the expectation value for $X$ in a mixed state with density matrix $\rho$. In particular $G^{(1)}$ is the expectation value of a product of two random variables. Since we can usually set the expectation value of the fields to zero by adding constants, this is the same as the covariance of the two random variables. The covariance is "unnormalized correlation", the common Pearson correlation coefficient is merely the covariance divided by the standard deviations of the two variables.
For some reason, it has become common in physical applications of statistical methods to not sharply distinguish between "covariance" and "correleation", presumably since both are a measure of how "related" the values of the random variables are. (If you are interested in the history of this "sloppiness" this would seem to fall to History of Science and Mathematics rather than out site.)
So the "correlation functions" really are straightforward measures of (unnormalized) correlation coefficients between the random variables $E^{\pm}(x_i)$ in the standard sense of statistics.
ACuriousMind♦ACuriousMind
$\begingroup$ Thanks @ACuriousMind. There's something I think I'm still missing. I know that given two random variables one may quantify linear correlations with the Pearson correlation coefficient. In the setting of the question this would be $G^{(1)}(x,y)=\langle E^+(x)E^-(y)\rangle$. But the $G^{(n)}(x_1,\dots x_n,x_{n+1},\dots, x_{2n})$ involves $2n$ random variables. What is the interpretation then? Isn't correlation something associated to just a pair of random variables? Or it is some generalization of the Pearson approach to take into account higher order correlations? $\endgroup$
$\begingroup$ @user1620696 You're right that the standard Pearson coefficient is only defined for two variables and the quantitative relation of the expectation value of the product to the correlation of the variables is less clear in the case of more variables, but it is still true that if the variables are all independent, the covariance vanishes, so this still at least qualitatively measures their dependence/correlation. $\endgroup$
– ACuriousMind ♦
Correlation functions and connection to ward identities
How to interpret correlation functions in QFT?
Algebraic QFT has some way to deal with Haag's theorem and the interaction picture?
Correlation function late time decay and information loss
Constraints on correlation functions of Quasi Primary Fields
|
CommonCrawl
|
Interesting ML Techniques
Part 1: Deep Representations, a way towards neural style transfer
Artistic style transfer is an algorithm proposed by Gatys et al. In A Neural Algorithm of Artistic Style, the authors talk about the difficulties in segregating the content and style of an image. The content of an image refers to the discernible objects in an image. The style of an image, on the other hand, refers to the abstract configurations of the elements in the image that make it unique. The style and content segregation is difficult because of the unavailability of representations that hold the semantic understanding of images. Now, due to the advancement of convolutional neural networks, such semantic representations are possible.
*Aritra + Pablo = :art: *
This is part one of the two. This report will be structured as follows:
Understanding deep image representations by inverting them.
Normalized VGG16
Content representations
The second part will talk about an image's style and how to extract it from an image.
Understanding Deep Image Representation by Inverting Them
This is the title of a research paper by Aravindh Mahendran et al. We think this paper is the root cause of artistic style transfer. In this paper, the authors dive deep into the interpretability of visual models. Their novelty lies in an algorithm that helps visualize deep image representation form a convolution neural network.
Idea: For any given task, a convolutional neural network builds representations of images in the intermediate layers. These representations stem from the different weights of filter kernels. The authors pose a pretty basic question and answer it themselves, Given an encoding of an image, to which extent is it possible to reconstruct the image itself?
They take a pre-trained CNN, VGG16 trained on ImageNet, forward propagate an Image $(x)$, and extract all the intermediate activation maps from the model. The activation maps of the image $(X)$ can be considered to be the encoding of that image. They also forward propagate a White Noise Image $(a)$ through the model and extract its activation maps $(A)$. They argue that, if the loss between the activation maps of the original image and the white noise image is minimized by updating the white noise image pixels, the white noise image would resemble the original image.
The objective function becomes pretty simple: $ \downarrow (A-X) $
Convolutional layer with its activation maps and filter kernels
A note for the readers: In deep learning, we update the weights of neurons such that the modeling of input space helps the objective function. In this paper, the idea of updating the input itself makes it quite mesmerizing.
The authors visualize every convolutional layer, extract the Image and the White Noise Image's activation maps, and back-propagate the loss between the maps. The back-propagation does not update the weights of the filters but helps the white noise image update. The updated White Noise Image should give us the image that results in the activation maps of the original Image's chosen convolutional layer.
Checkout the code on GitHub
This section deals with us explaining an excerpt from the Artistic Style Transfer paper.
We used the feature space provided by a normalized version of the 16 convolutional and 5 pooling layers of the 19-layer VGG network. We normalized the network by scaling the weights such that the mean activation of each convolutional filter over images and positions is equal to one. Such re-scaling can be done for the VGG network without changing its output because it contains only rectifying linear activation functions and no normalization or pooling over feature maps.
VGG16 architecture does not house any normalization layers. This has a downside; the activation maps are ReLU activated but do not have an upper bound to it. This means that the loss between the activation maps cannot be constrained to any range, which has an adverse effect on back-propagation and optimization. The authors devise a relatively simple scheme to normalize the architecture by scaling the weights themselves.
This StackOverflow thread sheds some light on how to normalize the weights. It is quite simple, a set of images are taken and forward propagated through the model. The activation maps are stored, and the mean is gathered. The weights and biases are then divided by this mean so that the activation provides normalized activation maps. A comment down at the bottom suggests that the paper's authors have normalized the model by using ImageNet's validation images.
A smart catch comes from this StackOverflow thread where it is noticed that dividing by the activation mean does not sound right. The problem that we would face if weights and biases are merely normalized with the immediate activation means is that the input distribution to the next layers would disrupt. This calls for a method that takes care of the joint normalization of the weights and inputs. We have taken help from this repository. Here the weights are first multiplied to the mean of the previous convolutional layer's mean and then are normalized with the mean of its activation maps. All of the concept of normalization, boils down to these three lines of code.
# conv layer weights layout is: (dim1, dim2, in_channels, out_channels)
W *= prev_conv_layer_means[np.newaxis, np.newaxis, : , np.newaxis]
b /= means
W /= means[np.newaxis, np.newaxis, np.newaxis, :]
Checkout the code in GitHub
Content Representation
The algorithm of content representation of an image comes directly from the paper Understanding deep image representation by inverting them. We have already laid the foundations of the process in a section above. Here we will understand and visualize the steps.
We will take an Image $(x)$ from which content needs to be extracted. We take another image $(a)$ that is a White Noise Image. The objective of our algorithm is to extract the content from $x$ and imprint it upon $a$. The best part about this idea is that the authors treat the problem as an optimization problem.
The optimization probelm: A layer with $N_{l}$ distinct filters has $N_{l}$ feature maps each of size $M_{l}$, where $M_{l}$ is the height times the width of the feature map. So the responses in a layer $l$ can be stored in a matrix $F_{l} \in \ \Re ^{N_{l} \times M_{l}}$ where $F_{ij}^{l}$ is the activation of the $i^{th}$ filter at position $j$ in layer $l$. Let us consider a layer $l$ and forward propagate the two images $x$ and $a$, the Image and the White Noise Image respectively. The activation maps at the layer $l$ are $X^{l}$ and $A^{l}$. We can define the squared error loss between the two as $ L(a,x,l) = \sum {ij}\left( A^{l}{ij} \ -\ X^{l}_{ij}\right)^{2} $ Here lie the optimization criteria. With an optimizer in place, this loss is considered to be minimized. As mentioned in a previous section, the loss is optimized by updating the White Noise Image $a$.
We calculate the loss function's derivative $L(x, a, l)$ with respect to the activation map of the White Noise Image $ A $. $ \frac{\partial L( a,x,l)}{\partial A^{l}_{ij}} $ With this in hand, we can easily back-propagate this derivation and then finally update the White Noise Image $a$. This process can be applied to every layer $l$ of the network at hand. In our experiments, we take the pre-trained VGG16 normalized model. We run our experiments on the conv1 layers of each block of the model. Below we see the transformation of the White Noise Images $a$ to the content Image $x$.
This section is very dear to us. This is not taken from any of the papers mentioned above. It is a set of elementary experiments that came naturally to us. What if we used another image instead of the white noise image? With this statement in mind, we went ahead and chose the style image and tried to imprint the content image on it. The same mathematics applies here as well. The mean squared error between the two images is optimized, and the content of the content image is imprinted on the style image.
How so ever luring it might seem, but this is not Artistic Image Style Transfer. We have not segregated the style of the Style Image. Instead, we have just imprinted the content of the Content Image. Amalgamation is the term we chose in order to make the reader understand the due process. This experiment results in the superimposition of the content of one image on the other image.
The first part deals with the content representations and the way we can visualize the embeddings of a convolutional neural network. We have kept this report as intuitive as possible for the readers to be creative about the process. We would like for you to figure a way out to harness the style information from an image.
In the next part of the report, we will be writing about the style representations. The way where we can understand the problem of texture transfer in the realm of an optimization problem too. We would be more than happy to get your feedback on this report.
Reach the authors
Aritra Roy Gosthipaty @ariG23498 @ariG23498
Devjyoti Chakraborty @cr0wley-zz @Cr0wley_zz
|
CommonCrawl
|
International Journal of Science and Mathematics Education
May 2016 , Volume 14, Issue 4, pp 757–776 | Cite as
Hierarchical Levels of Abilities that Constitute Fraction Understanding at Elementary School
Aristoklis A. Nicolaou
Demetra Pitta-Pantazi
This article examines whether the 7 abilities found in a previous study carried out by the authors to constitute fraction understanding of sixth grade elementary school students determine hierarchical levels of fraction understanding. The 7 abilities were as follows: (a) fraction recognition, (b) definitions and mathematical explanations for fractions, (c) argumentations and justifications about fractions, (d) relative magnitude of fractions, (e) representations of fractions, (f) connections of fractions with decimals, percentages, and division, and (g) reflection during the solution of fraction problems. The sample comprised of 182 sixth grade students that were clustered into 3 categories by means of latent class analysis: those of low fraction understanding, those of medium fraction understanding, and those of high fraction understanding. It was found that low fraction understanding students were sufficient in fraction recognition and relative magnitude of fractions, those belonging to the medium category in fraction recognition, relative magnitude of fractions, as well as in connections with decimals, percentages and division and representations of fractions, while high fraction understanding students were sufficient in all 7 abilities. It was also found that these levels were stable across time; the hierarchical levels were the same across three measurements that took place. Possible implications for fraction understanding are discussed, and directions for future research are drawn.
Elementary school Fraction understanding Hierarchical levels Sixth grade Students' abilities
Items used to measure each ability:
Fraction recognition
One of the following fractions differs from the others. Find that fraction and circle it.
$$ \begin{array}{ccccccccccccc}\hfill \frac{2}{7}\hfill & \hfill \hfill & \hfill \hfill & \hfill \frac{3}{2}\hfill & \hfill \hfill & \hfill \hfill & \hfill \frac{14}{49}\hfill & \hfill \hfill & \hfill \hfill & \hfill \frac{10}{35}\hfill & \hfill \hfill & \hfill \hfill & \hfill \frac{4}{14}\hfill \end{array} $$
Definitions and mathematical explanations for fractions
How many fractions exist in the interval 0–1. Explain your answer.
(Niemi, 1996c)
Argumentations and justifications about fractions
Task 11
For each of the following statements, circle T if it is true or F if it is false. You must also justify your answer. You should definitely justify your answer.
If I double both the numerator and the denominator of a fraction, then the formed fraction will have double the value of the initial one.
Justification:
(Lamon, 1999)
Reflection during the solution of fraction problems
The following diagram represents a machine that outputs \( \frac{2}{3} \) of the input quantity. What is the input quantity if the output quantity is equal to 12? You should definitely reason about your thinking and your answer.
(Charalambous & Pitta-Pantazi, 2007; Lamon, 1999)
Relative magnitude of fractions
Order the fractions \( \frac{1}{2} \), \( \frac{4}{3} \), \( \frac{2}{3} \), and \( \frac{1}{4} \) starting from the smallest one.
(Clarke & Roche, 2009)
Representations of fractions
Marinos ate \( \frac{1}{2} \) of a cake and Marina \( \frac{3}{8} \) of the same cake. Construct a drawing to show what part each child ate and what part the two children ate together.
(Gagatsis et al., 2001)
Connections of fractions with decimals, percentages, and division
Decide whether the following statements are correct or incorrect and circle C if they are correct and I if they are incorrect.
\( \frac{2}{3} \) is the quotient of the division 2 ÷ 3.
\( \frac{12}{7} \) is the quotient of the division 7 ÷ 12.
Three pizzas were evenly shared among six children. Each child got \( \frac{3}{6} \) of the pizza.
(Kieren, 1993)
Behr, M. J., Lesh, R., Post, T. R. & Silver, E. A. (1983). Rational number concepts. In R. Lesh & M. Landau (Eds.), Acquisition of mathematics concepts and processes (pp. 91–126). New York: Academic. New York, NY.Google Scholar
Charalambous, C. Y. & Pitta-Pantazi, D. (2007). Drawing on a theoretical model to study students' understandings of fractions. Educational Studies in Mathematics, 64, 293–316. doi: 10.1007/s10649-006-9036-2.CrossRefGoogle Scholar
Clarke, D. M. & Roche, A. (2009). Students' fraction comparison strategies as a window into robust understanding and possible pointers for instruction. Educational Studies in Mathematics, 72(1), 127–138. doi: 10.1007/s10649-009-9198-9.CrossRefGoogle Scholar
Cyprus National Mathematics Curriculum (2010). Mathematics curriculum in Cyprus for grades K-12. Nicosia, Cyprus: Lithostar. Retrieved 10 March, 2013, from http://www.schools.ac.cy/klimakio/Themata/Mathimatika/analytika_programmata/ektenes_programma_mathimatika.pdf.
Deliyianni, E. & Gagatsis, A. (2013). Tracing the development of representational flexibility and problem solving in fraction addition: A longitudinal study. Educational Psychology, 33(4), 427–442. doi: 10.1080/01443410.2013.765540.CrossRefGoogle Scholar
Duval, R. (2006). A cognitive analysis of problems of comprehension in a learning of mathematics. Educational Studies in Mathematics, 61, 103–131. doi: 10.1007/s10649-006-0400-z.CrossRefGoogle Scholar
Gagatsis, A., Michaelidou, E. & Shiakalli, M. (2001). Representational theories and the learning of mathematics. Nicosia: ERASMUS IP 1. Nicosia, Cyprus.Google Scholar
Kieren, T. E. (1976). On the mathematical, cognitive, and instructional foundations of rational numbers. In R. Lesh (Ed.), Number and Measurement: Papers from a Research Workshop ERIC/SMEAC (pp. 101–144). Columbus, OH.Google Scholar
Kieren, T. E. (1993). Rational and fractional numbers: From quotient fields to recursive understanding. In T. P. Carpenter, E. Fennema & T. A. Romberg (Eds.), Rational numbers: An integration of research (pp. 49–84). NJ: Erlbaum. Hillsdale, NJ.Google Scholar
Lamon, S. J. (1999). Teaching fractions and ratios for understanding. Mahwah, New Jersey: Lawrence Erlbaum Associates. Mahwah, NJ.Google Scholar
Lamon, S. J. (2012). Teaching fractions and ratios for understanding: Essential content knowledge and instructional strategies for teachers (3rd ed). New York, NY:Routledge.Google Scholar
Marcoulides, G. A. & Kyriakides, L. (2010). Structural equation modelling techniques. In B. Creemers, L. Kyriakides & P. Sammons (Eds.), Methodological advances in educational effectiveness research (pp. 277–303). London and New York: Routledge.Google Scholar
National Council of Teachers of Mathematics (2000). Principles and standards for school mathematics. Reston, VA: Author.Google Scholar
Newstead, K. & Murray, H. (1998). Young students' constructions of fraction. Proceedings of the 22nd Conference of the International Group for Psychology of Mathematics Education. Stellenbosch, South Africa: University of Stellenbosch.Google Scholar
Nicolaou, A. & Pitta-Pantazi, D. (2011a). Factors that constitute understanding a mathematical concept at the elementary school: Fractions as the concept of reference. Article presented at the 4th Conference of the Union of Greek Researchers in Mathematics Education (pp. 351–361). University of Ioannina, Ioannina.Google Scholar
Nicolaou, A. A. & Pitta-Pantazi, D. (2011b). A theoretical model for understanding fractions at elementary school. Proceedings of the 7th Conference of the European Society of Mathematics Education (pp. 366–375). Univeristy of Rzeszow, Poland.Google Scholar
Niemi, D. (1996a). A fraction is not a piece of pie: Assessing exceptional performance and deep understanding in elementary school mathematics. Gifted Child Quarterly, 40(2), 70–80.CrossRefGoogle Scholar
Niemi, D. (1996b). Assessing conceptual understanding in mathematics: Representations, problem solutions, justifications and explanations. Journal of Educational Research, 89(6), 351–363.CrossRefGoogle Scholar
Niemi, D. (1996c). Instructional influences on content area explanations and representational knowledge: Evidence for the construct validity of measures of principled understanding. CSE Technical Report 403. CRESST/University of California, Los Angeles.Google Scholar
Oppenheimer, L. & Hunting, R. P. (1999). Relating fractions & decimals: listening to students talk. Mathematics Teaching in the Middle School, 4(5), 318–321.Google Scholar
Pantziara, M. & Philippou, G. (2012). Levels of students' "conception" of fractions. Educational Studies in Mathematics, 79, 61–83. doi: 10.1007/s10649-011-9338-x.CrossRefGoogle Scholar
Pirie, S. & Kieren, T. (1994). Growth in mathematical understanding: How can we characterize it and how can we represent it? Educational Studies in Mathematics, 26, 165–190.CrossRefGoogle Scholar
Scaptura, C., Suh, J. & Mahaffey, G. (2007). Masterpieces to mathematics: Using art to teach fraction, decimal, and percent equivalents. Mathematics Teaching in the Middle School, 13(1), 24–28.Google Scholar
Sfard, A. (1991). On the dual nature of mathematical conceptions: Reflections on processes and objects as different sides of the same coin. Educational Studies in Mathematics, 22, 1–36.CrossRefGoogle Scholar
Streefland, L. (1991). Fractions in realistic mathematics education: A paradigm of developmental research. Dordrecht, The Netherlands: Kluwer.Google Scholar
Stylianides, A. J. (2007). The notion of proof in the context of elementary school mathematics. Educational Studies in Mathematics, 65, 1–20. doi: 10.1007/s10649-006-9038-0.CrossRefGoogle Scholar
Vamvakoussi, X. & Vosniadou, S. (2010). How many decimals are there between two fractions? Aspects of secondary school students' understanding of rational numbers and their notation. Cognition and Instruction, 28(2), 181–209. doi: 10.1080/07370001003676603.CrossRefGoogle Scholar
Whitin, D. J. & Whitin, P. (2012). Making sense of fractions and percentages. Teaching Children Mathematics, 18(8), 490–496.CrossRefGoogle Scholar
© Ministry of Science and Technology, Taiwan 2014
1.Ministry of EducationLimassolCyprus
2.Department of EducationUniversity of CyprusLimassolCyprus
Nicolaou, A.A. & Pitta-Pantazi, D. Int J of Sci and Math Educ (2016) 14: 757. https://doi.org/10.1007/s10763-014-9603-4
Accepted 19 November 2014
Publisher Name Springer Netherlands
|
CommonCrawl
|
Geometry and Physics
Uppsala University Department of Physics ... Research Theoretical Physics People Henrik Johansson
Henrik Johansson
Uppsala university, Box 516
SE-75120 Uppsala
phone: +46(0)18 471 3243
e-mail: [email protected]
My research interests are in quantum field theory and supergravity, with a focus on formal aspects of scattering amplitudes in these theories. Scattering amplitudes can be used as a powerful tool to understand hidden symmetries and remarkable relations between different classes of theories. My work has led to the realization that a generic gravity theory can be formally understood as a product of two gauge theories.
KAW project – From Scattering Amplitudes to Gravitational Waves
This project will develop new methods for precise calculations at the forefront of theoretical
physics, ranging from scattering processes in quantum field theory to gravitational wave
emission, by using the Bern-Carrasco-Johansson (BCJ) double-copy framework, that
connects gauge, gravity and string theories. The project will involve cooperation between
the Division of Theoretical Physics at Uppsala University, and the Nordic Institute for Theoretical Physics (Nordita) that is located in Stockholm.
The project consists of five semi-independent parts:
Develop new methods for gauge, gravity and string theory scattering amplitudes
Simplify perturbative GR: potentials, black-hole mergers and gravitational waves
Advance integration techniques for loop amplitudes and classical gravity
Understand the origins of color-kinematics duality and the double copy
Extend the double copy to curved spaces
Background: In one of my papers from 2008, we introduce the notion of a duality between kinematical quantities (spacetime quantities) and color quantities (internal space quantities). In this framework gauge theories are organized as a specific product of two copies of Lie algebras, one for the color degrees of freedom and one for the kinematical degrees of freedom. Gravitational theories are analogously organized as a double copy of the kinematical Lie algebras. This is most transparent for S-matrix elements, where this powerful structure has been used for amplitude calculations up to the fifth loop order in certain supersymmetric gauge and gravity theories.
There is by now a growing list of theories where the duality and double-copy structures have been observed; it includes: pure (super-)Yang-Mills theories, pure (super)gravities, QCD and its supersymmetric extensions, Yang-Mills-Einstein (super)gravities, the nonlinear sigma model (NLSM), Born-Infeld theories and also string theory. Gauge and gravity theories are now more closely linked to each other than ever before, but even effective theories that have no gauge symmetry fit into the new picture. New connections to string theory have also emerged out of this structure: heterotic/closed string theories obeys color-kinematics duality and open string theories are double copies of simpler objects.
When the LIGO experiment in September 2015 discovered the first gravitational waves from binary black holes—which awarded them the 2017 Nobel Prize in Physics—a new window for observations of the universe opened up. In order to fully utilize this new opportunity, both theoretical calculation methods and experiments are expected to undergo significant upgrades in the future. Recent initial studies have convincingly demonstrated that the BCJ double-copy method is able to reproduce low-order binary black-hole dynamics and associated gravitational-wave emissions at significantly reduced computational cost compared to standard methods, and as such it has the potential to revolutionize analytical calculations of gravitational waves.
Mathematical Methods of Physics QNV VT-15
Mathematical Methods of Physics F VT-16
Analytical Mechanics HT-16
The Duality Between Color and Kinematics and its Applications
arXiv:1909.01358
CERN-TH-2019-135
UCLA/TEP/2019/104
NUHEP-TH/19-11
UUITP-35/19, NORDITA 2019-079
by: Bern, Zvi (UCLA) et al.
This review describes the duality between color and kinematics and its applications, with the aim of gaining a deeper understanding of the perturbative structure of gauge and gravity theories. We emphasize, in particular, applications to loop-level calculations, the broad web of theories linked by the duality and the associated double-copy structure, and the issue of extending the duality and do...
Double copy for massive quantum particles with spin
UUITP-24/19
NORDITA 2019-070
JHEP 1909 (2019) 040
by: Johansson, Henrik (Uppsala U.) et al.
The duality between color and kinematics was originally observed for purely adjoint massless gauge theories, and later found to hold even after introducing massive fermionic and scalar matter in arbitrary gauge-group representations. Such a generalization was critical for obtaining both loop amplitudes in pure Einstein gravity and realistic gravitational matter from the double copy. In this paper we elaborate on the d...
On the kinematic algebra for BCJ numerators beyond the MHV sector
HU-EP-19/17
QMUL-PH-19-14
by: Chen, Gang (Zhejiang Normal U.) et al.
The duality between color and kinematics present in scattering amplitudes of Yang-Mills theory strongly suggests the existence of a hidden kinematic Lie algebra that controls the gauge theory. While associated BCJ numerators are known on closed forms to any multiplicity at tree level, the kinematic algebra has only been partially explored for the simplest of four-dimensional amplit...
The Full-Color Two-Loop Four-Gluon Amplitude in $\mathcal{N} = 2$ Super-QCD
CP3-19-15
Phys.Rev.Lett. 123 (2019) 241601
by: Duhr, Claude (CERN) et al.
We present the fully integrated form of the two-loop four-gluon amplitude in N=2 supersymmetric quantum chromodynamics with gauge group SU(Nc) and with Nf massless supersymmetric quarks (hypermultiplets) in the fundamental representation. Our result maintains full dependence on Nc and Nf, and relies on the existence of a compact integrand representation that exhibits the duality b...
Non-Abelian gauged supergravities as double copies
by: Chiodaroli, Marco (Uppsala U.) et al.
Scattering amplitudes have the potential to provide new insights to the study of supergravity theories with gauged R-symmetry and Minkowski vacua. Such gaugings break supersymmetry spontaneously, either partly or completely. In this paper, we develop a framework for double-copy constructions of Abelian and non-Abelian gaugings of N=8 supergravity with these properties. They are generally obtained as the double copy of...
Registration number: 202100-2932 VAT number: SE202100293201 PIC: 999985029 Registrar About this website Privacy policy Editor: Anastasios Gorantis
|
CommonCrawl
|
JEE Main 2014 (Offline)
On heating water, bubbles being formed at the bottom of the vessel detach and rise. Take the bubbles to be spheres of radius $$R$$ and making a circular contact of radius $$r$$ with the bottom $$R$$ and making a circular contact of radius $$r$$ with the bottom of the vessel. If $$r < < R$$ and the surface tension of water is $$T,$$ value of $$r$$ just before bubbles detach is: (density of water is $${\rho _w}$$)
$${R^2}\sqrt {{{{\rho _w}g} \over {3T}}} $$
$${R^2}\sqrt {{{{\rho _w}g} \over {T}}} $$
$${R^2}\sqrt {{{{2\rho _w}g} \over {3T}}} $$
When the bubble gets detached, Buoyant force $$=$$ force due to surface tension
Force due to excess pressure $$=$$ upthrust
Access pressure in air bubble $$ = {{2T} \over R}$$
$${{2T} \over R}\left( {\pi {r^2}} \right) = {{4\pi {R^3}} \over {3T}}{\rho _w}g$$
$$ \Rightarrow {r^2} = {{2{R^4}{\rho _w}g} \over {3T}}$$
$$ \Rightarrow r = {R^2}\sqrt {{{2{\rho _w}g} \over {3T}}} $$
An open glass tube is immersed in mercury in such a way that a length of $$8$$ $$cm$$ extends above the mercury level. The open end of the tube is then closed and scaled and the tube is raised vertically up by additional $$46$$ $$cm$$. What will be length of the air column above mercury in the tube now? (Atmospheric pressure $$=76$$ $$cm$$ of $$Hg$$)
$$16$$ $$cm$$
$$6$$ $$cm$$
Length of the air column above mercury in the tube is,
$$P + x = {P_0}$$
$$ \Rightarrow P = \left( {76 - x} \right)$$
$$ \Rightarrow 8 \times A \times 76 = \left( {76 - x} \right) \times A \times \left( {54 - x} \right)$$
$$\therefore$$ $$x=38$$
Thus, length of air column $$=54-38=16cm.$$
Assume that a drop of liquid evaporates by decreases in its surface energy, so that its temperature remains unchanged. What should be the minimum radius of the drop for this to be possible ? The surface tension is $$T,$$ density of liquid is $$\rho $$ and $$L$$ is its latent heat of vaporization.
$$\rho L/T$$
$$\sqrt {T/\rho L} $$
$$T/\rho L$$
$$2T/\rho L$$
When radius is decrease by $$\Delta R,$$
$$4\pi {R^2}\Delta R\rho L = 4\pi T\left[ {{R^2} - {{\left( {R - \Delta R} \right)}^2}} \right]$$
$$ \Rightarrow \rho {R^2}\Delta RL = T\left[ {{R^2} - {R^2} + 2R\Delta R - \Delta {R^2}} \right]$$
$$ \Rightarrow \rho {R^2}\Delta RL = T2R\Delta R\,\,$$ [ $$\Delta R$$ is very small ]
$$ \Rightarrow R = {{2T} \over {\rho L}}$$
A uniform cylinder of length $$L$$ and mass $$M$$ having cross-sectional area $$A$$ is suspended, with its length vertical, from a fixed point by a mass-less spring such that it is half submerged in a liquid of density $$\sigma $$ at equilibrium position. The extension $${x_0}$$ of the spring when it is in equilibrium is:
$${{Mg} \over k}$$
$${{Mg} \over k}\left( {1 - {{LA\sigma } \over M}} \right)$$
$${{Mg} \over k}\left( {1 - {{LA\sigma } \over {2M}}} \right)$$
$${{Mg} \over k}\left( {1 + {{LA\sigma } \over M}} \right)$$
From figure, $$k{x_0} + {F_B} = Mg$$
$$k{x_0} + \sigma {L \over 2}Ag = Mg$$
[ as mass $$=$$ density $$ \times $$ volume ]
$$ \Rightarrow k{x_0} = Mg - \sigma {L \over 2}Ag$$
$$ \Rightarrow {x_0} = {{Mg - {{\sigma LAg} \over 2}} \over k}$$
$$ = {{Mg} \over k}\left( {1 - {{LA\sigma } \over {2M}}} \right)$$
Questions Asked from Properties of Matter
|
CommonCrawl
|
February 2018 , Volume 146, Issue 3–4, pp 547–560 | Cite as
Projected changes in tropical cyclone activity under future warming scenarios using a high-resolution climate model
Julio T. Bacmeister
Kevin A. Reed
Cecile Hannay
Peter Lawrence
John E. Truesdale
Nan Rosenbloom
Michael Levy
This study examines how characteristics of tropical cyclones (TCs) that are explicitly resolved in a global atmospheric model with horizontal resolution of approximately 28 km are projected to change in a warmer climate using bias-corrected sea-surface temperatures (SSTs). The impact of mitigating from RCP8.5 to RCP4.5 is explicitly considered and is compared with uncertainties arising from SST projections. We find a reduction in overall global TC activity as climate warms. This reduction is somewhat less pronounced under RCP4.5 than under RCP8.5. By contrast, the frequency of very intense TCs is projected to increase dramatically in a warmer climate, with most of the increase concentrated in the NW Pacific basin. Extremes of storm related precipitation are also projected to become more common. Reduction in the frequency of extreme precipitation events is possible through mitigation from RCP8.5 to RCP4.5. In general more detailed basin-scale projections of future TC activity are subject to large uncertainties due to uncertainties in future SSTs. In most cases these uncertainties are larger than the effects of mitigating from RCP8.5 to RCP4.5.
Tropical cyclones Climate change High-resolution
This article is part of a Special Issue on "Benefits of Reduced Anthropogenic Climate ChangE (BRACE)" edited by Brian O'Neill and Andrew Gettelman.
The online version of this article (doi: 10.1007/s10584-016-1750-x) contains supplementary material, which is available to authorized users.
Tropical cyclones (TCs) have been explicitly simulated in a number of global climate models with resolutions finer than 50 km. Despite some sensitivities to model physics and dynamical core design, climate models running at high resolution are capable of capturing many aspects of the observed TC climatology with reasonable fidelity, including geographical, seasonal and even inter-annual variations (e.g.; Zhao et al. 2009; Manganello et al. 2012; Zhao et al. 2012; Murakami et al. 2012a, 2012b, Knutson et al. 2015). The Community Atmosphere Model (CAM) has also produced reasonable simulations of TC climatology (Bacmeister et al. 2014; Reed et al. 2015) when run at horizontal resolutions of around 28 km.
Determining possible changes to the global TC climatology in a warming climate is a critical but vexing problem for models (e.g.; Murakami et al. 2012a, 2012b; Walsh et al. 2015). Indices of TC activity derived from large scale atmospheric quantities in low-resolution simulations, e.g., Genesis Potential Index (GPI; Emanuel and Nolan 2004) do not always agree with explicit simulations of TC activity in high-resolution simulations using the same climate model (e.g.,Wehner et al. 2015). Explicit counts of storms in low-resolution simulation with a given model are also poor predictors of TC counts in the same model at higher resolution (Wehner et al. 2015). Thus, explicit projections of TC activity from high resolution models may provide new information not available from more economical approaches.
In addition to the obstacle posed by the resolution sensitivity of climate change projections in TC activity, a second major obstacle is posed by the scenario design itself. A straightforward approach to assessing future TC activity would be to simply conduct high-resolution experiments using the fully-coupled Community Earth System Model (CESM) into the future using Representative Concentration Pathways (RCP) scenario forcing. However, CESM at high horizontal resolution retains substantial sea surface temperature (SST) biases in the tropical Atlantic and Pacific oceans (Small et al. 2014). These biases are similar to those common in lower-resolution CMIP5 models (e.g; Wang et al. 2014) and are seen to have a noticeable negative impact on the climatology of TCs at high resolution, particularly in the N Atlantic and NE Pacific basins (Small et al. 2014). Idealized prescribed scenarios such as uniform 2 K SST warming with a forced doubling of CO2 (e.g., Walsh et al. 2015; Wehner et al. 2015) avoid these biases, but do not capture geographic variation in warming.
In this study we have chosen to use a hybrid approach to force high-resolution atmospheric simulations. We force the atmospheric model using SSTs from fully-coupled RCP4.5 and RCP8.5 CESM1 simulations at a lower standard resolution, but apply a bias correction to the SSTs based on present day biases in CESM (e.g.; Murakami et al. 2012a). We perform a total of 11 simulations (Table 1, Section 2). A total of 4 future SSTs are used, of which two are derived from the 30-member Large Ensemble (LE, Kay et al. 2015). This study complements that of Done et al. (2015) which uses large-scale model fields from lower-resolution ensembles to make projections of hurricane damage in the future. Simulated storm tracks from the simulations described here, are also used by Gettelman et al. (2016) as inputs for a statistical model of storm damage used in the re-insurance industry.
Experiment names and configurations used in this study. Asterisks following "BRACE set" experiments are intended as a reminder that 1o air/sea coupling was used in these runs
Period (scenario)
Coupler grid
SST Forcing
"BRACE set"
PD-1*
Hurrell
RCP4*
2070–2090 (RCP4.5)
RCP4.5
RCP8-SST1–1*
RCP8.5(SST1)
"Variability set"
PD-{2,3,4}
RCP8-SST1-{2,3,4}
RCP8-SST2
2070–2100(RCP8.5)
This paper is part of a larger project on the Benefits of Reducing Anthropogenic Climate changE (BRACE; O'Neill and Gettelman in preparation). It seeks to understand the impacts of mitigation from RCP8.5 to RCP4.5 – the "BRACE signal" – on TC activity, and to compare it with differences related to uncertainties in SST projections derived from coupled climate models.
2 Model, experiments, and analysis methods
2.1 Model configuration and experimental setup
This study uses the Community Atmosphere Model (CAM) - the atmospheric component of CESM with version 5 atmospheric physics as described by Neale et al. (2012). We utilize the atmospheric spectral element dynamical core (SE; Dennis et al. 2011) at a horizontal resolution of around 28 km. CAM5 is able to capture TCs at these high horizontal resolutions (e.g., Bacmeister et al. 2014). Reed et al. (2015) discuss the sensitivity of tropical cyclone activity in decadal simulations with CAM5 to the dynamical core used. Our study uses prescribed SSTs in both present day and future runs. Present day simulations use observed monthly-mean SSTs (Hurrell et al. 2008). Our technique for generating bias corrected future SSTs from fully-coupled simulations is described in the Supplementary Material and in Section 2.2. All forcing other than SST is historical through 2005 followed by RCP4.5 or RCP8.5 forcing (VanVuuren et al. 2011) for 2006 onwards.
The high-resolution runs discussed here represent the "bleeding edge" of what is feasible for extended (multi-decadal) runs with CAM. They are demanding in terms of both human and computational resources. Simulation costs for the 28 km configuration of CAM-SE are approximately 100 times that of the LE configuration used by several of the studies in this issue. In addition, performing 28-km simulations requires significant investment of human resources to port and optimize model codes on numerous high-performance computing (HPC) platforms, and to manage the large volumes of high-resolution output generated. As a consequence the number and length of high-resolution simulations we can perform and analyze is small compared to the lower resolution CESM runs discussed elsewhere in this issue.
Some aspects of the CESM high-resolution configuration are still evolving. Initially, we used a coarse 1o ocean model grid to couple between ocean and atmosphere. This was done in part to minimize the number of separate boundary forcing data sets in CESM. However, we discovered that this coupling had detectable impacts on the climatology of tropical cyclone winds (Supplementary Material). Better physical consistency is obtained by coupling on the atmosphere's higher resolution 28 km grid using SST data sets that have been pre-interpolated to this resolution. The discussion in the Supplementary Material shows that TC statistics for both configurations can be largely harmonized using a rescaling procedure based on pressure-wind relationships although some statistically significant differences remain.
We performed an initial set of 3 experiments (present day, RCP4.5 and RCP8.5) with coupling on the 1o degree grid. These were used to characterize the basic model response to a warming climate and to explore differences between RCP4.5 and RCP8.5 scenarios. We then performed two ensembles (3 members) in the present and in the future under RCP8.5, to characterize internal atmospheric variability. Ensemble members were generated by statistically perturbing initial temperatures at levels below 0.01 K, and used coupling on the 28 km atmospheric grid. Finally, in order to probe uncertainties introduced by future SST variability, two more runs were done using two different future SST data sets constructed from LE members. These additional SSTs were chosen based on a rough analysis of features thought to control N Atlantic hurricane activity.
Table 1 summarizes the runs examined in this study. Asterisks in the experiment names are to emphasize that the 1o coupler grid was used in these simulations.
2.2 Prescribed, bias-corrected SSTs
Fully coupled CESM1 retains substantial SST biases (Fig. S1) in the tropical Atlantic and Pacific oceans, which have a noticeable impact on the climatology of tropical cyclones (Small et al. 2014). Although coupled simulations would be preferable in many ways – more frequent air/sea coupling, realistic representation of TC cold wakes and consequent reduction of TC intensities – we decided that the large errors in TC climatology that result from coupled SST biases are unacceptable in a study that seeks to understand impact of climate change on TC statistics. In this study we utilize simulations with prescribed SSTs. Future SSTs are obtained from fully-coupled CESM1 simulations at standard 1o atmospheric and 1o ocean resolution. However, before applying these SSTs as forcing for our future simulations they are corrected by subtracting present day CESM1 biases with respect to observations. Our procedure for generating future SSTs is described in the Supplementary Material. A key assumption in our approach is that CESM biases will not change significantly with time.
Figure 1 illustrates the 4 bias corrected SSTs used in our simulations. SSTs for 2070–2090 in the single RCP4.5 scenario run (Fig. 1a) show 1 to 3 K warming over most of the world ocean. The RCP8.5 SST1 (Fig. 1b) shows warming of 2 to 3 K over most tropical ocean areas with even larger values (>3.5 K) in the eastern Pacific. The RCP4.5 SST is typically 0.5-1 K cooler over most of the tropics compared to RCP8.5 SST1. Both SSTs show a distinctive El Niño-like pattern of warming with the strongest tropical warming in the eastern equatorial Pacific. The off-equatorial tropical N. Atlantic in both SST sets is typically 0.5 to 1 K cooler than the tropical eastern Pacific. This relatively cool region is roughly coincident with the main development region (MDR) for Atlantic hurricanes and would therefore be expected to affect the formation of N. Atlantic storms (e.g. Zhao and Held 2012).
a Annual mean (2070–2090) surface temperature difference from the present day (1985–2005) in RCP4.5 scenario. b Same as a except for RCP8.5 SST1. c Mean 2070–2090 difference in SST's between RCP8.5 SST1 and SST2 (warmer Atlantic MDR). d Same as c except for RCP8.5 SST3 (warmer Atlantic colder tropics)
Figures 1c, d show the mean differences between the additional LE-based datasets and SST1 during the northern hemisphere warm season (July–November). The differences are small compared to the overall warming in Fig. 1b. However even small differences (0.1–0.5 K) in SSTs within main basin development regions relative to the tropical mean may affect cyclogenesis (Zhao and Held 2012). SST2 (Fig. 1c) has a broad tongue of slightly warmer SSTs in the eastern tropical Atlantic and a somewhat cooler central equatorial Pacific than SST1. SST3 (Fig. 1d) is sharply colder (0.2–0.4 K) over much of the tropics than SST1. In the tropical Atlantic SST3 and SST2 have similar east-west dipoles of cooling/warming with respect to SST1, but SST3 is slightly cooler overall in the tropical Atlantic than SST2. SST3 also possesses the least El Niño-like pattern of warming of the 3 SSTs used.
2.3 Cyclone tracker
The TC detection algorithm and tracker utilized for this analysis is that used and described in Zhao et al. (2009) with 3-hly model output. Following the same approach as in Bacmeister et al. (2014), the surface winds (commonly taken to be at a height of 10 m) used for the TC tracker are estimated using the lowermost model level winds at around 60 m and the power-wind law. The basic output of the tracker includes storm center longitude λ c ; n (t), latitude ϕ c ; n (t) as well as maximum wind V max ; n (t) and minimum central pressure p min ; n (t). The subscript n in these expressions designates the n-th storm identified by the tracker. The time variable t is discretized in 3-h intervals.
2.4 Track density
The storm location output from the tracker is used to calculate tropical cyclone track densities on a regular 4ox4o lat-lon grid. Occurrences of storms exceeding a given threshold within each 4ox4o gridbox are counted to give a "density" in units of hrs yr.−1 (4o)−2.
2.5 Storm precipitation
In addition to the standard tracker fields described above we also examine precipitation and storm size statistics for simulated TCs. We calculate average precipitation falling within 500 km of the storm center diagnosed by the TC tracking algorithm (e.g.; Jiang and Zipser 2010). This quantity \( {\mathcal{P}}_{500;n}(t) \) is calculated using instantaneous 3-hrly precipitation fields from the model and is analogous to other track quantities like V max ; n (t) and p min ; n (t). Storm size is discussed in the Supplementary Material.
2.6 Bootstrap analysis of significance
To test the significance of our results, we perform a simple bootstrap analysis (e.g.; Efron and Tibshirani 1998) using the period 1985–2005 for the present day runs and 2070–2090 for the future runs. We generate 2000 synthetic 20-year TC track files from each of our runs. Sampling is with replacement and individual storms are assigned to years according to their genesis time. PDFs of a statistic over the 2000 member synthetic ensembles are compared to evaluate significance.
3.1 BRACE signal
Figure 2 shows global maps of 20-year annual mean tropical cyclone track densities obtained from the BRACE set PD-1*, RCP4*, and RCP8-SST1–1* along with present day biases. The top row of Fig. 2 shows 20-year mean track densities for all storms (TS-Cat5). Detailed discussions of TC distributions in CAM5 versus observations can be found in Bacmeister et al. (2014), Wehner Small et al. (2014) and Reed et al. (2015). Biases with respect to IBTrACS (Knapp et al. 2010) are shown in the bottom row of Fig. 2. Here we will focus on projected changes to TC distributions in the future. At first glance there is little change in overall TC activity projected for 2070–2090, but closer inspection reveals that TC activity in the N. Atlantic decreases significantly under both RCP4* and RCP8-SST1–1*. In particular, in RCP8-SST-1* the largest TC densities decrease to 8–10 h yr.−1 4o-2 in the N Atlantic from values near 18 h yr.−1 4o-2 in PD-1*. The area of high track densities in the N. Atlantic is also dramatically smaller in RCP8.5 2070–2090 than in the present day. Under RCP4.5 N. Atlantic TC activity is reduced as well, but by a smaller amount. Elsewhere, small reductions in TC activity are also evident, e.g. central Pacific. Murakami et al. 2014 have analyzed the relationship between present day model biases and projections of future TC activity. This is discussed further in the Supplementary Material (S3).
Track densities for the BRACE simulation set PD-1*, RCP4*, and RCP8-SST1–1*. The top row shows the track densities for all tropical cyclones (TS-Cat 5), while the middle row shows the track densities for all extreme storms (Cat 4 and 5). Units are average hours per year in which a storm is found within a 4o x 4o gridbox. Bottom panels show present day biases in PD-1* with respect to IBTrACS best-track estimates
The middle row of Fig. 2 shows track densities for extreme storms (Cat 4 and 5). There are pronounced increases in extreme storm activity in both RCP4* and RCP8-SST1–1*. These increases are most pronounced in the NW Pacific, but are also apparent in the Southern Indian Ocean near Madagascar and in the South Pacific east of Australia. Little or no increase in extreme storm frequency is projected for the N Atlantic or NE Pacific. Increases in extreme TC frequency under warming scenarios have been found in other studies (e.g; Murakami et al. 2012a, 2012b; Knutson et al. 2015). We note that our simulations underpredict present day Cat 4 and 5 storms in all basins (Fig. 2, bottom row) This may have implications for future projections.
Bootstrap results for global and basin average track densities are given in Table 2. Bootstrap ensemble PDFs for the BRACE set are shown in the top row of Fig. 3. There is a significant BRACE signal evident in global mean and N. Atlantic TS-Cat5 track densities. The global mean track density of TS-Cat5 shows significant decreases in the future; from a present day value of around 5.8 h yr.−1 4o-2 in PD-1* to a value of around 4.7 h yr.−1 4o-2 for 2070–2090 in RCP8-SST1–1*. Experiment RCP4* has an intermediate value of around 5.4 h yr.−1 4o-2. For Cat4–5 storms in the NW Pacific track densities for RCP4* and RCP8-SST1–1* overlap almost completely, but are clearly distinct from the PD-1* result. Both global mean and NW Pacific densities of Cat4–5 storms double in both future runs with respect to their value in PD-1*.
Results from bootstrap analysis of tropical cyclone track densities (see text). Ensemble means from 2000 bootstrapped 20-year samples are shown along with 1st %-ile and 99th %-ile values in italics. Results are for 1985–2005 in the present day scenario and for 2070–2090 under RCP4.5 and RCP8.5. The RCP8.5 results include 4 experiments with SST1 as well as two additional experiments with SST2 and SST3. Results are shown for areal averages over the global tropics (42S-42N), N. Atlantic (80 W–20 W,10 N–38 N), NW. Pacific (120E-180,10 N–38 N), NE. Pacific (120 W–96 W,10 N–26 N), S. Pacific (140E-180, 30S–10S) and S. Indian Ocean (28E-68E, 34S–10S). Results from runs using the 1o are highlighted. Blank cells indicate that no storms were detected
Global All
Global Cat4–5
N Atl
NW Pac Cat4–5
NE Pac
S Pac Cat4–5
N Atl Cat4–5
Present day runs 1985–2005
0.03, 0.05
0, 0.38
PD-2
5.7, 6 .7
Future runs 2070–2090
0.1, 0.63
RCP8-SST1–2
First column: Frequency distribution of 20-year mean global (42S-42N) averaged track densities for TS-Cat5. Horizontal axis is units of hrs yr.−1 4o-2. Distributions are accumulated over 2000 randomly sampled 20-year subsets of data for each experiment. The top panel shows results for PD-1* (black), RCP4* (green) and RCP8-SST1–1* (red). Middle panel shows results for PD-2,3, and 4. Bottom panel shows results for RCP8-SST1–2,3,4 (red) as well as RCP8-SST2 (light blue) and RCP8-SST3 (magenta). Second column: Same as first expect for N Atlantic means (80 W–20 W,10 N–38 N). Third column: Same as first except for for NW Pacific (120E-180, 10 N–38 N) Cat4–5 averaged track densities
As suggested by Fig. 2, the increased frequency of extreme storms is not uniform over all basins. It is dominated by strong increases in the NW Pacific basin with secondary centers of action in the western Southern Indian Ocean and the S Pacific. This is confirmed in Table 2. Extreme storm activity in the NW Pacific in the future is projected to more than double, and dramatic increases occur in the S Pacific and S Indian oceans as well. Importantly for the purposes of the BRACE study, the differences in extreme storm activity between RCP4* and RCP8-SST1–1* are generally small except in the S Pacific storm basin. It is also noteworthy that CAM5 projects no increased Cat4–5 frequency in the N. Atlantic.
3.2 Uncertainty and variability
Results from our variability set are shown in Table 2 as well as in the second and third rows of Fig. 3. First, we note that there remain significant differences between calculations using the 1o degree coupler grid and those using the 28-km coupler grid despite the rescaling described in the Supplementary Material. Runs with the 1o coupler grid exhibit smaller mean track densities for TS-Cat5 than those using the 28 km coupler grid for present day and future conditions both globally and in the N Atlantic basin and NW Pacific means for Cat4–5 are larger.
Despite differences in the means, the spread in the PDFs of mean track densities is similar using both coupler grids. This justifies the use of 28-km coupler runs to evaluate the significance of the BRACE signal obtained from PD-1*, RCP4* and RCP8-SST1–1*. The PDFs for PD-2,3, and 4 as well as RCP8-SST-2,3,4 overlap in the global means and for most major basins. This is evident in Fig. 3 as well as in the tabulated results in Table 2. This gives us confidence that 20-year means of track densities with given SST fields are sufficiently stable for use in detecting climate signals under both present day and future conditions.
From these results we argue that the difference between PD-1*, RCP4*, and RCP8-SST1–1* seen in global mean and N Atlantic mean track densities for TS-Cat5, i.e. the BRACE signal, is significant. The global BRACE signal is brought about by strong signals in NE Pacific (not shown) and N Atlantic. Other basins show projected decreases TS-Cat5 densities, but not significant differences between RCP4* and RCP8*. The same is true for the strong future increase in Cat4–5 densities projected in the NW Pacific in both RCP4* and RCP8-SST1–1*. It is worth noting here that Cat4–5 activity in CAM5 is overwhelmingly dominated by activity in the NW Pacific, and while 20-year means seem sufficient to characterize this activity it is not clear this is true of other basins. Table 2 shows large differences in 20-year Cat4–5 mean track densities among members of the variability set for both the S. Indian Ocean and S. Pacific basins.
Large uncertainties exist in future SSTs even under a single forcing scenario such as RCP8.5 (Kay et al. 2015). These must be considered when projecting future TC activity. As described in Section 2 we performed 2 additional simulations with SSTs derived from the LE selected because they possessed warmer tropical Atlantic SSTs (Fig. 1b,c). The light blue and magenta curves in the bottom row of Fig. 3 show bootstrap results for RCP8-SST2 and RCP8-SST3. The global mean track densities for TS-Cat5 appear relatively insensitive to SST choice (Fig. 3, lower left). This increases our confidence in the signal obtained from the BRACE set, which suggests that mitigating from RCP8.5 to RCP4.5 would result in greater overall TC activity globally.
At the basin scale and for Cat4–5 storms, SST differences are more significant. Track densities for TS-Cat5 in the N Atlantic are substantially higher in RCP8-SST2 and RCP8-SST3, with SST3 producing around 50 % more N. Atlantic activity than SST1,. In the NW Pacific both SST2 and SST3 produce a noticeable and significant decrease in the density of Cat 4 and 5 storms relative to SST1. This is reflected in global mean densities for Cat4–5, although in RCP8-SST2 the substantial decrease in the NW Pacific is offset by somewhat enhanced high track densities in the S. Indian Ocean, S. Pacific and NE Pacific. This offset does not appear in RCP8-SST3 perhaps related to the generally cooler tropics in SST3 (Fig. 1).
The use of different couplers for the BRACE set and for the variability set (Table 1) adds an unfortunate complication to the analysis, but it seems clear that SST uncertainties can have an impact comparable to mitigation from RCP8.5 to RCP4.5. The TS-Cat5 track densities in the N Atlantic in RCP8-SST3 for 2070–2090 are halfway between those for the present day and those for 2070–2090 using SST1. This is similar to the position of RCP4* results with respect to PD-1* and RCP8-SST1–1*. Even though different couplers are used in these comparisons we would argue that the size of the SST effect in the N Atlantic is similar to the BRACE signal in our experiments. A more complete exploration of SST variability under both RCP4.5 and RCP8.5 could reveal a robust BRACE signal, but the small number of SST realizations explored here is not sufficient to do so.
The relationship of basin-scale TC activity to SSTs is not yet completely understood (Bell and Chelliah 2006; Zhao and Held 2012). N Atlantic activity is believed to respond to both Atlantic and Pacific SSTs, with cool conditions in the E Pacific, as during La Niña events, enhancing Atlantic hurricanes, in contrast to suppression during warm E. Pacific/El Niño conditions (Gray 1984; Pielke Jr and Landsea 1999). Warm SSTs in the main development region (MDR) of any basin, relative to the tropical mean, may enhance TC activity in that basin (Zhao and Held 2012). Camargo and Sobel (2005) contend that El Niño conditions enhance TC activity in the NW Pacific. Our results are roughly consistent with all of these proposals. The generally El Niño-like character of the RCP4.5 and RCP8.5 warming signals (Fig. 1a, b) could explain both the enhancements in intense NW Pacific storms and the overall suppression of Atlantic activity. As the El Niño-like character of the warming is reduced, or alternatively the relative warmth of the tropical Atlantic is increased, as in SST3, N Atlantic TC activity increases and NW Pacific activity decreases.
3.3 Tropical cyclone precipitation
Extreme precipitation is expected to become more frequent under global warming (e.g.; O'Gorman and Schneider 2009). Knutson et al. (2010) also discuss precipitation increases in simulated TCs under global warming scenarios. Figure 4 (left) presents annual frequency histograms of\( {\mathcal{P}}_{500;n}(t) \), the areally-averaged precipitation within 500 km of diagnosed storm centers (Section 2.5). These histograms are compiled over each 3-hourly sample in the lifetime of each storm diagnosed by the tracker. The histograms on the left were compiled using the raw 3-hourly precipitation fields in each simulation. Note that these histograms are in terms of absolute annual frequency so that increased frequency represents an increased annual frequency of intense TC rain. It is immediately clear that strong precipitation (>40 mm d−1) near TCs is projected to become more common in warmer climates. The frequency of TC precipitation over 80 mm d−1 is projected to increase by a factor of nearly 10 in 2070–2090 under RCP8.5 over its present day value, and by a factor of 3 to 5 over present day frequency under RCP4.5. These increases with warming are generally consistent with the work of Villarini et al. (2014) and Wehner et al. (2015) which found that idealized warming scenarios produced substantial increases in mean daily and maximum instantaneous TC precipitation rates.
a: Intensity distributions of mean 500 km storm-precipitation along storm tracks: black) 1985–2005; green) RCP4.5 2070–2090; and red) RCP8.5 2070–2090 b Same as a except for storm precipitation scaled by saturation specific humidity (see text)
Since saturation specific humidity q sat (T, p) is a rapidly increasing function of temperature for constant pressure we should expect a warmer climate to possess higher absolute specific humidities in the lower atmosphere, which will translate to higher overall rain rates if other aspects of meteorological forcing remain reasonably constant (Held and Soden 2006). To isolate the role of increased q sat in generating more frequent intense rainfall we scaled \( {\mathcal{P}}_{500;n} \) by the local saturation specific humidity before compiling frequency histograms. We used the relationship;
$$ {P^{*}}_{500;n}(t)=\frac{q_{sat}\left({T}_0,{p}_0\right)}{q_{sat}\left({T}_{s;n}(t),{p}_0\right)}{P}_{500;n}(t) $$
where T 0 is set to 300 K, T s ; n (t) is the average surface temperature within a 500 km radius following the n-th storm track, and p 0 is a reference pressure set to 1000 hPa. Assuming the moisture content of air entering TCs is controlled by low-level temperature near the storm, this scaling should remove the thermodynamic effect responsible for increased precipitation intensities.
Histograms of \( {{\mathcal{P}}^{*}}_{500} \)are shown in Figure 4 (right). We see that a large part, but not all, of the increased frequency of intense rain events has been removed by the q sat scaling. This suggests that most of the projected change in the intensity histograms of \( {\mathcal{P}}_{500} \) is a simple consequence of increased atmospheric humidity, but that some increase in the dynamical forcing of TC precipitation is also taking place. Wehner et al. (2015) also found increases in maximum TC precipitation in their idealized future SST simulations (i.e., uniformly increased SSTs of +2 K) that exceed what would be expected from increases in q sat alone. Storm size also shows a modest gain in the future (Fig. S5). Similar results are found when this analysis is performed for individual basins. The exception is the N Atlantic where decreases in projected TC frequency may compensate for warmer SSTs leading to lower absolute frequencies of TC precipitation for all intensities.
4 Discussion and conclusions
Frequency/track density
As in several other models (e.g. Knutson et al. 2010, etc.) CESM projects a modest (~20 %) decrease in global TS-Cat 5 frequency in warmer climates. According to our results, this potential "benefit" of a warmer climate could be reduced under RCP4.5 compared to the RCP8.5 scenarios (Figure 3, Table 2). Projections of TC activity for individual basins are difficult to make with confidence because of significant differences in projected SSTs at the basin scale, as well as remaining uncertainty about the factors that control TC activity in individual basins. Figure 3 suggests that a decrease in future TC activity in the N Atlantic is a robust feature of a warmer climate. However, the magnitude of the reduction depends on details of the future SSTs. In particular, our simulations suggest that uncertainties in future SSTs are as important in projecting future N Atlantic hurricane activity as the effects of mitigating from RCP8.5 to RCP4.5.
Other changes in future TC climatology projected by CESM under global warming are not so benign. CESM projects a significant increase (200–300 %) in the global frequency of very intense (Cat4–5) storms under RCP8.5. This increase is concentrated in several hotspots around the world, with by far the largest contribution coming in the NW Pacific basin. This feature appears in all of the future runs we have performed, and our experiments show no benefit from mitigating to RCP4.5 (Table 2, Fig. 3). We should emphasize that the projected increase in Cat4–5 storms in the NW Pacific is significantly affected by the choice of future SST (Fig. 3). Since only one RCP4.5 SST was used, it is possible we have missed a potential benefit of mitigation from RCP8.5 to RCP4.5.
Storm precipitation
The risk of intense TC precipitation (>50 mm d−1 average within 500 km of center) increases dramatically as the climate warms (Fig. 4). This increased risk is largely due to increased humidity in the atmosphere rather than intensified dynamical forcing and would be reduced by about a half by mitigating to RCP4.5 from RCP8.5. In the case of extreme TC precipitation the benefits of mitigation are clearly larger than the effects of SST uncertainties (Fig. S5).
Our experiments suggest that much of the increased impact from TCs in a warmer climate may be felt away from the United States and in quantities not commonly assessed in current impact models (e.g.; Gettelman et al. 2016) such as rainfall and storm size (Fig. S5). A complete picture of future societal and economic impacts of TCs will require extending impact models both geographically and in terms of physical variables considered.
The BRACE signal we were able to detect in this study is comparable to that arising from different future SSTs in many of the features we examined. A complete evaluation of the benefits of mitigation from RCP8.5 to RCP4.5 would require the use of a large number of SST projections under both scenarios. It is not clear whether projects such as the large ensemble (Kay et al. 2015) or even coupled model intercomparisons have spanned the range of possible future SSTs. Global mean track densities for TS-Cat5 appear to have a clear BRACE signal although it is not necessarily a beneficial one. Our projection for the global frequency of intense TC precipitation also shows a clear signal from mitigation to RCP4.5. However, for basin scale projections of TC frequency, or for projections of intense tropical cyclone activity, our study suggests that SST uncertainty is a key factor. This uncertainty is important not only in the context of the BRACE project, but also for any detailed projections of TC activity in a warming climate.
Computing resources for this work were provided by; The Argonne Leadership Computing Facility at Argonne National Laboratory (Office of Science of the US Department of Energy) through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, and the Climate Simulation Laboratory at NCAR's Computational and Information Systems Laboratory, sponsored by the National Science Foundation and other agencies. This work also utilized part of the "Using Petascale Computing Capabilities to Address Climate Change Uncertainties" PRAC allocation support by the National Science Foundation (NSF), and the Blue Waters sustained-petascale computing project supported by the NSF and the state of Illinois.
The authors would also like to acknowledge support from the Regional and Global Climate Modeling Program (RGCM) of the US Department of Energy, Office of Science (BER), Cooperative Agreement DE-FC02-97ER62402 and from NSF's EaSM program.
10584_2016_1750_MOESM1_ESM.pdf (7.3 mb)
ESM 1 (PDF 7455 kb)
Bacmeister JT et al. (2014) Exploratory high-resolution climate simulations using the community atmosphere model (CAM). J Clim 27(9):3073–3099CrossRefGoogle Scholar
Bell GD, Chelliah M (2006) Leading tropical modes associated with interannual and multidecadal fluctuations in North Atlantic hurricane activity. J Clim 19(4):590–612CrossRefGoogle Scholar
Camargo SJ, Sobel AH (2005) Western North Pacific tropical cyclone intensity and ENSO. J Clim 18(15):2996–3006CrossRefGoogle Scholar
Dennis J et al. (2011) CAM-SE: A scalable spectral element dynamical core for the Community Atmosphere Model. International Journal of High Performance Computing Applications 26:74–89CrossRefGoogle Scholar
Done JM, PaiMazumder D, Towler E et al (2015) Estimating Impacts of North Atlantic Tropical Cyclones Using an Index of Damage Potential. Clim Change. doi: 10.1007/s10584-015-1513-0
Efron B, Tibshirani RJ (1998) An Introduction to the bootstrap. Monographs on Statistics and Applied Probablity 57. Chapman and Hall/CRC Press, Boca Raton FL. 435 pp.Google Scholar
Emanuel KA, D Nolan, 2004: Tropical cyclone activity and the global climate system. Preprints, 26th Conf. on Hurricanes and Tropical Meteorology, Miami, FL, Amer. Meteor. Soc., 10 A.2. [Available online at https://ams.confex.com/ams/26HURR/techprogram/paper_75463.htm.]
Gettelman, A., D. Bresch, C. C. Chen, et al. (2016) Projections of future tropical cyclone damage with a high resolution global climate model. Climatic Change. (in review)Google Scholar
Gray WM (1984) Atlantic seasonal hurricane frequency. Part I: El Nino and 30 mb quasi-biennial oscillation influences. Mon Weather Rev 112(9):1649–1668CrossRefGoogle Scholar
Held IM, Soden BJ (2006) Robust responses of the hydrological cycle to global warming. J Clim 19(21):5686–5699CrossRefGoogle Scholar
Hurrell JW, Hack JJ, Shea D, Caron JM, Rosinski J (2008) A new sea surface temperature and sea ice boundary dataset for the Community Atmosphere Model. J Clim 21:5145–5153. doi: 10.1175/2008JCLI2292.1. CrossRefGoogle Scholar
Jiang H, Zipser E (2010) Contribution of tropical cyclones to the global precipitation from eight seasons of TRMM data: Regional, seasonal, and interannual variations. J Clim 23:1526–1543CrossRefGoogle Scholar
Kay J et al. (2015) The Community Earth System Model (CESM) Large Ensemble Project: A Community Resource for Studying Climate Change in the Presence of Internal Climate Variability. Bull. Amer. Meteorol. Soc. doi: 10.1175/BAMS-D-13-00255.1 Google Scholar
Knapp KM et al. (2010) The International Best Track Archive for Climate Stewardship (IBTrACS): Unifying tropical cyclone data. BAMS 91:363–376CrossRefGoogle Scholar
Knutson TR, McBride JL, Chan J, Emanuel K, Holland G, Landsea C, Held I, Kossin JP, Srivastava AK, Sugi M (2010) Tropical cyclones and climate change. Nat Geosci 3(3):157–163CrossRefGoogle Scholar
Knutson TR et al. (2015) Global Projections of Intense Tropical Cyclone Activity for the Late Twenty-First Century from Dynamical Downscaling of CMIP5/RCP4.5 Scenarios. J Clim 28(18). doi: 10.1175/JCLI-D-15-0129.1
Manganello JV et al. (2012) Tropical cyclone climatology in a 10-km global atmospheric GCM: toward weather-resolving climate modeling. J Clim 25(11):3867–3893CrossRefGoogle Scholar
Murakami H, Mizuta R, Shindo E (2012a) Future changes in tropical cyclone activity projected by multi-physics and multi-SST ensemble experiments using the 60-km-mesh MRI-AGCM. Clim Dyn 39(9–10):2569–2584CrossRefGoogle Scholar
Murakami H et al. (2012b) Future changes in tropical cyclone activity projected by the new high-resolution MRI-AGCM. J Clim 25:3237–3260CrossRefGoogle Scholar
Murakami H, Hsu P-C, Arakawa O, Li T (2014) Influence of model biases on projected future changes in tropical cyclone frequency of occurrence. J Clim 27:2159–2181CrossRefGoogle Scholar
Neale, R. J. et al. (2012) Description of the NCAR Community Atmosphere Model (CAM 5.0). NCAR Tech. Note NCARTN-4861STR, 274 ppGoogle Scholar
O'Gorman PA, Schneider T (2009) The physical basis for increases in precipitation extremes in simulations of 21st-century climate change. Proc Natl Acad Sci 106(35):14773–14777CrossRefGoogle Scholar
Pielke Jr RA, Landsea CN (1999) La nina, el nino and atlantic hurricane damages in the united states. Bull Am Meteorol Soc 80(10):2027–2033CrossRefGoogle Scholar
Reed KA, Bacmeister JT, Rosenbloom N et al. (2015) Impact of the dynamical core on the direct simulation of tropical cyclones in a high-resolution global model. Geophys Res Lett 42(9):3603–3608Google Scholar
Small RJ et al. (2014) A new synoptic scale resolving global climate simulation using the Community Earth System Model. Journal of Advances in Modeling Earth Systems 6(4):1065–1094CrossRefGoogle Scholar
VanVuuren DP et al. (2011) Representative concentration pathways: an overview. Clim Chang 109:1–2 5–31CrossRefGoogle Scholar
Villarini G et al. (2014) Sensitivity of tropical cyclone rainfall to idealized global-scale forcings. J Clim 20(12):2307–2314Google Scholar
Walsh KJ et al. (2015) Hurricanes and climate: the US CLIVAR working group on hurricanes. Bull Am Meteorol Soc 96:997–1017. doi: 10.1175/BAMS-D-13-00242.1 CrossRefGoogle Scholar
Wang C, Zhang L, Lee SK, Wu L, Mechoso CR (2014) A global perspective on CMIP5 climate model biases. Nat Clim Chang 4(3):201–205CrossRefGoogle Scholar
Wehner M et al. (2015) Resolution dependence of future tropical cyclone projections of CAM5. 1 in the US CLIVAR Hurricane Working Group idealized configurations. J Clim 28(10):3905–3925CrossRefGoogle Scholar
Zhao M, Held IM (2012) TC-permitting GCM simulations of hurricane frequency response to sea surface temperature anomalies projected for the late-twenty-first century. J Clim 25(8):2995–3009CrossRefGoogle Scholar
Zhao M, Held IM, Lin SJ, Vecchi GA (2009) Simulations of global hurricane climatology, interannual variability, and response to global warming using a 50-km resolution GCM. J Clim 22(24):6653–6678CrossRefGoogle Scholar
Zhao M, Held IM, Lin S-J (2012) Some counter-intuitive dependencies of tropical cyclone frequency on parameters in a GCM. J Atmos Sci 69(7). doi: 10.1175/JAS-D-11-0238.
© Springer Science+Business Media Dordrecht 2016
1.Climate and Global Dynamics Division, National Center for Atmospheric ResearchBoulderUSA
2.School of Marine and Atmospheric SciencesStony Brook UniversityStony BrookUSA
Bacmeister, J.T., Reed, K.A., Hannay, C. et al. Climatic Change (2018) 146: 547. https://doi.org/10.1007/s10584-016-1750-x
Accepted 12 July 2016
DOI https://doi.org/10.1007/s10584-016-1750-x
Not logged in Not affiliated 35.168.111.191
|
CommonCrawl
|
On the control of stability of periodic orbits of completely integrable systems
JGM Home
On the discretization of nonholonomic dynamics in $\mathbb{R}^n$
March 2015, 7(1): 81-108. doi: 10.3934/jgm.2015.7.81
Higher-order variational calculus on Lie algebroids
Eduardo Martínez 1,
IUMA and Department of Applied Mathematics, University of Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza, Spain
Received August 2014 Revised January 2015 Published March 2015
The equations for the critical points of the action functional defined by a Lagrangian depending on higher-order derivatives of admissible curves on a Lie algebroid are found. The relation with Euler-Poincaré and Lagrange Poincaré type equations is studied. Reduction and reconstruction results for such systems are established.
Keywords: Lagrangian mechanics, Lie algebroids, higher order mechanics., Variational calculus.
Mathematics Subject Classification: 70H25, 70H30, 70H50, 37J15, 58K05, 70H03, 37K0.
Citation: Eduardo Martínez. Higher-order variational calculus on Lie algebroids. Journal of Geometric Mechanics, 2015, 7 (1) : 81-108. doi: 10.3934/jgm.2015.7.81
L. Abrunheiro and M. Camarinha, Optimal control of affine connection control systems from the point of view of Lie algebroids,, Int. J. Geom. Methods Mod. Phys., 11 (2014). doi: 10.1142/S0219887814500388. Google Scholar
L. Abrunheiro, M. Camarinha, J. F. Cariñena, J. Clemente-Gallardo, E. Martínez and P. Santos, Some applications of quasi-velocities in optimal control,, International Journal of Geometric Methods in Modern Physics, 8 (2011), 835. doi: 10.1142/S0219887811005427. Google Scholar
A. J. Bruce, K. Grabowska and J. Grabowski, Higher order mechanics on graded bundles,, preprint, (). Google Scholar
M. Camarinha, F. Silva-Leite and P. Crouch, Splines of class $C^k$ on non-Euclidean spaces,, IMA J. Math. Control Info., 12 (1995), 399. doi: 10.1093/imamci/12.4.399. Google Scholar
J. F. Cariñena, C. López and E. Martínez, Sections along a map applied to higher-order Lagrangian mechanics. Noether's theorem,, Acta Applicandae Mathematicae, 25 (1991), 127. Google Scholar
J. F. Cariñena and E. Martínez, Lie algebroid generalization of geometric mechanics,, in Lie Algebroids and related topics in differential geometry, (2001), 201. Google Scholar
J. F. Cariñena, J. M. Nunes da Costa and P. Santos, Quasi-coordinates from the point of view of Lie algebroid structures,, Journal of Physics A: Mathematical and Theoretical, 40 (2007), 10031. doi: 10.1088/1751-8113/40/33/008. Google Scholar
H. Cendra, J. E. Marsden and T. S. Ratiu, Lagrangian reduction by stages,, Mem. Amer. Math. Soc., 152 (2001). doi: 10.1090/memo/0722. Google Scholar
L. Colombo, Lagrange-Poincaré reduction for optimal control of underactuated mechanical systems,, preprint, (). Google Scholar
L. Colombo and D. Martín de Diego, A variational and geometric approach for the second-order Euler-Poincaré equations,, Notes of the talk delivered at XIII Encuentro de Invierno, (2011). Google Scholar
L. Colombo and D. Martín de Diego, On the geometry of higher-order variational problems on Lie groups,, preprint, (). Google Scholar
L. Colombo, D. Martín de Diego and M. Zuccalli, Optimal control of underactuated mechanical systems: A geometric approach,, J. Math. Phys., 51 (2010). doi: 10.1063/1.3456158. Google Scholar
J. Cortés, M. de León, J. C. Marrero and E. Martínez, Nonholonomic Lagrangian systems on Lie algebroids,, Discrete and Continuous Dynamical Systems, 24 (2009), 213. doi: 10.3934/dcds.2009.24.213. Google Scholar
M. Crainic and R. L. Fernandes, Integrability of Lie brackets,, Ann. of Math. (2), 157 (2003), 575. doi: 10.4007/annals.2003.157.575. Google Scholar
M. Crampin, W. Sarlet and F. Cantrijn, Higher order differential equations and higher order Lagrangian mechanics,, Math. Proc. Camb. Phil. Soc., 99 (1986), 565. doi: 10.1017/S0305004100064501. Google Scholar
M. de León, J. C. Marrero and E. Martínez, Lagrangian submanifolds and dynamics on Lie algebroids,, J. Phys. A: Math. Gen., 38 (2005). doi: 10.1088/0305-4470/38/24/R01. Google Scholar
M. de León and P. Rodrigues, Generalized Classical Mechanics and Field Theory,, North-Holland Math. Stu., (1985). Google Scholar
F. Gay-Balmaz, D. D. Holm and T. S. Ratiu, Higher order Lagrange-Poincaré and Hamilton-Poincaré reductions,, J. Braz. Math. Soc., 42 (2011), 579. doi: 10.1007/s00574-011-0030-7. Google Scholar
F. Gay-Balmaz, D. D. Holm, D. M. Meier, T. S. Ratiu and F. X. Vialard, Invariant higher-order variational problems,, Comm. Math. Phys., 309 (2012), 413. doi: 10.1007/s00220-011-1313-y. Google Scholar
F. Gay-Balmaz and T. S. Ratiu, Clebsch optimal control formulation in mechanics,, J. Geometric Mechanics, 3 (2011), 41. doi: 10.3934/jgm.2011.3.41. Google Scholar
K. Grabowska and J. Grabowski, Variational calculus with constraints on general algebroids,, J. Phys. A: Math. Theor., 41 (2008). doi: 10.1088/1751-8113/41/17/175204. Google Scholar
K. Grabowska, J. Grabowski and P. Urbański, Geometrical mechanics on algebroids,, Int. J. Geom. Meth. Mod. Phys., 3 (2006), 559. doi: 10.1142/S0219887806001259. Google Scholar
K. Grabowska, J. Grabowski and P. Urbański, Lie brackets on affine bundles,, Ann. Global Anal. Geom., 24 (2003), 101. doi: 10.1023/A:1024457728027. Google Scholar
J. Grabowski and M. Jóźwikowski, Pontryagin maximum principle on almost Lie algebroids,, SIAM J. Control Optim., 49 (2011), 1306. doi: 10.1137/090760246. Google Scholar
M. Jóźwikowski and M. Rotkiewicz, Prototypes of higher algebroids with applications to variational calculus,, preprint, (). Google Scholar
J. Klein, Espaces variationnels et mécanique,, Ann. Inst. Fourier, 12 (1962), 1. doi: 10.5802/aif.120. Google Scholar
K. J. Kyriakopoulos and G. N. Saridis, Minimum jerk path generation,, in Proceedings of the 1988 IEEE International Conference on Robotics and Automation, (1988), 364. doi: 10.1109/ROBOT.1988.12075. Google Scholar
L. Machado, F. Silva-Leite and K. Krakowski, Higher-order smoothing splines versus least squares problems on Riemannian manifolds,, J. Dyn. and Control Syst., 16 (2010), 121. doi: 10.1007/s10883-010-9080-1. Google Scholar
K. C. H. Mackenzie, General Theory of Lie Groupoids and Lie Algebroids,, Cambridge University Press, (2005). doi: 10.1017/CBO9781107325883. Google Scholar
E. Martínez, Lagrangian Mechanics on Lie algebroids,, Acta Appl. Math., 67 (2001), 295. doi: 10.1023/A:1011965919259. Google Scholar
E. Martínez, Geometric formulation of Mechanics on Lie algebroids,, in Proceedings of the VIII Fall Workshop on Geometry and Physics (Medina del Campo, (1999), 209. Google Scholar
E. Martínez, Reduction in optimal control theory,, Rep. Math. Phys., 53 (2004), 79. doi: 10.1016/S0034-4877(04)90005-5. Google Scholar
E. Martínez, Lie algebroids in classical mechanics and optimal control,, SIGMA, 3 (2007). doi: 10.3842/SIGMA.2007.050. Google Scholar
E. Martínez, Classical field theory on Lie algebroids: Variational aspects,, J. Phys. A: Mat. Gen., 38 (2005), 7145. doi: 10.1088/0305-4470/38/32/005. Google Scholar
E. Martínez, The momentum equation,, in Groups, (2006), 187. Google Scholar
E. Martínez, T. Mestdag and W. Sarlet, Lie algebroid structures and Lagrangian systems on affine bundles,, J. Geom. Phys., 44 (2002), 70. doi: 10.1016/S0393-0440(02)00114-6. Google Scholar
E. Martínez, Variational calculus on Lie algebroids,, ESAIM: Control, 14 (2008), 356. doi: 10.1051/cocv:2007056. Google Scholar
L. Noakes, G. Heinzinger and B. Paden, Cubic splines on curved spaces,, IMA Journal of Mathematical Control & Information, 6 (1989), 465. doi: 10.1093/imamci/6.4.465. Google Scholar
L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze and E. F. Mishchenko, The Mathematical Theory of Optimal Processes,, Interscience Publishers John Wiley & Sons, (1964). Google Scholar
P. D. Prieto-Martínez and N. Román-Roy, Lagrangian-Hamiltonian unified formalism for autonomous higher-order dynamical systems,, J. Phys. A: Math. Theor., 44 (2011). doi: 10.1088/1751-8113/44/38/385203. Google Scholar
W. Tulczyjew, The Lagrange differential,, Bull. Acad. Polon. Sci., 24 (1976), 1089. Google Scholar
A. Weinstein, Lagrangian Mechanics and groupoids,, Fields Inst. Comm., 7 (1996), 207. Google Scholar
H. Yoshimura and J. E. Marsden, Reduction of Dirac structures and the Hamilton-Pontryagin principle,, Rep. Math. Phys., 60 (2007), 381. doi: 10.1016/S0034-4877(08)00004-9. Google Scholar
Melvin Leok, Diana Sosa. Dirac structures and Hamilton-Jacobi theory for Lagrangian mechanics on Lie algebroids. Journal of Geometric Mechanics, 2012, 4 (4) : 421-442. doi: 10.3934/jgm.2012.4.421
Juan Carlos Marrero, D. Martín de Diego, Diana Sosa. Variational constrained mechanics on Lie affgebroids. Discrete & Continuous Dynamical Systems - S, 2010, 3 (1) : 105-128. doi: 10.3934/dcdss.2010.3.105
Pedro D. Prieto-Martínez, Narciso Román-Roy. Higher-order mechanics: Variational principles and other topics. Journal of Geometric Mechanics, 2013, 5 (4) : 493-510. doi: 10.3934/jgm.2013.5.493
Robert I. McLachlan, Ander Murua. The Lie algebra of classical mechanics. Journal of Computational Dynamics, 2019, 6 (2) : 345-360. doi: 10.3934/jcd.2019017
Jorge Cortés, Manuel de León, Juan Carlos Marrero, Eduardo Martínez. Nonholonomic Lagrangian systems on Lie algebroids. Discrete & Continuous Dynamical Systems - A, 2009, 24 (2) : 213-271. doi: 10.3934/dcds.2009.24.213
Katarzyna Grabowska, Marcin Zając. The Tulczyjew triple in mechanics on a Lie group. Journal of Geometric Mechanics, 2016, 8 (4) : 413-435. doi: 10.3934/jgm.2016014
Michał Jóźwikowski, Mikołaj Rotkiewicz. Bundle-theoretic methods for higher-order variational calculus. Journal of Geometric Mechanics, 2014, 6 (1) : 99-120. doi: 10.3934/jgm.2014.6.99
Mariano Giaquinta, Paolo Maria Mariano, Giuseppe Modica. A variational problem in the mechanics of complex materials. Discrete & Continuous Dynamical Systems - A, 2010, 28 (2) : 519-537. doi: 10.3934/dcds.2010.28.519
Leonardo Colombo. Second-order constrained variational problems on Lie algebroids: Applications to Optimal Control. Journal of Geometric Mechanics, 2017, 9 (1) : 1-45. doi: 10.3934/jgm.2017001
Leonardo Colombo, David Martín de Diego. Higher-order variational problems on lie groups and optimal control applications. Journal of Geometric Mechanics, 2014, 6 (4) : 451-478. doi: 10.3934/jgm.2014.6.451
Henry O. Jacobs, Hiroaki Yoshimura. Tensor products of Dirac structures and interconnection in Lagrangian mechanics. Journal of Geometric Mechanics, 2014, 6 (1) : 67-98. doi: 10.3934/jgm.2014.6.67
Gennadi Sardanashvily. Lagrangian dynamics of submanifolds. Relativistic mechanics. Journal of Geometric Mechanics, 2012, 4 (1) : 99-110. doi: 10.3934/jgm.2012.4.99
Juan Carlos Marrero, David Martín de Diego, Ari Stern. Symplectic groupoids and discrete constrained Lagrangian mechanics. Discrete & Continuous Dynamical Systems - A, 2015, 35 (1) : 367-397. doi: 10.3934/dcds.2015.35.367
José F. Cariñena, Irina Gheorghiu, Eduardo Martínez. Jacobi fields for second-order differential equations on Lie algebroids. Conference Publications, 2015, 2015 (special) : 213-222. doi: 10.3934/proc.2015.0213
Jean-Marie Souriau. On Geometric Mechanics. Discrete & Continuous Dynamical Systems - A, 2007, 19 (3) : 595-607. doi: 10.3934/dcds.2007.19.595
Michał Jóźwikowski, Mikołaj Rotkiewicz. Models for higher algebroids. Journal of Geometric Mechanics, 2015, 7 (3) : 317-359. doi: 10.3934/jgm.2015.7.317
Gianne Derks. Book review: Geometric mechanics. Journal of Geometric Mechanics, 2009, 1 (2) : 267-270. doi: 10.3934/jgm.2009.1.267
Andrew D. Lewis. The physical foundations of geometric mechanics. Journal of Geometric Mechanics, 2017, 9 (4) : 487-574. doi: 10.3934/jgm.2017019
Jean-Claude Zambrini. Stochastic deformation of classical mechanics. Conference Publications, 2013, 2013 (special) : 807-813. doi: 10.3934/proc.2013.2013.807
Vieri Benci. Solitons and Bohmian mechanics. Discrete & Continuous Dynamical Systems - A, 2002, 8 (2) : 303-317. doi: 10.3934/dcds.2002.8.303
Eduardo Martínez
|
CommonCrawl
|
Networks & Heterogeneous Media
December 2015 , Volume 10 , Issue 4
Select all articles
Export/Reference:
Optima and equilibria for traffic flow on networks with backward propagating queues
Alberto Bressan and Khai T. Nguyen
2015, 10(4): 717-748 doi: 10.3934/nhm.2015.10.717 +[Abstract](3096) +[PDF](606.8KB)
This paper studies an optimal decision problem for several groups of drivers on a network of roads. Drivers have different origins and destinations, and different costs, related to their departure and arrival time. On each road the flow is governed by a conservation law, while intersections are modeled using buffers of limited capacity, so that queues can spill backward along roads leading to a crowded intersection. Two main results are proved: (i) the existence of a globally optimal solution, minimizing the sum of the costs to all drivers, and (ii) the existence of a Nash equilibrium solution, where no driver can lower his own cost by changing his departure time or the route taken to reach destination.
Alberto Bressan, Khai T. Nguyen. Optima andequilibria for traffic flow on networkswith backward propagating queues. Networks & Heterogeneous Media, 2015, 10(4): 717-748. doi: 10.3934/nhm.2015.10.717.
Analysis of a system of nonlocal conservation laws for multi-commodity flow on networks
Martin Gugat, Alexander Keimer, Günter Leugering and Zhiqiang Wang
2015, 10(4): 749-785 doi: 10.3934/nhm.2015.10.749 +[Abstract](3750) +[PDF](1920.3KB)
We consider a system of scalar nonlocal conservation laws on networks that model a highly re-entrant multi-commodity manufacturing system as encountered in semi-conductor production. Every single commodity is modeled by a nonlocal conservation law, and the corresponding PDEs are coupled via a collective load, the work in progress. We illustrate the dynamics for two commodities. In the applications, directed acyclic networks naturally occur, therefore this type of networks is considered. On every edge of the network we have a system of coupled conservation laws with nonlocal velocity. At the junctions the right hand side boundary data of the foregoing edges is passed as left hand side boundary data to the following edges and PDEs. For distributing junctions, where we have more than one outgoing edge, we impose time dependent distribution functions that guarantee conservation of mass. We provide results of regularity, existence and well-posedness of the multi-commodity network model for $L^{p}$-, $BV$- and $W^{1,p}$-data. Moreover, we define an $L^{2}$-tracking type objective and show the existence of minimizers that solve the corresponding optimal control problem.
Martin Gugat, Alexander Keimer, G\u00FCnter Leugering, Zhiqiang Wang. Analysis of a system of nonlocal conservation laws for multi-commodity flow on networks. Networks & Heterogeneous Media, 2015, 10(4): 749-785. doi: 10.3934/nhm.2015.10.749.
Practical synchronization of generalized Kuramoto systems with an intrinsic dynamics
Seung-Yeal Ha, Se Eun Noh and Jinyeong Park
We study the practical synchronization of the Kuramoto dynamics of units distributed over networks. The unit dynamics on the nodes of the network are governed by the interplay between their own intrinsic dynamics and Kuramoto coupling dynamics. We present two sufficient conditions for practical synchronization under homogeneous and heterogeneous forcing. For practical synchronization estimates, we employ the configuration diameter as a Lyapunov functional, and derive a Gronwall-type differential inequality for this value.
Seung-Yeal Ha, Se Eun Noh, Jinyeong Park. Practical synchronization of generalized Kuramoto systems with an intrinsic dynamics. Networks & Heterogeneous Media, 2015, 10(4): 787-807. doi: 10.3934/nhm.2015.10.787.
(Almost) Everything you always wanted to know about deterministic control problems in stratified domains
Guy Barles and Emmanuel Chasseigne
We revisit the pioneering work of Bressan & Hong on deterministic control problems in stratified domains, i.e. control problems for which the dynamic and the cost may have discontinuities on submanifolds of $\mathbb{R}^N$. By using slightly different methods, involving more partial differential equations arguments, we $(i)$ slightly improve the assumptions on the dynamic and the cost; $(ii)$ obtain a comparison result for general semi-continuous sub and supersolutions (without any continuity assumptions on the value function nor on the sub/supersolutions); $(iii)$ provide a general framework in which a stability result holds.
Guy Barles, Emmanuel Chasseigne. (Almost) Everything you always wanted to know about deterministic control problems in stratified domains. Networks & Heterogeneous Media, 2015, 10(4): 809-836. doi: 10.3934/nhm.2015.10.809.
Regularity of densities in relaxed and penalized average distance problem
Xin Yang Lu
The average distance problem finds application in data parameterization, which involves ``representing'' the data using lower dimensional objects. From a computational point of view it is often convenient to restrict the unknown to the family of parameterized curves. The original formulation of the average distance problem exhibits several undesirable properties. In this paper we propose an alternative variant: we minimize the functional \begin{equation*} \int_{{\mathbb{R}}^d\times \Gamma_\gamma} |x-y|^p {\,{d}}\Pi(x,y)+\lambda L_\gamma +\varepsilon\alpha(\nu) +\varepsilon' \eta(\gamma)+\varepsilon''\|\gamma'\|_{TV}, \end{equation*} where $\gamma$ varies among the family of parametrized curves, $\nu$ among probability measures on $\gamma$, and $\Pi$ among transport plans between $\mu$ and $\nu$. Here $\lambda,\varepsilon,\varepsilon',\varepsilon''$ are given parameters, $\alpha$ is a penalization term on $\mu$, $\Gamma_\gamma$ (resp. $L_\gamma$) denotes the graph (resp. length) of $\gamma$, and $\|\cdot\|_{TV}$ denotes the total variation semi-norm. We will use techniques from optimal transport theory and calculus of variations. The main aim is to prove essential boundedness, and Lipschitz continuity for Radon-Nikodym derivative of $\nu$, when $(\gamma,\nu,\Pi)$ is a minimizer.
Xin Yang Lu. Regularity of densities in relaxed and penalized average distance problem. Networks & Heterogeneous Media, 2015, 10(4): 837-855. doi: 10.3934/nhm.2015.10.837.
A destination-preserving model for simulating Wardrop equilibria in traffic flow on networks
Emiliano Cristiani and Fabio S. Priuli
In this paper we propose a LWR-like model for traffic flow on networks which allows to track several groups of drivers, each of them being characterized only by their destination in the network. The path actually followed to reach the destination is not assigned a priori, and can be chosen by the drivers during the journey, taking decisions at junctions.
The model is then used to describe three possible behaviors of drivers, associated to three different ways to solve the route choice problem: 1. Drivers ignore the presence of the other vehicles; 2. Drivers react to the current distribution of traffic, but they do not forecast what will happen at later times; 3. Drivers take into account the current and future distribution of vehicles. Notice that, in the latter case, we enter the field of differential games, and, if a solution exists, it likely represents a global equilibrium among drivers.
Numerical simulations highlight the differences between the three behaviors and offer insights into the existence of equilibria.
Emiliano Cristiani, Fabio S. Priuli. A destination-preserving model for simulating Wardrop equilibria in traffic flow on networks. Networks & Heterogeneous Media, 2015, 10(4): 857-876. doi: 10.3934/nhm.2015.10.857.
Modeling opinion dynamics: How the network enhances consensus
Marina Dolfin and Mirosław Lachowicz
In this paper we analyze emergent collective phenomena in the evolution of opinions in a society structured into few interacting nodes of a network. The presented mathematical structure combines two dynamics: a first one on each single node and a second one among the nodes, i.e. in the network. The aim of the model is to analyze the effect of a network structure on a society with respect to opinion dynamics and we show some numerical solutions addressed in this direction, i.e. comparing the emergent behaviors of a consensus-dissent dynamic on a single node when the effect of the network is not considered, with respect to the emergent behaviors when the effect of a network structure linking few interacting nodes is considered. We adopt the framework of the Kinetic Theory for Active Particles (KTAP), deriving a general mathematical structure which allows to deal with nonlinear features of the interactions and representing the conceptual framework toward the derivation of specific models. A specific model is derived from the general mathematical structure by introducing a consensus-dissent dynamics of interactions and a qualitative analysis is given.
Marina Dolfin, Miros\u0142aw Lachowicz. Modeling opinion dynamics: How the network enhances consensus. Networks & Heterogeneous Media, 2015, 10(4): 877-896. doi: 10.3934/nhm.2015.10.877.
Singular perturbation and bifurcation of diffuse transition layers in inhomogeneous media, part II
Chaoqun Huang and Nung Kwan Yip
In this paper, we study the connection between the bifurcation of diffuse transition layers and that of the underlying limit interfacial problem in a degenerate spatially inhomogeneous medium. In dimension one, we prove the existence of bifurcation of diffuse interfaces in a pitchfork spatial inhomogeneity for a partial differential equation with bistable type nonlinearity. Bifurcation point is characterized quantitatively as well. The main conclusion is that the bifurcation diagram of the diffuse transition layers inherits mostly from that of the zeros of the spatial inhomogeneity. However, explicit examples are given for which the bifurcation of these two are different in terms of (im)perfection. This is a continuation of [8] which makes use of bilinear nonlinearity allowing the use of explicit solution formula. In the current work, we extend the results to a general smooth nonlinear function. We perform detail analysis of the principal eigenvalue and eigenfunction of some singularly perturbed eigenvalue problems and their interaction with the background inhomogeneity. This is the first result that takes into account simultaneously the interaction between singular perturbation, spatial inhomogeneity and bifurcation.
Chaoqun Huang, Nung Kwan Yip. Singular perturbation and bifurcation of diffuse transition layers in inhomogeneous media, part II. Networks & Heterogeneous Media, 2015, 10(4): 897-948. doi: 10.3934/nhm.2015.10.897.
5 Year Impact Factor: 1.384
RSS this journal
Tex file preparation
Abstracted in
Add your name and e-mail address to receive news of forthcoming issues of this journal:
Select the journal
Select Journals
|
CommonCrawl
|
A comparative study of morphological characteristics of medium-density fiberboard dust by sieve and image analyses
Tao Ding1,
Jiafeng Zhao1,
Nanfeng Zhu ORCID: orcid.org/0000-0002-4981-68311 &
Chengming Wang2
Journal of Wood Science volume 66, Article number: 55 (2020) Cite this article
Sanding dust is the main source of dust emission during the manufacturing process of medium-density fiberboard (MDF), and particle size and shape characteristics are the fundamental properties influencing its environmental influence and handling behaviors. However, there are few deep and comprehensive researches on the morphology of MDF sanding dust. In this study, the morphological characteristics of MDF sanding dust were explored by sieve and image analyses. It was found that more than 95% of MDF sanding dust was inhalable particles smaller than 100 μm, which poses a considerable potential risk to human health and safety, especially with the presence of other chemical constituents. The particle size span of MDF dust was relatively wide though the particle surface texture was quite uniform. The particle geometric proportion represented by aspect ratio decreased markedly with the reduction of particle size. The larger particles presented typical anisotropic structure, while the smaller ones showed homogeneous appearance, indicating quite complex handling behaviors. In addition, image analysis was found to provide a better insight into the morphological characteristics of MDF sanding dust compared to sieve analysis, and could be a promising dust morphology evaluation technology.
Medium-density fiberboard (MDF) is a wood-based panel product made primarily from wood fibers, which are bonded together by synthetic resins under heat and pressure. MDF is a prominent nonstructural composite widely used in furniture and cabinet industries. In 2017, the world's fiberboard output surpassed 118 million m3, and China alone contributed about 60 million m3 [1].
Panel sanding is a critical operation in the finishing stage of MDF manufacturing because it determines both the thickness and surface quality of the product. Large amounts of dust are generated in MDF sanding process, and the dust load could be as high as 53.67 kg/m3 [2], which means 50t sanding dust may have to be handled every day in a typical modern MDF mill with an annual output of 300 thousand m3. Although dust collecting and conveying systems have been equipped in most MDF mills, the system failures or safety accidents still occur occasionally.
The handling of sanding dust affects penal grade, environmental quality and workplace safety of MDF manufacturing. If sanding dust had not been smoothly sucked into suction hoods, sand belts would be clogged, which would in turn deteriorate sanding quality and become the main reason for the failure of sand belts [3]. The MDF particles leaked into the air are responsible for respiratory diseases among continuously exposed workers. Wood dust is classified as carcinogenic to humans [4], and MDF dust is more hazardous as MDF is usually impregnated with urea–formaldehyde (UF) resin. The chemical composition makes it a source of formaldehyde exposure [5]. MDF dust was reported to cause more nasal symptoms among workers than solid wood dust [6]. Sanding dust particles are regarded as the finest dust in the wood processing industry [7], and are more likely to penetrate into human respiratory system. The amount of respirable dust generated during the sanding of MDF is more than other wood working processes, including solid wood sanding [8, 9]. The fineness of MDF dust also increases both the feasibility and violence of dust explosion [10, 11].
Proper handling of MDF sanding dust requires a full understanding of its properties. Morphological characteristics including particle size distribution (PSD) and shape distribution are the fundamental factors influencing dust handling behaviors such as flowability, bulk density and compressibility, etc. [12,13,14]. Large particles with spherical shape generally have good flowability, which deteriorates with decreased particle size as the inter-particle cohesive force increases [15]. For irregularly shaped particles, the relative motion becomes difficult due to the presence of more contacting points between them. If elongated and hook-shaped particles are involved, it will be more complicated because they tend to form bridges by particle interlocking [16].
Limited studies have been performed on size and shape characteristics of MDF sanding dust. Mazumder suggested that a significant portion of MDF sanding dust was respirable particles with aerodynamic diameters smaller than 10 μm, and that the particles were of irregular shapes with sharp edges [5]. Chung et al. investigated the MDF sanding dust emitted from handheld sander and found the portion of respirable dust was less than 10%, but a portion as high as 30% was also cited in his paper [9]. Očkajová et al. studied the size distribution of MDF sanding dust by sieve analysis and found that 96.16% of the sample particles were smaller than 100 μm, and that the most common particles were in the range between 32 and 63 μm [17]. No quantitative study on shape distribution of MDF sanding dust has been found by the authors of this paper.
In this study, morphological characteristics of MDF sanding dust were investigated by sieve analysis (SA) and image analyses (IA). SA has been widely used to determine the PSD of bio-based particles. Its popularity derives from low cost, simple procedure, straightforward results and the similarity to the particle separating practice in wood-based panel industry. SA is a standard method to determine the PSD of some bio-based particles [18,19,20], and has been applied in many scientific studies [21,22,23].
But in recent years, questions arose on the competency of SA for bio-based particles. The size that SA measures is the second smallest dimension, i.e., the width of a particle [21, 24]. For spherical particles, the PSD obtained by SA is quite reasonable. But most wood dust generated from mechanical processes is irregularly shaped due to the anisotropic structure of wood. In this case, SA alone can barely present the morphological characteristics of wood dust. Besides, it is hard for the elongated or fibrous wood particles to fall through the sieves. The sieving efficiency, the percentage of the particles that can properly fall through the sieves according to their width, was reported around 70% [21]. In some studies, IA was suggested as an alternative method or a combination of SA and IA was applied to get a more comprehensive understanding of particle morphology [24, 25]. Once considered time consuming, IA systems are now capable of handling a large quantity of particles and presenting the statistical results instantaneously. The major advantage of IA is that, besides size distribution, it can give quantitative particle shape distribution. In this study, 2 IA technologies, i.e., scanning electron microscopy (SEM) and flatbed scanning image analysis, were applied. The former was used as a qualitative description method and the latter provided quantitative analysis. The results were compared with those of the SA to evaluate the robustness of the technologies.
MDF sanding dust was taken from a MDF mill in Jiangsu Province, China. The main panel constituents were hybrid poplar (Populus sp.) fibers and UF resin. The panel sanding line was composed of 3 wide belt sanding machines. Five types of sanding belts were mounted and the grit sizes were P36, P80, P120, P150 and P180 from the entry to the outlet of the sanding line. MDF panels were fed at a speed of 55 m/min and sanded at a speed of 1460 rpm. The dust emitted during the sanding process was collected by a dust collecting system and stored in a silo where the dust was sampled for the experiments. The moisture content of the sample dust was 6.5%.
Sieve analysis (SA)
In the sieve analysis, 85 g sample particles were sieved by a sieve shaker (A3, Fritsch GmbH, Idar-Oberstein, Germany) for 10 min with 3 mm amplitude. The sieve stack was composed of 5 sieves, and their mesh sizes were 1000, 500, 250, 100 and 40 μm from the top to the bottom, respectively. Wood dust retained on each sieve and the collecting pan was then weighted for size distribution by an electronic balance (BS2202S, Sartorius AG, Goettingen, Germany). The analysis was performed twice and the average values were considered the results.
Scanning electron microscopy (SEM)
The particles used for SEM analysis were taken from subsamples left on each sieve and the collecting pan. They were dried to the oven dry state, and then coated with gold using a sputter coater (JFC 1600, JEOL Ltd, Tokyo, Japan) and placed in a SEM (JSM 7600F, JEOL Ltd, Tokyo, Japan) for photographing.
Flatbed scanning image analysis
For the flatbed scanning, 20 mg sample particles were dispersed by a vacuum dispersion device (VDD270, Occhio s.a., Angleur, Belgium) where they were placed on a plastic membrane covering the top of a cylindrical chamber as the air inside was pumped out. Once the vacuum level in the chamber was low enough to destroy the plastic membrane, the particles fell into the chamber and gently settled on the glass plate of the image analyzer (500 Nano, Occhio s.a., Angleur, Belgium) for image analysis. The images of the scanned samples were instantaneously analyzed by the built-in software CallistoEXPERT for calculating particle size and shape distribution. The number-weighted statistical results were converted to the mass-weighted ones by assuming that all particles have identical flatness ratios.
The inner diameter (Din), i.e., the biggest circle inscribed into the projection area of a particle (Fig. 1) was chosen as the representative size parameter because it showed a good correlation with sieving diameter [26]. It has been suggested that one or two key shape factors can well describe the shape characteristics of a certain kind of particle [27]. Since the basic constituent of MDF is wood fiber, the length-to-width ratio was chosen as a macro-shape descriptor to describe particle geometric proportion. Particle solidity was chosen as a meso-shape descriptor, which reflects the overall concavity of a particle projection area and provides the information of particle surface structure.
Inner diameter, convex hull, length and width of a particle projection
The aspect ratio (AR) of the dust particle was calculated by Eq. 1:
$$ {\text{AR}} = 1 - \frac{W}{L}, $$
where W is the width of the smallest box that contains the projection of a particle with the principal directions the same as the projection of the particle, and L is the length of the box (Fig. 1).
Solidity was calculated by Eq. 2:
$$ S = \frac{S}{{S_{\text{A}} }}, $$
where S is the projection area of the particle, and SA is the area of the convex hull bounding the projection (Fig. 1).
The relative extent of size or shape distribution was evaluated by relative span (RS):
$$ {\text{RS}} = \frac{{P_{90} - P_{10} }}{{P_{50} }}, $$
where P90, P50 and P10 are the 90th, 50th and 10th percentiles of the size and shape distribution, respectively.
Morphological characteristics determined by SA and SEM
The SA showed that the great majority (96%) of the MDF sanding dust particles was smaller than 100 μm (Fig. 2). They belong to inhalable particles tending to stay longer and travel wider in the air, which are unsuitable for the living and working environments [8, 28]. Notably, the particles smaller than 40 μm accounted for 79.6%, which are capable of penetrating into the upper respiratory tract and pose health risk to humans [5, 29].
Mass-weighted size distribution of MDF sanding dust determined by sieve analysis (histogram: differential size distribution, line: cumulative size distribution)
The size characteristic distinguishes sanding dust from wood dust emitted during other machining processes, like sawing, planning and milling. There are orders of magnitude differences in size between them. The SA of pine sawdust by Chaloupkova et al. showed that only 11.93% of the particles were smaller than 630 μm [22]. A similar study on timber sawdust performed by Benthien et al. also indicated a 20% portion in the same range [25]. As mentioned above, smaller size means lower flowability due to the increase of cohesive force between particles. For food particles, the influence of cohesive force could still be significant when the particle size was up to 200 μm [15, 30]. The size distribution of MDF sanding dust clearly indicates an even lower flowability.
Particles retained on each sieve were observed by SEM. Fibrous particles were found in the sample retained on the 40-μm mesh-size sieve (Fig. 3a). Most of them were around 1 mm in length and 10 μm to 20 μm in width, similar to the size of typical hardwood fibers [31]. No fibrous particle existed in the sample falling down the 40-μm sieve though some elongated particles were found (Fig. 3b). These particles, and those with even shorter lengths, were irregularly shaped fiber fragments generated by transwall failure of the fibers under the sanding forces, which were hard to be classified into a certain shape category. However, there also existed some particles with relatively regular shape, like crystal or brick shape as shown in Fig. 4. They were remarkably different from wood particles in shape and surface texture. This indicated the presence of components other than wood in the MDF sanding dust, which should be derived from chemical components in the MDF panel like UF resin or the combination of wood fibers and chemical additives. In small quantities as they presented, the physiochemical properties of MDF sanding dust could be significantly changed to cause environmental impacts. For example, the burning and pyrolysis of UF-containing wood wastes can release environmentally harmful gases, which results in a restriction of their energy utilization [32].
SEM images of MDF sanding dust. a Dust retained on the 40-μm sieve, b dust passing through the 40-μm sieve
Non-wood components in MDF sanding dust sample. a Crystal-shaped particle; b brick-shaped particle
Morphological characteristics determined by flatbed scanning image analysis
The IA provided much detailed statistical results on particle morphology. According to Table 1, the median size (50th percentile value) of MDF sanding dust was 12.90 μm, and the mean value was a little bit higher (17.28 μm), which might be due to the existence of relatively large particles with the size up to 192.70 μm. The RS of particle size was 2.20, which is wider than some other bio-based particles [33], indicating the heterogeneity of MDF sanding dust size.
Table 1 Size and shape distributions of MDF sanding dust determined by flatbed scanning image analysis
Sub-micrometer particles as small as 0.66 μm were also detected, showing the presence of ultrafine particles in MDF sanding dust. Similarly, particles smaller than 0.1 μm was detected when sanding MDF panels with P240 sandpaper according to Welling et al. [7]. In general, sub-micrometer is the domain of fumes and smokes, and mechanical processing of solid materials seldom produces particles less than 1 μm, which might be attributed to the volatile compounds in the MDF resin [7]. It can therefore be suggested that the existence of the resin in MDF not only influences the physiochemical properties of MDF sanding dust, but also extends the lowest limit of particle size.
The PSD obtained by IA also showed the dominance of inhalable particles in MDF sanding dust with 99.6% of the sample particles smaller than 100 μm (Fig. 5). What's more, of all the particles, around one-third was smaller than 10 μm and termed PM10, which can penetrate into the lower region of human respiratory tract. A small quantity (1.5%) with the size lower than 2.5 μm termed PM2.5 was also detected, which are fine inhalable particles to have the greatest health risks to humans [29].
Mass-weighted size distribution of MDF sanding dust determined by flatbed scanning image analysis (histogram: differential size distribution, line: cumulative size distribution)
The shape analysis showed that the MDF sanding dust samples had a pretty low aspect ratio as a whole, and the mean AR value was 0.32, almost the same as the median value of 0.31 (Table 1), which indicated that the width of at least 50% of the particles was comparable to the length. This coincided with the visual observation of SEM pictures (Fig. 3b). But elongated or fibrous particles also existed. Around 10% of the particles had length-to-width ratios bigger than 2, and the particles with AR as high as 0.86 existed, representing wood fibers observed in the SEM picture (Fig. 3a), which are likely to interlock with each other and form mechanical bridges in the handling processes. But given that they only accounted for less than 2% of the total in weight, interlocking should not be considered as the main mechanism influencing the handling behaviors of MDF sanding dust.
The AR of MDF sanding dust showed different distribution characteristics in different size ranges. Larger particles presented a wider AR distribution. But for smaller particles, a much less AR variation was found (Fig. 6). The AR of particles less than 10 μm were concentrated in a range between 0.2 and 0.3, which meant the size reduction gradually reduces the shape variation of MDF sanding dust and made them more homogeneous. This seems to be a general trend for bio-based particles and has been repeatedly reported [23, 34, 35]. Figure 7 illustrates how particle shape varies with size reduction. The big particles shown in Fig. 7a are fragments of fiber bundles with various aspect ratios while the fine particle shown in Fig. 7b presents an appearance similar to a sphere. It can also be found that big particles inherit the anisotropic nature of wood. The length of the particles is parallel to the longitudinal direction of wood fibers, which makes the particle orientation an important factor influencing its handling properties. On the contrary, when the particle size is similar to the fiber cell wall thickness, almost no anatomical characteristics of wood can be found, indicating a homogeneous handling behavior totally different from that of big particles.
Shape distribution of MDF sanding dust and its correlation with particle size
Shape comparison of big (a) and small (b) MDF sanding particles
Contrary to AR, solidity analysis revealed a very narrow distribution with a RS value of only 0.26. The mean and 50th percentile solidity values were 0.89 and 0.93, respectively (Table 1). The high solidity means less concave positions on MDF sanding dust surfaces, which can thereby be characterized as flat or smooth surface.
The AR and solidity values demonstrated that full breakage and surface erosion of wood fibers occurred during the sanding of the MDF panels. The wood fibers were subject to the interactions with the sanding belt grits, the panel surface and other particles, which broke the fibers and eroded small irregularities on particle surfaces, resulting in smaller, shorter and smoother particles discovered in this study.
Comparison between SA and IA
Both SA and IA revealed that more than 95% of the MDF sanding dust particles was suspended particles, most of which were smaller than 40 μm and capable of entering in the human respiratory tract. However, difference existed between SA and IA in the mass percentage of those smaller than 40 μm. According to IA, more than 90% of the particles fell in this range while it was 79.6% in SA. Several factors contribute to the statistical difference between IA and SA. For the fibrous particles, it is possible to fall through millimeter-wide apertures after sufficient vibration. But the penetration of apertures as small as 40 μm wide can hardly be achieved as they are much smaller than the longitudinal dimension of the fibers. Some of sieve apertures were even clogged by the partially passed fibers in the vibration process, which was the main reason for the 2% mass loss in the SA experiments. That's why fibers could still be found in the sample retained on the 40 μm sieve, but no single one was present in the sample passing through (Fig. 3). Besides shape characteristics, factors like the impact forces between the particles during sieve vibration and the cohesion of small particles all contribute to the retaining of particles on the sieves.
Compared to SA, IA obviously provided a better insight into the morphological characteristics of MDF sanding dust by providing detailed information down to the sub-micrometer level. SA did not present enough information on the fine particles due to the limit of mesh size, which makes it more suitable for the analysis of coarse particles from sawing or milling processes. Particle irregularity, especially aspect ratio, was another factor limiting the application of SA for wood particles, which was the main contributor to the under-evaluation of fine particles smaller than 40 μm in the SA study.
The morphology of MDF sanding dust was investigated by sieve analysis, scanning electron microscopy and flatbed scanning image analysis. The great majority of the MDF sanding dust was found to be inhalable particles smaller than 100 μm. Moreover, other chemical components were found in the dust samples, which influences not only the size distribution, but also physiochemical properties of MDF sanding dust. The relative span of particle size was wide. Bigger particles showed a wider distribution of aspect ratio, while smaller ones exhibited homogeneous appearance. Only the surface texture was uniform and could be characterized as smooth surface. Taken together, MDF sanding dust might pose a considerable occupational health risk and imply quite complex handling behaviors as well. For the MDF industry, care should be taken when the sanding dust is handled. Frequent inspections are suggested to be made where the particles are easy to accumulate and filtering materials with higher efficiency are recommended to separate them from the air.
The sieve analysis presented particle size distribution comparable to the image analysis, but it failed to provide detailed information on fine fractions of the sample. It is suggested to be applied for homogeneous or coarse particles ready to settle down in the air. The image analysis demonstrated itself a robust particle morphology analysis technology by offering detailed results of both size and shape distribution and their correlations. It deserves further exploration for the better application in the field of bio-particle analysis.
Most data analyzed during this study are included in this published article. The supplementary information is available from the corresponding author on reasonable request.
MDF:
Medium-density fiberboard
Urea formaldehyde
PSD:
Particle size distribution
IA:
AR:
Relative span
Food and Agriculture Organization of the United Nations (2019) FAO yearbook of forest products 2017. FAO, Geneva. ISBN 978-92-5-131717-4
Rivela B, Moreira MT, Feijoo G (2007) Life cycle inventory of medium density fibreboard. Int J LCA 12:143–150
Zhang B (2013) Operating principles of wide belt sanding machine for wood-based panel industry (continued). China Wood Panels 20:25–31 (In Chinese)
International Agency for Research on Cancer (1995) IARC monographs on the evaluation of carcinogenic risks to humans volume 62, wood dust and formaldehyde. World health organization, Geneva
Mazumder MK (1997) Aerodynamic properties and respiratory deposition characteristics of formaldehyde impregnated medium-density fiberboard particles. Part Sci Technol 15:37–49
Priha E, Pennanen S, Rantio T, Uitti J, Liesivuori J (2004) Exposure to and acute effects of medium-density fiber board dust. J Occup Environ Hyg 1:738–744
Welling I, Lehtimaki M, Rautio S, Lahde T, Enbom S, Hynynen P, Hameri K (2009) Wood dust particle and mass concentrations and filtration efficiency in sanding of wood materials. J Occup Environ Hyg 6:90–98
Hursthouse A, Allan F, Rowley L, Smith F (2004) A pilot study of personal exposure to respirable and inhalable dust during the sanding and sawing of medium density fibreboard (MDF) and soft wood. Int J Environ Health Re 14:323–326
Chung KY, Cuthbert RJ, Revell SR, Wassel SG, Summers N (2000) A study on dust emission, particle size distribution and formaldehyde during machining of medium density fiberboard. Ann Occup Hyg 44:455–456
Calle S, Klaba L, Thomas D, Perrin L, Dufaud O (2005) Influence of the size distribution and concentration on wood dust explosion: experiments and reaction modelling. Powder Technol 157:144–148
Eckhoff RK (2009) Understanding dust explosions. The role of powder science and technology. J Loss Prev Process Ind 22:105–116
Fua XW, Huckb D, Makeinb L, Armstronga B, Willenb U, Freemana T (2012) Effect of particle shape and size on flow properties of lactose powders. Particuology 10:203–0208
Cleary PW, Sawley ML (2002) DEM modelling of industrial granular flows: 3D case studies and the effect of particle shape on hopper discharge. Appl Math Model 26:89–111
Ganesana V, Rosentraterb KA, Muthukumarappana K (2008) Flowability and handling characteristics of bulk solids and powders—a review with implications for DDGS. Biosyst Eng 101:425–435
Lee YJ, Yoon WB (2015) Flow behavior and hopper design for black soybean powders by particle size. J Food Eng 144:10–19
Gil M, Schott D, Arauzo I, Teruel E (2013) Handling behavior of two milled biomass: SRF poplar and corn stover. Fuel Process Technol 112:76–85
Očkajová A, Kučerka M (2009) Granular analysis of dust particles from profiling and sanding process of MDF. In: Proceedings of the 3rd International Scientific Conference on Woodworking Technique, Zalesina, 2–5 September 2009
EN 15149-1:2010 standard (2010) Solid biofuels—determination of particle size distribution—part 1: oscillating screen method using sieve apertures of 1 mm and above. CEN European Committee for Standardization
EN 15149-2:2010 standard (2010) Solid biofuels. Determination of particle size distribution. Vibrating screen method using sieve apertures of 3.15 mm and below. CEN European Committee for Standardization
ANSI/ASAE S424.1 MAR1992 R2007. ASABE Standards (2008) Method of determining and expressing particle size of chopped forage materials by screening
Gil M, Teruel E, Arauzo I (2014) Analysis of standard sieving method for milled biomass through image processing. Effects of particle shape and size for poplar and corn stover. Fuel 116:328–340
Chaloupkova V, Ivanova T, Havrland B (2016) Sieve analysis of biomass: accurate method for determination of particle size distribution. In: Proceedings of the 15th International Scientific Conference on Engineering for Rural Development, Jelgava, 25–27 May, 2016
Guo Q, Chen X, Liu H (2012) Experimental research on shape and size distribution of biomass particle. Fuel 94:551–555
Igathinathanea C, Pordesimoa LO, Columbusa EP, Batchelora WD, Sokhansanjb S (2009) Sieveless particle size distribution analysis of particulate materials through computer vision. Comput Electron Agr 66:147–158
Benthien JT, Heldner S, Ohlmeyer M (2018) Size distribution of wood particles for extruded particleboard production determined by sieve analysis and image analysis-based particle size measurement. Eur J Wood Prod 76:375–379
Pirard E, Vergara N, Chapeau V (2004) Direct estimation of sieve size distributions from 2-D image analysis of sand particles. In: Proceeding of International Congress for Particle Technology, Nuremberg, 16–18 March, 2004
Bouwmana AM, Bosmaa JC, Vonkb P (2004) Which shape factor(s) best describe granules? Powder Technol 146:66–72
Ockajova A, Beljakova A, Luptakova J (2008) Selected properties of spruce dust generated from sanding operations. Drvna Ind 59:3–10
Elizabeth M, Yepes G, Cremades LV (2011) Characterization of wood dust from furniture by scanning electron microscopy and energy-dispersive X-ray analysis. Ind Health 49:492–500
Teunou E, Fitzpatrick JJ, Synnott EC (1999) Characterisation of food powder flowability. J Food Eng 39:31–37
Shmulsky R, Jones PD (2011) Forest products and wood science an introduction (6th edition). Wiley-Blackwell, Chichester, p 80
Zhan H, Zhuang X, Song Y, Liu J, Li S, Chang G, Yin X, Wu C, Wang X (2019) A review on evolution of nitrogen-containing species during selective pyrolysis of waste wood-based panels. Fuel 253:1214–1228
Bitra VSP, Womac AR, Chevanan N, Miu PI, Igathinathane C, Sokhansanj S, Smith DR (2009) Direct mechanical energy measures of hammer mill comminution of switchgrass, wheat straw, and corn stover and analysis of their particle size distributions. Powder Technol 193:32–45
Lu Z, Hu X, Lu Y (2018) Particle morphology analysis of biomass material based on improved image processing method. Int J Anal Chem 2017:1–9
Saad M, Sadoudi A, Rondet E, Cuq B (2011) Morphological characterization of wheat powders, how to characterize the shape of particles? J Food Eng 102:293–301
The authors acknowledge the help of the Advanced Analysis and Testing Center of Nanjing Forestry University and the China office of Occhio s.a.
The work in this paper is financially supported by the National Key Research and Development Program of China (2016YFD0600703).
Nanjing Forestry University, 159 Longpan Rd., Nanjing, 210037, China
Tao Ding, Jiafeng Zhao & Nanfeng Zhu
Holtrop & Jansma BV Qingdao Office, No.1 Shangma Section of Aodong Road, Qingdao, 266114, China
Chengming Wang
Tao Ding
Jiafeng Zhao
Nanfeng Zhu
TD analyzed the experimental data and drafted the manuscript. JZ performed the experiments and prepared the figures. NZ is the project leader and responsible for the experimental design and manuscript review. CW collected the MDF sanding dust samples and contributed to the image analysis. All authors read and approved the final manuscript.
Correspondence to Nanfeng Zhu.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Ding, T., Zhao, J., Zhu, N. et al. A comparative study of morphological characteristics of medium-density fiberboard dust by sieve and image analyses. J Wood Sci 66, 55 (2020). https://doi.org/10.1186/s10086-020-01896-x
DOI: https://doi.org/10.1186/s10086-020-01896-x
Sanding dust
|
CommonCrawl
|
Study of kerosene caustic wash process for jet fuel production
Ahmed Mohamed Selim Abdelhamid ORCID: orcid.org/0000-0001-6422-98571 &
Hamdy Abdel-Aziz Mustafa2
Caustic wash is one of many industrial processes that are used to produce jet fuel. In this study, an analysis of the key parameters of the kerosene caustic wash process was conducted to improve the total performance of the treatment process. The investigated parameters are caustic concentration (from 0.03 to 3.0 wt%), caustic volume (from 110% of theoretical to 250%), number of treatment stages (one and two stages), wash water type (demineralized water and alkaline soft water), and wash water volume (10% and 30% of kerosene feed volume). Results revealed that the reaction between sodium hydroxide and naphthenic acids is a diffusion-controlled chemical reaction. The diluted caustic solutions (0.5 wt%) are better than the concentrated ones (3 wt%). Higher excess caustic volume has a slight effect on kerosene acidity. Performing the caustic treatment process in one stage is sufficient, and the two-stage process has no effect on acidity. Washing caustic-treated kerosene with demineralized water (pH=7) has a slight adverse effect on kerosene acidity. Increasing the demineralized water volume results in a slight increase in the acidity of the treated kerosene. Wash water should be slightly alkaline (pH 7.5–8) to prevent the reverse reaction of sodium naphthenates back into naphthenic acid. Increasing wash water volume (more than 10 vol% of kerosene feed) has no noticeable effect on the water content of treated kerosene.
The aviation sector is a fast-growing transportation sector, although it faces big challenges today due to COVID-19. The worldwide airline operations consume annually around 1500–1700 million barrels of Jet A-1 fuel. Forecasts indicate that the aviation sector will grow at 4.8% per year until 2036. Airlines from all over the world must purchase quality and safe fuels, and hence, jet fuel must meet very restricted international specifications [1, 2]. International Norms establish the quality specifications of the jet fuel: ASTM D-1655 and DEF STAN 91-91. Table 1 lists the standard specifications for kerosene-type aviation turbine fuel (Jet A-1) [3,4,5].
Table 1 Standard specifications for kerosene-type aviation turbine fuel (Jet A-1) [3,4,5]
Many industrial processes are used to produce jet fuels with those specifications, based on the impurities in the kerosene. Among these processes are the caustic wash process, UOP caustic-free Merox process, Merichem Napfining and MERICAT processes, hydrotreating, and alternative renewable jet fuels [6,7,8,9,10,11,12].
The caustic wash process is limited to refineries that produce kerosene fractions which already meet the international jet fuel specifications except for the total acidity. The process consists of withdrawing a side-stream kerosene from the atmospheric crude distillation unit followed by stripping, cooling, caustic washing, water washing, salt drying, clay filtration, and final water separation. Total acidity is the only specification that is caustic-extractable. Other specifications such as aromatics, smoke point, sulfur content, and freezing point are not caustic-extractable and hence not affected by the caustic wash process [2, 6].
Water content is an important parameter that reflects the fuel purity. Water content, in the dissolved phase alone, does not affect fuel performance. However, water in any other phase could participate in aircraft incidents and accidents. The excess water content affects directly the fuel quality and the normal operation of the flight equipment, even severely endangering the flight safety. Free water can affect the aircraft's fuel system reliability and lead to operational delays and increased maintenance costs [13,14,15,16,17].
Many improvements have been done to enhance the caustic wash performance. The Fiber-Film Contactor employs non-dispersive contacting of the caustic and hydrocarbon phases. This prevents emulsion formation and minimizes caustic and water carryover. The contactor provides a large interfacial surface area which increases the mass transfer rate [2, 10]. The sodium hydroxide solution of ethanol was used as the acid removal reagent. This process was introduced to solve the problem of emulsion formation associated with aqueous sodium hydroxide [18].
In this study, an analysis of the key parameters of the kerosene caustic wash process was conducted to improve the total performance of the treatment process (minimizing caustic consumption, minimizing wash water consumption, and minimizing residual water carried-over in the treated product). The investigated parameters are caustic concentration, caustic volume, number of treatment stages, wash water type, and wash water volume. Focus is placed on reducing the total acidity of petroleum kerosene to meet Jet A-1 specifications.
In this study, the main target is to analyze the caustic wash process of petroleum kerosene fractions to produce jet fuel matching the international standard specifications of Jet A-1 (ASTM D-1655 and DEF STAN 91-91) [4, 5].
A sample of straight-run kerosene was taken from the atmospheric distillation unit. Table 2 summarizes the properties of this sample. Two types of wash water were used, demineralized water and alkaline soft water. Table 3 summarizes the properties of both types.
Table 2 Properties of kerosene sample
Table 3 Properties of wash water
Chemicals, reagents, tests, and analytical equipment
All chemicals used were of analytical higher grades. The total acidity test was carried out using the ASTM method D-3242 standard. The water content of kerosene was measured by Coulometric Karl Fischer Titration (Karl-Fischer Moisture Titrator MKC-520–KEM, Co.). The water content of the sample was measured by three parallel experiments, and the maximum value was reported [14]. Wash water pH and TDS were measured by Mettler-Toledo AG FiveEasyTM Plus FEB30.
Methodology of kerosene treatment
One liter of kerosene feed (Table 2) is mixed with a calculated volume of caustic solution and stirred together for 5 min with a 300 RPM laboratory mixer. The mixture is settled for 30 min for separation (by gravity) of the aqueous phase from the "treated" kerosene phase. Figure 1 illustrates a flow chart of the overall steps in the current study.
Flow chart of all steps in the current study
Effect of caustic concentration on the treatment process
Different caustic concentrations were used: 0.03 wt%, 0.05 wt%, 0.125 wt%, 0.25 wt%, 0.5 wt%, 1 wt%, and 3 wt%. The volume of the caustic solution depends on the caustic concentration. Table 4 indicates the volume of caustic solutions used in the treatment process. As the caustic concentration increases, the stoichiometric caustic volume decreases. Excess caustic is added to ensure a complete reaction. In this stage, 10% excess caustic (110% of the theoretical) is used. The effect of excess caustic will be studied later in section 2.5. The treatment process as per section 2.3 was performed to measure the total acidity of treated kerosene.
Table 4 Volume of caustic solution used in the treatment process
Effect of caustic volume on the treatment process
Kerosene sample was treated using the caustic solution with concentration 0.5 wt% and different excess volumes of caustic: 10%, 100%, 150%, 200%, and 250%. The total acidity of treated kerosene was then measured.
Effect of the number of treatment stages on the treatment process.
Kerosene sample was treated with caustic solution 1 wt% in two stages (using 10% excess caustic) to study the effect of the number of treatment stages on the effectiveness of the treatment process. The total acidity of the treated kerosene was measured and compared with the results of the one-stage process.
Effect of water wash on the treatment process
In the industrial plants, the caustic wash is normally followed by water wash to remove any entrained droplets of caustic solution that escapes with the treated kerosene and impairs the downstream systems. Here, caustic wash followed by water wash will be studied. Two types of wash water were used, demineralized water and alkaline soft water. Table 3 summarizes the properties of both types of wash water. Step (1) of caustic wash is carried out as per section 2.3 using the caustic solution with concentration 1 wt% and 10% excess caustic. Step (2) of water wash was carried out using the volume of water equal to 10% of kerosene feed (100 ml wash water per 1 l of kerosene feed).
Caustic-washed kerosene (from step 1) is washed with water by stirring together with a 300 RPM laboratory mixer for 5 min. The mixture is left for 30 min for separation of the aqueous phase (by gravity) from the "treated" kerosene. Total acidity and water content of treated kerosene were measured and recorded.
Effect of wash water volume percent on the treatment process
The volume of wash water has increased from 10 to 30%. The treatment process as per section 2.7 was performed. Total acidity and water content of treated kerosene are measured and compared with section 2.7.
Process chemistry
Sodium hydroxide readily reacts with naphthenic acids to form sodium naphthenate and water according to the following reaction [2, 18]:
$$ \mathrm{RCOOH}+\mathrm{NaOH}\rightleftarrows \mathrm{RCOONa}+{\mathrm{H}}_2 $$
(RCOOH represents naphthenic acids which consist of one or more saturated cyclic rings, alkylated at various positions, and a straight-chain carboxylated alkyl group)
Sodium hydroxide also reacts with H2S (if any) contained in the kerosene fraction in accordance with the following Eq. (2):
$$ 2\ \mathrm{NaOH}+{\mathrm{H}}_2\mathrm{S}\to {\mathrm{Na}}_2\mathrm{S}+2{\mathrm{H}}_2\mathrm{O} $$
The reaction of sodium hydroxide with naphthenic acids is a reversible reaction. This means that the operating parameters should be adjusted to keep the reaction in the forward direction.
Effect of sodium hydroxide (caustic) concentration on the treatment process
Figure 2a, b demonstrates the effect of caustic concentration on the acidity of treated kerosene. Diluted caustic solutions (with higher caustic volume) have more effect than the concentrated solutions (with less caustic volume). Table 4 indicates the volume of caustic solution associated with each concentration. The amount of NaOH molecules is the same in all solutions (27.5 mg), but concentration and volume are different.
a Effect of caustic concentration on kerosene acidity. b Effect of volume of caustic solution on kerosene acidity
As the caustic concentration increases, the stoichiometric caustic volume decreases. The reaction is more favorable with diluted solutions rather than concentrated ones. This behavior reflects that the reaction between sodium hydroxide and naphthenic acids is a diffusion-controlled chemical reaction.
The process of acids removal from kerosene in flow contactor or stirred tank mixer can be divided into two steps:
Diffusion of acids from kerosene (continuous phase) to the surface of droplets of the aqueous phase of sodium hydroxide (dispersed phase).
Reaction of acids with alkali in the droplets of the aqueous phase and removal of reaction products with the aqueous phase.
The diffusion step is controlled mainly by the surface area of droplets. The chemical reaction step in the droplets is mainly affected by the concentration of sodium hydroxide in the aqueous phase.
In industrial practice, a small volume of high concentration of NaOH aqueous solution is used (1–2 volumes of the aqueous solution to 100 volumes of kerosene). In this case, the surface area of droplets is very small, the diffusion rate is small (resistance is high), and the chemical reaction rate is high (resistance is small). Thus, the overall process is controlled by diffusion.
If the volume of the aqueous phase is increased by adding water only, the surface area of the dispersed phase increases, while the concentration of sodium hydroxide is decreased. This means the resistance of the diffusion step is decreased, while the resistance of reaction increases but the diffusion step is still controlling. This behavior continues with the dilution of NaOH solution, and the overall process of acid removal from kerosene is improved. At some point (optimum point of operation), the effect of the chemical reaction step becomes appreciable.
In our study, the point of maximum efficiency is at 5.5 volumes of the aqueous solution to 100 volumes of kerosene (using 110% of the theoretical amount of NaOH) and the efficiency is 91.8%. In actual refinery operations, using less volume of aqueous solution with a high concentration of NaOH (3 wt%), the efficiency is 63.6%.
For the given kerosene sample, the optimum caustic concentration is 0.5 wt% and using caustic solutions less than 0.5 wt% have a negligible effect on product acidity. From Table 4, the volume of caustic solution is 0.55% (by volume) of kerosene feed (with caustic concentration 0.5 wt%). In engineering applications, caustic solutions with 1–3 wt % are common according to kerosene feed acidity.
Calculation of the amount of NaOH and cost impact of the diluted solutions
The conventional caustic wash process is an economically attractive process, since no catalyst or any special chemicals. From Table 4, all the prepared caustic solutions contain 27.5 mg of NaOH (including 10% excess).
Process efficiency with caustic solution of 3 wt% concentration
$$ =\left(0.044-0.016\right)/0.044=63.6\%. $$
Process efficiency with caustic solution of 0.5 wt% concentration
$$ =\left(0.044-0.0036\right)/0.044=91.8\%. $$
Required amount of NaOH to attain the same efficiency with caustic solution of 3 wt% concentration = 27.5 × 91.8/63.6 = 39.7 mg (including 10% excess).
Saving in caustic consumption with diluted caustic solutions of 0.5 wt% concentration
$$ =\left(1-\left(27.5/39.7\right)\right)\times 100=30.7\% $$
Effect of excess caustic volume on the treatment process
Figure 3 indicates the effect of excess caustic (at constant concentration 0.5 wt%) on product acidity. As shown, using more excess caustic solution has a slight effect on the acidity. For the given kerosene sample, 10% excess caustic is sufficient for the treatment. It is not economical to use a very large excess caustic solution with a minor effect on acidity. If we tried to use less than 10% excess caustic solution, higher product acidity would appear (lower process efficiency).
Effect of excess caustic solution on acidity
Effect of the number of treatment stages on the treatment process
Figure 4 demonstrates the effect of a number of treatment stages on the acidity of treated kerosene. As shown, increasing the treatment stages has no effect. Therefore, one stage process is sufficient to remove acids.
Effect of number of treatment stages on acidity
Figure 5 indicates the effect of water wash on the acidity of treated kerosene. As shown, washing caustic-treated kerosene with water has a slight effect on the acidity. Using demineralized water (with pH=7) has a slightly adverse effect on kerosene acidity. Increasing the demineralized water volume (with respect to kerosene feed volume) results in a slight increase in the acidity of the treated kerosene.
Effect of water wash on kerosene acidity
On the other hand, using alkaline soft water (with pH=9.44) has a slightly positive effect on kerosene acidity. Increasing the alkaline soft water volume results in a slight decrease in the acidity of the treated kerosene.
The abovementioned behavior can be interpreted by the effect of wash water pH. Demineralized water has pH =7 which is lower than the pH of soft water (soft water pH=9.44). As more demineralized water is added, some sodium naphthenates convert to naphthenic acid by the reverse reaction (Eq. 1).
On the other hand, alkaline soft water contains some alkalinity (carbonate alkalinity, Table 3) due to the addition of lime solution in the water treatment plant. Carbonates can react with existing acids in kerosene and reduce the kerosene acidity. Adding more volume of the alkaline soft water (with higher pH) increases the forward reaction of naphthenic acid to sodium naphthenate, which reduces the acidity.
Figure 6 shows the effect of water wash on the water content of treated kerosene. Increasing wash water volume has no noticeable effect on water content. Both types of wash water have the same effect on water content. For the given kerosene sample, washing the caustic-treated kerosene with alkaline soft water (10% of kerosene feed) is sufficient for the treatment.
Effect of wash water (type and volume) on the water content of treated kerosene
Two main steps are involved in the reaction between sodium hydroxide and naphthenic acids: Diffusion step of acids to the surface of droplets of sodium hydroxide; and reaction step of acids with alkali inside the droplets and removal of reaction products with the aqueous phase.
The results revealed that diluted caustic solutions are better than the concentrated ones. Thus, the reaction between sodium hydroxide and naphthenic acids is a diffusion-controlled chemical reaction; as the volume of the aqueous phase is increased by dilution, the surface area of the dispersed phase increased, and resistance of diffusion step is decreased, the overall rate of chemical reaction increased.
For the given kerosene sample, the optimum caustic concentration is 0.5 wt%. The volume of caustic solution is 0.55% (by volume) of kerosene feed.
For the given kerosene sample, saving in caustic consumption with diluted caustic solutions of 0.5 wt% concentration is 30.7% compared with caustic solutions of 3 wt%.
Using more excess caustic solution has a slight effect on kerosene acidity. For the given kerosene sample, 10% excess caustic (110% of the theoretical) is sufficient.
Performing the caustic treatment process in one stage is sufficient and the two-stage process has no effect on acidity.
Washing caustic-treated kerosene with demineralized water (pH=7) has a slight adverse effect on kerosene acidity. Increasing the demineralized water volume results in a slight increase in the acidity of the treated kerosene. Wash water should be slightly alkaline (pH 7.5–8) to prevent the reverse reaction of sodium naphthenates back into naphthenic acid.
Increasing wash water volume has no noticeable effect on the water content of treated kerosene. For the given kerosene sample, washing the caustic-treated kerosene with alkaline soft water (10% of kerosene feed) is sufficient for the treatment. Both types of wash water have the same effect on water content.
All data presented and analyzed during the current study are reproducible with the provided information.
ASTM:
American Society for Testing and Materials
El-Araby R, Abdelkader E, El Diwani G, Hawash S (2020) Bio-aviation fuel via catalytic hydrocracking of waste cooking oils. Bull Natl Res Centre 44(1):1–9. https://doi.org/10.1186/s42269-020-00425-6
Forero P, Suarez FJ, Dupont A (1997) Caustic treatment of jet fuel streams. Petroleum Technol Quarterly 1997(Q1):43-48
Donkor A, Nyarko S, Asemani KO, Bonzongo J-C, Kyeremeh K, Ziwu C (2016) A novel approach for reduction of total acidity in kerosene based on alkaline rich materials readily available in tropical and sub-tropical countries. Egyptian J Petroleum 25(4):473–480. https://doi.org/10.1016/j.ejpe.2015.10.010
ASTM D-1655-20b Standard specification for aviation turbine fuels (2020). Annual Book of Standards. doi: https://doi.org/10.1520/D1655-20B
DEF STAN 91-91 Issue 7 (Amd. 3) Turbine fuel, aviation kerosine type, Jet A-1 NATO Code: F-35, Joint Service Designation: AVTUR (2015) Defence Equipment and Support, UK Defence Standardization, British Ministry of Defence, 2 February 2015
Meyers RA (2016) Handbook of petroleum refining processes. McGraw-Hill Education, New York
Radchenko E, Khavkin V, Kurganov V, Gulyaeva L, Laz'yan N (1993) Hydrogenation processes in jet fuel production. Chem Technol Fuels Oils 29(9):459–463. https://doi.org/10.1007/BF00723201
IsoTherming® Kerosene Hydrotreating Technology. https://cleantechnologies.dupont.com/technologies/isotherming/isothermingr-kerosene-hydrotreating-technology/. Accessed 18 Sept 2021
Budukva S, Eletskii P, Zaikina O, Sosnin G, Yakovlev V (2019) Secondary middle distillates and their processing. Petroleum Chem 59(9):941–955. https://doi.org/10.1134/S0965544119090044
Sweetening mercaptans in kerosene, jet fuel, middle distillate and condensate. https://www.merichem.com/technology/heavy-mercaptan-sweetening-with-mericat-j-and-mericat-ii/. Accessed 18 Sept 2021
Lin C-H, Wang W-C (2020) Direct conversion of glyceride-based oil into renewable jet fuels. Renewable Sustainable Energy Rev 132:110109. https://doi.org/10.1016/j.rser.2020.110109
Chen Y-K, Lin C-H, Wang W-C (2020) The conversion of biomass into renewable jet fuel. Energy 201:117655. https://doi.org/10.1016/j.energy.2020.117655
Oreshenkov A (2004) Accumulation of water in jet fuels. Mathematical modeling of the process. Chem Technol Fuels Oils 40(5):320–325. https://doi.org/10.1023/B:CAFO.0000046266.83408.d7
Wu N, Zong Z, Hu J, Ma J (2017) Mechanism of dissolved water in jet fuel. In: AIP Conference Proceedings, vol 1. AIP Publishing LLC, p 040014. https://doi.org/10.1063/1.4977286
U.S. Federal Aviation Administration Advisory Circular (1985) AC 20-125: Water in Aviation Fuels. U.S. Department of Transportation, Federal Aviation Administration, 12 October 1985
Zherebtsov VL, Peganova MM (2012) Water solubility versus temperature in jet aviation fuel. Fuel 102:831–834. https://doi.org/10.1016/j.fuel.2012.06.070
Dumbolov D, Lyapich E, Suslin M, Zaitseva A (2021) Application of the electrometric method to determine the free water content of jet fuels. Chem Technol Fuels Oils 57(1):65–71. https://doi.org/10.1007/s10553-021-01227-w
Shi L, Wang G, Shen B (2010) The removal of naphthenic acids from Beijiang crude oil with a sodium hydroxide solution of ethanol. Petroleum Sci Technol 28(13):1373–1380. https://doi.org/10.1080/10916460903058129
The authors thank all members of utilities and chemical laboratories in the Nasr Petroleum Company, especially Mr. Mohamed Meshry, for their support.
This study had no funding from any resource.
Nasr Petroleum Company, Suez, Egypt
Ahmed Mohamed Selim Abdelhamid
Faculty of Engineering, Cairo University, Cairo, Egypt
Hamdy Abdel-Aziz Mustafa
Hamdy Abdel-Aziz Mustafa was supervising the work. Ahmed Mohamed Selim Abdelhamid performed the lab experiments and wrote the manuscript. The author(s) read and approved the final manuscript.
Correspondence to Ahmed Mohamed Selim Abdelhamid.
Abdelhamid, A.M.S., Mustafa, H.AA. Study of kerosene caustic wash process for jet fuel production. J. Eng. Appl. Sci. 68, 26 (2021). https://doi.org/10.1186/s44147-021-00029-5
Total acidity
Caustic wash
|
CommonCrawl
|
A question on an assumption of space-time
"A four-dimensional differentiable (Hausdorff and paracompact) manifold $M$ will be called a space time if it possesses a pseudo-Riemannian metric of hyperbolic normal signature $(+,-,-,-)$ and a time orientation. There will be no real loss of generality in physical applications if we assume that $M$ and its metric are both $\mathcal{C}^{\infty}$ ."
The above is an excerpt is taken from this paper. I'd like to know how the assumption that $M$ and its metric are both $\mathcal{C}^{\infty}$ be made with out any real loss of generality in physical applications. Any intuitive answer is also appreciated.
general-relativity
Rajesh Dachiraju
Rajesh DachirajuRajesh Dachiraju
$\begingroup$ Possibly related: physics.stackexchange.com/q/1324/2451 physics.stackexchange.com/q/1628/2451 $\endgroup$ – Qmechanic♦ May 24 '11 at 18:40
Dear Rajesh, in reality, physics sometimes works with continuous functions that are not infinitely differentiable - for example look at the energy of the beam at atlas.ch (click at the "Status" button in the middle) when they ramp it up - there are all kinds of discontinuities.
But an arbitrary function that is smooth almost everywhere - and this is a description of functions that really covers everything that a physicist would use - may be approximated by infinitely differentiable functions with an arbitrary accuracy. So it doesn't really hurt if your theorems assume that all the spacetime fields including the metric tensor are infinitely differentiable; you may solve the situations involving functions that are not infinitely differentiable by taking a limit of the infinitely differentiable ones.
May we ask a physics question whether the fields in electromagnetism are infinitely differentiable? Well, we may but it is a meaningless question because the real world is not described by classical physics. So classical physics itself is just an approximation, so both "classical physics with all differentiable functions" and "classical physics with infinitely differentiable functions only" are just approximations of the reality, with none of them being more "physically real".
In quantum physics, we don't use classical fields but we may use wave functions. The time evolution is dictated by Schrödinger's equations - but may we ask whether $\psi(x,y,z)$ should be an infinitely differentiable function of $x,y,z$? Well, in this case, we usually don't make this assumption. Instead, we use all $L^2$ functions which are much more natural at the level of the Hilbert space, Fourier transforms, and so on. The $L^2$ condition surely doesn't require infinite differentiability.
On the other hand, a finite expectation value of the kinetic energy does force $\psi(x,y,z)$ to be a continuous function. Nevertheless, the infinite differentiability condition isn't ever natural in quantum mechanics.
I want to emphasize that all these extra conditions - whether something should be smooth or infinitely smooth etc. - are only a matter of mathematical taste. There can't exist an operational "physics" way to determine whether the world allows functions that are not infinitely differentiable. It's because the infinitely smooth and not-infinitely smooth functions can mimic each other with an arbitrary accuracy but the accuracy of any measurement in physics is always limited. So the restrictions on the "nice behavior" of the mathematical functions is always a matter of mathematical taste.
Exotic differentiable structures could be counterexamples to what I just wrote - because they're "qualitatively" different and their pathological behavior is the very point of their existence (so in some important sense, the ability of them to mimic the normal functions disappears) - but their role in physics remains very confusing and limited as of today.
Luboš MotlLuboš Motl
Not the answer you're looking for? Browse other questions tagged general-relativity or ask your own question.
Is the world $C^\infty$?
Is the assumption that space-time has to be a continuum just a matter of mathematical taste?
Is the curvature of space-time a smooth function everywhere ? (except at black holes)
Wormholes & Time Machines - for *experts* in GR/maths
Is spacetime symmetry a gauge symmetry?
Proper mass and space-time wrap question
Reconstructing Minkowski spacetime from its Killing-algebra
What physically determines the point-set topology of a spacetime manifold?
How exactly does "classical physics is continuous" posit "spacetime is a set with a certain topology"?
About physical meaning of Hausdorff, Second Countable and Paracompact conditions of Manifold Theory
|
CommonCrawl
|
General question about new objective function W using the simplex method
In regards to the two-phase simplex method; When creating a new objective function that consists the sum of the constraint(s) with artificial variables, I am told that if the Min value of (wmin) w is >0 it is infeasible and when w(min)=0 we can discard w and move onto phase 2. However what if w(min)<0? I found this to be the case when trying to solve
Maximise: z=x1+x2
Subject to: x2<=8
-x1+x2>=-4
x1+x2<=12
linear-programming simplex two-phase-simplex
BigAl1992
BigAl1992BigAl1992
$\begingroup$ Then show the steps till your case came up. $\endgroup$ – callculus Oct 8 '15 at 6:56
In the first phase of the two-phase method, we add non-negative artificial variables, so $w_{min} < 0$ is impossible. Your example can't illustrate the problem that you raised.
I'll modify your sample LPP so that I can answer your question about the two-phase method by giving you the motivation to add $w$. You may skip the next section to my modified problem.
Your LPP
To find a basic feasible solution to your problem, we transform it into the standard form.
\begin{array}{ccccc} \max &z=&x_1&+x_2 & \\ \text{s.t.}& & & \phantom{+}x_2 &\le 8 \\ & & -x_1&+x_2&\ge-4 \\ & & x_1&+x_2&\le12 \\ \end{array}
Note that the slack/surplus variables $s_1,s_2,s_3$ are non-negative.
\begin{array}{ccclc} \max &z=&x_1&+x_2 & \\ \text{s.t.}& & & \phantom{+}x_2 + s_1 &= 8 \\ & & -x_1&+x_2 - s_2&=-4 \\ & & x_1&+x_2+s_3&= 12 \\ & & & s_1,s_2,s_3 &\ge 0 \end{array}
We'll have no difficulty in writing the simplex tableau of this problem.
\begin{array}{ccclc} \max &z=&x_1&+x_2 & \\ \text{s.t.}& & & \phantom{+}x_2 + s_1 &= 8 \\ & & x_1&-x_2 + s_2&=4 \\ & & x_1&+x_2+s_3&= 12 \\ & & & s_1,s_2,s_3 &\ge 0 \end{array}
\begin{array}{rrrrrr|l} & x_1 & x_2 & s_1 & s_2 & s_3 & \\ \hline s_1 & 0 & 1 & 1 & 0 & 0 & 8 \\ s_2 & 1 & -1 & 0 & 1 & 0 & 4 \\ s_3 & 1 & 1 & 0 & 0 & 1 & 12 \\ \hline & -1 & -1 & 0 & 0 & 0 & 0 \\ \end{array}
We see that $(s_1,s_2,s_3)=(8,4,12)$ is a basic feasible solution, so the two-phase method is not needed. That why callculus asks you to show the steps.
Modified LPP
To let you see that why $w_{min}<0$ is impossible, I changed the '$\ge$' sign in the second constraint to '$\le$'.
\begin{array}{ccccc} \max &z=&x_1&+x_2 & \\ \text{s.t.}& & & \phantom{+}x_2 &\le 8 \\ & & -x_1&+x_2&\color{red}{\le}-4 \\ & & x_1&+x_2&\le12 \\ \label{LPP} \tag{*} \end{array}
To apply the simplex algorithm, we first need a basic feasible solution, which is difficult to guess especially for LPP having lots of decision variables and constraints. Thus, we add slack/surplus variables and transform all inequalities into equalities to make it easier.
\begin{array}{cccll} \color{grey}{\max} &\color{grey}{z=}&\color{grey}{x_1}&\color{grey}{+x_2} & \text{we don't care about this}\\ \text{s.t.}& & & \phantom{+}x_2 + s_1 &= 8 \\ & & -x_1&+x_2 + s_2&=-4 \\ & & x_1&+x_2+s_3&= 12 \\ & & & s_1,s_2,s_3 &\ge 0 \end{array}
Why add $w$, and why $w$ need to be non-negative?
Since in the first phase, we aim at finding a feasible solution to the LPP, we don't care about the optimality. If we dropped the last constraint $s_1,s_2,s_3\ge0$, the most obvious solution would be $x_1=x_2=0$, $s_1=8$, $\color{red}{s_2=-4}$, $s_3=12$. (I used past sentence here like "If I were you, ..." to indicate the impossibility of the condition.)
(For a computer,) a possible way to get a feasible solution is to add an (non-negative) artificial variable $w(=4)$ to the second constraint, then use pivot operations (something that a machine can do) to reduced it to $w=0$ (so that $w$ vanishes).
\begin{array}{cccll} \color{grey}{\max} &\color{grey}{z=}&\color{grey}{x_1}&\color{grey}{+x_2} & \text{we don't care about this}\\ \text{s.t.}& & & \phantom{+}x_2 + s_1 &= 8 \\ & & x_1&-x_2 - s_2 \color{red}{+w}&=4 \\ & & x_1&+x_2+s_3&= 12 \\ & & & s_1,s_2,s_3 \color{red}{,w} &\ge 0 \end{array}
Initially, $w=4$, and we try to make $w=0$. (i.e. "minimize" $w$).
\begin{array}{cccll} \min &z=&\color{red}w& & \text{we do care this}\\ \text{s.t.}& & & \phantom{+}x_2 + s_1 &= 8 \\ & & x_1&-x_2 - s_2 \color{red}{+w}&=4 \\ & & x_1&+x_2+s_3&= 12 \\ & & & s_1,s_2,s_3 \color{red}{,w} &\ge 0 \end{array}
To let others understand that how the $w$ can be discarded when it equals zero, I'll do the pivot operations here.
\begin{equation*} \begin{array}{rrrrrrr|l} & x_1 & x_2 & s_1 & s_2 & s_3 & w & \\ \hline s_1 & 0 & 1 & 1 & 0 & 0 & 0 & 8 \\ w & 1 & -1 & 0 & -1 & 0 & 1 & 4 \\ s_3 & 1 & 1 & 0 & 0 & 1 & 0 & 12 \\ \hline & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\ \end{array} \end{equation*}
Make it a simplex tableau.
\begin{equation*} \begin{array}{rrrrrrr|l} & x_1 & x_2 & s_1 & s_2 & s_3 & w & \\ \hline s_1 & 0 & 1 & 1 & 0 & 0 & 0 & 8 \\ w & 1^* & -1 & 0 & -1 & 0 & 1 & 4 \\ s_3 & 1 & 1 & 0 & 0 & 1 & 0 & 12 \\ \hline & 1 & -1 & 0 & -1 & 0 & 0 & 4 \\ \end{array} \end{equation*}
\begin{equation*} \begin{array}{rrrrrrr|l} & x_1 & x_2 & s_1 & s_2 & s_3 & w & \\ \hline s_1 & 0 & 1 & 1 & 0 & 0 & 0 & 8 \\ x_1 & 1 & -1 & 0 & -1 & 0 & 1 & 4 \\ s_3 & 0 & 2 & 0 & 2 & 1 & -1 & 8 \\ \hline & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\ \end{array} \end{equation*}
Therefore, $(x_1,x_2,s_1,s_2,s_3)=(4,0,8,0,8)$ is a basic feasible solution to the original LPP $\eqref{LPP}$. Here, $w=0$ can be discarded. In fact, in the last simplex tableau, the column $w$ can be omitted since it will never enter the basis again.
Now, it is clear why $w_{min} > 0$ implies that the feasible region is empty.
GNUSupporter 8964民主女神 地下教會GNUSupporter 8964民主女神 地下教會
Not the answer you're looking for? Browse other questions tagged linear-programming simplex two-phase-simplex or ask your own question.
What to do about equality constraints in the Simplex Tableau method
Canonical form simplex method
What is the initial tableau for simplex method with big M method for this problem?
Linear Programming - The Big M Method - Proof questions
Two phase simplex when constraint is the same as minimize function
Why do we introduce artificial variable in two phase simplex method?
Primal-dual correspondence in the simplex method
How to solve the following equations using simplex method?
Simplex Algorithm, determining Two Phase is required and choice of artificial variables
Simplex method using two phase
|
CommonCrawl
|
Search Results: 1 - 10 of 219354 matches for " Adaeze C. Orioha "
Biorthogonality in $\mathcal A$-Pairings and Hyperbolic Decomposition Theorem for $\mathcal A$-Modules
Patrice P. Ntumba,Adaeze C. Orioha
Mathematics , 2009,
Abstract: In this paper, as part of a project initiated by A. Mallios consisting of exploring new horizons for \textit{Abstract Differential Geometry} ($\grave{a}$ la Mallios), \cite{mallios1997, mallios, malliosvolume2, modern}, such as those related to the \textit{classical symplectic geometry}, we show that results pertaining to biorthogonality in pairings of vector spaces do hold for biorthogonality in pairings of $\mathcal A$-modules. However, for the \textit{dimension formula} the algebra sheaf $\mathcal A$ is assumed to be a PID. The dimension formula relates the rank of an $\mathcal A$-morphism and the dimension of the kernel (sheaf) of the same $\mathcal A$-morphism with the dimension of the source free $\mathcal A$-module of the $\mathcal A$-morphism concerned. Also, in order to obtain an analog of the Witt's hyperbolic decomposition theorem, $\mathcal A$ is assumed to be a PID while topological spaces on which $\mathcal A$-modules are defined are assumed \textit{connected}.
Investigation of Relationship between Sociodemographic Factors and HIV Counseling and Testing (HCT) among Young People in Nigeria [PDF]
Adaeze Oguegbu
Advances in Infectious Diseases (AID) , 2016, DOI: 10.4236/aid.2016.61004
Abstract: The main purpose of this study was to examine the association between sociodemographic factors (gender, place of residence, level of education, geopolitical zone, and socioeconomic status) and HCT uptake among young people in Nigeria. The study is a quantitative research guided by one research question and one hypothesis. The target population comprised young people in Nigeria ages 15 to 24 years because the focus of this study was to identify the factors affecting HCT uptake among young people in this age cohort. The representative sample was obtained from the updated master sample frame of rural and urban zones developed by the National Population Commission in Nigeria. This master sample frame was a national survey that comprises all 36 states in Nigeria. Probability sampling technique was used to obtain a sample of 10,091 respondents (ages 15 to 24 years) for the study. The multistage cluster sampling was used to select suitable young people with known probability. Data were collected throughout Nigeria between September and December 2012 from 32,543 households (rural = 22,192; urban = 10,351) using structured and semi-structured questionnaires. The individual questionnaires asked about household characteristics, background characteristics of the respondents. Data were analyzed by inputting them into SPSS v21.0 for analysis and then coded them for each participant. The data were summed using descriptive statistics. Frequencies and percentages; measures of central tendencies were used to answer the research question while nonparametric test such as chi-square was used to analyze non-normally distributed data at 0.5 level of significance. Results of data analysis indicated that the sociodemographic variables of gender, place of residence, level of education, geopolitical zone, and SES were significantly associated with HCT uptake. Among others, it was recommended that examining the efficacy of HCT treatments in Nigeria, along with conducting a demographic analysis of the at-risk population, could be beneficial in informing the authorities who are responsible for allocating finite medical resources.
Relationship between Sexual Risk Behaviors and HIV Counseling and Testing (HCT) Uptake among Young People in Nigeria [PDF]
Adaeze Oguegbu, Frazier Beatty
Health (Health) , 2016, DOI: 10.4236/health.2016.85049
Abstract: This study examined the relationship between sexual risk behavior and HIV counselling and testing uptake among young people in Nigeria. Probability sampling technique was used to obtain a sample of 10,091 respondents (ages 15 to 24 years) for the study. The multistage cluster sampling was used to select suitable young people with known probability. Data were collected throughout Nigeria between September and December 2012 from 32,543 households (rural = 22,192; urban = 10,351) using structured and semi-structured questionnaires. The data were summed using descriptive statistics. Frequencies and percentages; measures of central tendencies were used to answer the research question while nonparametric test such as chi-square was used to analyze non-normally distributed data at 0.5 level of significance. Results of data analysis indicated that sexual risk behaviors comprised three variables: sex with multiple partners, intergenerational sex (sex with partners 10 years older), and transactional sex. The results of the chi-square test of association between sex with multiple partners and HCT uptake showed that there was no statistically significant relationship between sex with multiple partners and HCT uptake among young people ages 15 to 24 years in Nigeria. It was among others recommended that sexually active young people in Nigeria should use protection against HIV infection.
Relationship between HIV Counseling and Testing (HCT) Awareness and HCT Uptake among Young People in Nigeria: Implications for Social Change [PDF]
World Journal of AIDS (WJA) , 2016, DOI: 10.4236/wja.2016.64016
Abstract: This study examined the relationship between HIV counselling and testing (HCT) awareness and HCT uptake among young people in Nigeria and their implications for social change. The study is a quantitative research guided by one research question and one hypothesis. The target population comprised young people in Nigeria ages 15 to 24 years because the focus of this study was to identify the factors affecting HCT uptake among young people in this age cohort. The representative sample was obtained from the updated master sample frame of rural and urban zones developed by the National Population Commission in Nigeria. This master sample frame was a national survey that comprised all 36 states in Nigeria. Probability sampling technique was used to obtain a sample of 10,091 respondents (ages 15 to 24 years) for the study. The multistage cluster sampling was used to select suitable young people with known probability. Data were collected throughout Nigeria between September and December 2012 from 32,543 households (rural = 22,192; urban = 10,351) using structured and semi-structured questionnaires. The individual questionnaires asked about household characteristics, background characteristics of the respondents. Data were analyzed by inputting them into SPSS v21.0 for analysis and then coded them for each participant. The data were summed using descriptive statistics. Frequencies and percentages; measures of central tendencies were used to answer the research question while nonparametric tests such as chi-square were used to analyze non-normally distributed data at 0.5 level of significance. The results of the chi-square test of association between HCT awareness and HCT uptake showed that there was a statistically significant relationship between HCT awareness and HCT uptake among young people ages 15 to 24 years in Nigeria, X2 (1, n = 8916) = 306.66, p < 0.001. In other words, knowledge of the availability of HCT services may have influenced the possibility that the participants would use them. Among others, it was recommended that government should examine the efficacy of HCT treatments in Nigeria, along with conducting a demographic analysis of the at-risk population.
Abstract Geometric Algebra. Orthogonal and Symplectic Geometries
PP Ntumba,Ac Orioha
Abstract: Our main interest in this paper is chiefly concerned with the conditions characterizing \textit{orthogonal and symplectic abstract differential geometries}. A detailed account about the sheaf-theoretic version of the \textit{symplectic Gram-Schmidt theorem} and of the \textit{Witt's theorem} is also given.
Spectrophotometric Data in Human Immunodeficiency Virus (HIV)-Antiretroviral Drug Coated Blood Interactions [PDF]
Okwuchukwu Ani, Adaeze Ani, Jeremiah Chukwuneke
Journal of Biosciences and Medicines (JBM) , 2015, DOI: 10.4236/jbm.2015.38005
Abstract: The spectrophotometric data in the interactions between the Human immunodeficiency virus (HIV) and blood cells treated with antiretroviral drug were collected to be used to show the effects of antiretroviral drugs on the absorbance characteristics of HIV infected and uninfected blood. The methodology involved the serial dilution of the five different antiretroviral drugs (two HAART/FDC and three single drugs) and the subsequent incubation with the blood samples collected from ten HIV infected persons who had not yet commenced treatment with the antiretroviral drugs, ten HIV infected persons who had already commenced treatment with the antiretroviral drugs, and ten HIV negative persons, for the absorbance measurements using a digital Ultraviolet Visible MetaSpecAE1405031Pro Spectrophotometer. The peak absorbance data for various interacting systems were measured. These were used to show that the antiretroviral drug had the effect of increasing the peak absorbance values of both the uninfected and infected blood components, i.e., the drugs were made able to increase the light absorption capacity of the blood cells. The use of the findings of this work in drug design may be expected to yield good results.
Mother-to-child transmission of HIV: the pre-rapid advice experience of the university of Nigeria teaching hospital Ituku/Ozalla, Enugu, South-east Nigeria
Ngozi S Ibeziako, Agozie C Ubesie, Ifeoma J Emodi, Adaeze C Ayuk, Kene K Iloh, Anthony N Ikefuna
BMC Research Notes , 2012, DOI: 10.1186/1756-0500-5-305
Abstract: A retrospective study, involving HIV exposed infants seen at the pediatric HIV clinic of UNTH between March 2006 and September 2008. Relevant data were retrieved from their medical records. The overall rate of mother to child transmission of HIV in this study was 3.9% (95% CI 1.1%- 6.7%). However, in children breastfed for 3?months or less, the rate of transmission was 10% (95% CI ?2.5%-22.5%), compared to 3.5% (95% CI 0.5%-6.5%) in children that had exclusive replacement feeding.This retrospective observational study shows a 3.9% cumulative rate of mother-to-child transmission of HIV by 18?months of age in Enugu. Holistic but cost effective preventive interventions help in reducing the rate of mother-to-child transmission of HIV even in economically-developing settings like Nigeria.The first documented case of Acquired Immune Deficiency Syndrome (AIDS) in Nigeria was in 1986 in a 13?year old child in Calabar, Cross River State [1]. Since then, children have continued to remain vulnerable to this epidemic in Nigeria. Children can be infected with the virus through mother-to-child transmission (MTCT), blood transfusion, unprotected sex and through the use of non-sterile sharp objects [1,2]. MTCT is the most common route and is responsible for as much as 70 to 95% [3-6] of the infection in the pediatric age group. The next most common route of HIV transmission in children living in economically-developing countries is blood transfusion.[7,8]. This route accounts for about 5 to 20% of pediatric AIDS [3,4].MTCT can occur in utero, during labor and delivery, and postnatally through breastfeeding. A number of risk factors for MTCT of HIV have been documented. The risk factors associated with transmission during labor are prolonged rupture of uterine membrane for more than 4 hours, prolonged labor, mixing of maternal and fetal blood which happens more with tears and episiotomies [5,9]. The risk factors associated with transmission post-natally are breastfeeding and mixed f
Assessment of Immunization Status, Coverage and Determinants among under 5-Year-Old Children in Owerri, Imo State, Nigeria [PDF]
Chukwuma B. Duru, Anthony C. Iwu, Kenechi A. Uwakwe, Kevin C. Diwe, Irene A. Merenu, Chima A. Emerole, Chioma A. Adaeze, Chinwe U. Onyekuru, Obinna Ihunnia
Open Access Library Journal (OALib Journal) , 2016, DOI: 10.4236/oalib.1102753
Background: Immunization coverage in different parts of the country varies widely despite efforts to improve the services. The immunization status of children is dependent on the dynamics of vaccination uptake which is complex and involves the interplay of different associated factors. Aim: To determine the immunization coverage, status and the determinants in under 5-year-old children in Owerri municipal, Imo State. Methods: The study was a community based cross-sectional study involving 420 women and 743 under 5-year-old children. A multistage sampling technique was employed and data were collected using a pretested, semi structured interviewer administered questionnaire. Data were analysed using a computer software SPSS-IBM version 20. Results: It revealed that 63.6% and 88.9% of children less than 12 months and between 12 - 59 months respectively according to households were fully immunized. The bivariate analysis showed statistically significant associations between the immunization status of the children and place of birth delivery (p < 0.0001), maternal age (p < 0.0001), level of maternal education (p < 0.01), level of maternal knowledge (p < 0.0001), religion (p < 0.05) and ethnicity (p < 0.01). Significant predictors of being fully immunized were; maternal age 25 - 29 years old (OR = 2.1), children aged 12 - 59 months (OR = 4.6), mother having tertiary education (OR = 5.4), being a Christian Catholic (OR = 12.5), hospital births (OR = 25.2) and good level of maternal knowledge (OR = 37.7). Conclusion: Immunization coverage is relatively high but not optimal among the studied population and thus there is a need to develop strategies aimed at achieving full immunization coverage as this is critical in the reduction of childhood morbidity and mortality.
Level of Adherence to Cytotoxic Drugs by Breast Cancer Patients' in Lagos State University Teaching Hospital [PDF]
Popoola Abiodun, Samira Makanjuola, Sowunmi Anthonia, Igwilo Adaeze, Mobolaji Oludara, Ibrahim Nasir, Omodele Foluso
Journal of Cancer Therapy (JCT) , 2015, DOI: 10.4236/jct.2015.64041
Background: Breast cancer is one of the most common malignant diseases in women and adjuvant combination chemotherapy has been shown to reduce mortality from this disease. Adherence to medical treatment is a multifaceted issue that can substantially alter the outcomes of therapy. Patient non-adherence to chemotherapy is the ultimate barrier to the treatment effectiveness. Objective: This study was carried out to determine the relationship between cancer chemotherapy adherence and breast cancer staging, patient's perception of cancer care and patient's socio-demographic characteristics. Material and method: This was a cross sectional study selection of respondents and was based on simple random sampling technique, 184 patients were interviewed and data was collected using a semi-structured questionnaire to obtain socio-demographic data, adherence data, and facility-related information. Results: There was a significant association between marital status and non-adherence (P?= 0.013). Both separated and single subjects had higher proportion of non-adherence compared with married subjects. Analysis of perception of chemotherapy care revealed a significant association between the satisfaction score and non-adherence, with non-adherent patients showing higher scores or being less satisfied. The quality of service (P?= 0.0052); rating of needs been met (P?= 0.0079); rating on whether the services helped the subject (P?= 0.0405); rating on the general satisfaction of the services provided (P?= 0.0115); and rating on whether subject would seek help again (P?= 0.0320) all had a significant association with non-adherence. Conclusion: The awareness of oncologist and patient of the problem of non-adherence and communication regarding the importance of adherence to therapy may improve health outcomes.
Thrombospondin-1 Interacts with Trypanosoma cruzi Surface Calreticulin to Enhance Cellular Infection
Candice A. Johnson, Yulia Y. Kleshchenko, Adaeze O. Ikejiani, Aniekanabasi N. Udoko, Tatiana C. Cardenas, Siddharth Pratap, Mark A. Duquette, Maria F. Lima, Jack Lawler, Fernando Villalta, Pius N. Nde
Abstract: Trypanosoma cruzi causes Chagas disease, which is a neglected tropical disease that produces severe pathology and mortality. The mechanisms by which the parasite invades cells are not well elucidated. We recently reported that T. cruzi up-regulates the expression of thrombospondin-1 (TSP-1) to enhance the process of cellular invasion. Here we characterize a novel TSP-1 interaction with T. cruzi that enhances cellular infection. We show that labeled TSP-1 interacts specifically with the surface of T. cruzi trypomastigotes. We used TSP-1 to pull down interacting parasite surface proteins that were identified by mass spectrometry. We also show that full length TSP-1 and the N-terminal domain of TSP-1 (NTSP) interact with T. cruzi surface calreticulin (TcCRT) and other surface proteins. Pre-exposure of recombinant NTSP or TSP-1 to T. cruzi significantly enhances cellular infection of wild type mouse embryo fibroblasts (MEF) compared to the C-terminal domain of TSP-1, E3T3C1. In addition, blocking TcCRT with antibodies significantly inhibits the enhancement of cellular infection mediated by the TcCRT-TSP-1 interaction. Taken together, our findings indicate that TSP-1 interacts with TcCRT on the surface of T. cruzi through the NTSP domain and that this interaction enhances cellular infection. Thus surface TcCRT is a virulent factor that enhances the pathogenesis of T. cruzi infection through TSP-1, which is up-regulated by the parasite.
|
CommonCrawl
|
Production of rhamnolipid biosurfactants in solid-state fermentation: process optimization and characterization studies
Shima Dabaghi1,
Seyed Ahmad Ataei1 &
Ali Taheri2
BMC Biotechnology volume 23, Article number: 2 (2023) Cite this article
Rhamnolipids are a group of the extracellular microbial surface-active molecules produced by certain Pseudomonas species with various environmental and industrial applications. The goal of the present research was to identify and optimize key process parameters for Pseudomonas aeruginosa PTCC 1074s synthesis of rhamnolipids utilizing soybean meal in solid state fermentation. A fractional factorial design was used to screen the key nutritional and environmental parameters to achieve the high rhamnolipid production. Response surface methodology was used to optimize the levels of four significant factors.
The characterization of biosurfactant by TLC, FT-IR and H-NMR showed the rhamnolipids presence. In the optimum conditions (temperature 34.5 °C, humidity 80%, inoculum size 1.4 mL, and glycerol 5%), the experimental value of rhamnolipid production was 19.68 g/kg dry substrate. The obtained rhamnolipid biosurfactant decreased water's surface tension from 71.8 ± 0.4 to 32.2 ± 0.2 mN/m with a critical micelle concentration of nearly 70 mg/L. Additionally, analysis of the emulsification activity revealed that the generated biosurfactant was stable throughout a broad pH, temperature, and NaCl concentration range.
The current study confirmed the considerable potential of agro-industrial residues in the production of rhamnolipid and enhanced the production yield by screening and optimizing the significant process parameters.
Biosurfactants are a group of secondary metabolites which can reduce the interfacial tension of different phases with various degrees of polarities and hydrogen bonds in terms of their amphiphilic nature, as well as hydrophobic and hydrophilic moieties [1, 2]. Various microorganisms, including bacteria, fungi, and yeasts, have the ability to produce these molecules during their stationary growth [3, 4]. The biosurfactants are divided into many classes based on their chemical makeup and microbiological source, including glycolipids, phospholipids, lipopeptides, natural lipids, polymeric surfactants, and particulate surfactants [5,6,7,8,9]. The glycolipid biosurfactants "rhamnolipids" produced by certain Pseudomonas species stand apart in terms of their good emulsification potentials, remarkable physicochemical properties, low toxicity, and antimicrobial activities [10]. The aforementioned advantages made them interesting compounds for the application in a broad range of different areas, such as Bioremediation, enhanced oil recovery (EOR), pharmaceuticals, cosmetics, and the food industry as a new class of renewable resources-based surfactants. [11, 12]. Next to the sophorolipids, which can be found in some cleansing agents, rhamnolipids are most likely the next generation of biosurfactants that will reach the market [13].
However, it is necessary to overcome several limitations, including low efficiency of production, high substrate cost, and considerable costs of separation and purification processes to produce biosurfactants on a large scale [5]. Using the agro-industrial residues and wastes known as biomass and enhancement of production efficiency by screening and optimizing of effective parameters can be efficient [14, 15]. Agro-industrial residues are given economic value by being used to make value-added goods like biosurfactants, which solves the issues with the environment and trash disposal [16, 17]. Many researchers used different types of biomasses for the rhamnolipids production using submerged fermentation (SMF) [18,19,20]. Nevertheless, a large amount of foam is produced in this method, increasing contamination risk and reducing process productivity [21, 22]. The production of biological products using solid state fermentation (SSF), which involves the growth of microorganisms on a solid substrate in the absence of free water, eliminates the problem of foam formation [10]. Thus, value-added products can be produced with lower costs, higher water and energy storage, and a decrease in the waste and wastewater produced [23, 24]. There are few reports on the production of rhamnolipids by solid-state fermentation. Table 1 shows previous reports of rhamnolipids production in SSF using agro-industrial residues.
Table 1 Previous reports of rhamnolipids production in SSF using agro-industrial residues
Screening and optimization of significant process parameters are the effective approaches for the enhancement of bioprocess yield. One of the most practical techniques to screen the main factors from a large number of variables considering interaction effects among them is fractional factorial design (FFD) which involves running the partial number of a full factorial design [28]. Response surface method (RSM), a multivariate statistical technique, has long been used in chemical engineering and agro-biotechnology to enhance the outcomes of initial screening as important factors [29]. RSM combines mathematical and statistical techniques to evaluate significant factors' effect which allows the optimization to be effectively conducted [30]. The rhamnolipid production from soybean meal as the renewable carbon source by Pseudomonas aeruginosa (PTCC 1074) under solid state fermentation was studied for the first time. A two-step experimental design procedure using FFD and RSM was used to screen and optimize different nutritional and environmental parameters to improve the efficiency.
The production kinetics of rhamnolipids
In the first step, the production kinetics of rhamnolipids from Pseudomonas aeruginosa PTCC 1074 on soybean meal with an initial humidity of 70% inoculated by 1 mL of inoculum at the temperature of 30 °C was investigated. As can be observed in Fig. 1A, rhamnolipid production linearly increased with the increase in process time to 10 days and reached 14.63 (g/kg substrate), after which it remained constant. Therefore, the duration of 10 days was considered for the completion of fermentation process.
A Kinetics of rhamnolipids production: non-optimal conditions (solid line—▲— and optimal conditions (dash line–––), B Surface tension values (mN/m) versus rhamnolipid concentration (mg/mL)
Screening of significant variables by FFD
The fractional factorial design enables the identification of variables which have a significant role in rhamnolipid production. The FFD experimental findings (Table 2) showed a broad range in biosurfactant generation. Since these variations made interpretation of the findings difficult, screening the key process parameters to achieve higher production was important. A first-order model represented rhamnolipid production as the response variable (y) in terms of coded factors as a function of eight independent variables which were pH (A), concentration of glycerol (B), amount of MgSO4.7H2O (C), humidity (D), temperature (E), size of substrate (F), inoculum size (G) and amount of NaNO3 (H).
$$y^{{}} = 13.07 + 0.38A + 0.91B - 0.02C + 1.75D + 1.84E + 0.032F + 1.67G - 0.070H$$
Table 2 FFD design matrix for variables with coded values along with the experimental and predicted responses
The adequacy of regression models obtained to fit the experimental data was tested by analyzing variance (ANOVA), and the coefficient of determination (R2). ANOVA showed that the obtained model from FFD with F-value of 37.67and P-value < 0.0001 is significant. The value of 0.9320 for R2 ensured a good agreement of the first- order model and the experimental data. The humidity, temperature, and inoculum size with P-values < 0.0001 and glycerol concentration with P-value of 0.0333 strongly affected rhamnolipid production. Meanwhile, all other variables with P-values > 0.05 had no significant effect on response. By eliminating the parameters which have no significant effect, the model took the form bellow:
$$y^{{}} = 13.07 + 0.91B + 1.75D + 1.84E + 1.67G$$
The method of steepest ascent for determining optimum region
The coefficient of significant variables (B, D, E, and G) was positive because of the first-order model shown in Eq. (6); this indicates that increasing glycerol concentration, humidity, temperature, and inoculum size had a positive impact on the formation of rhamnolipids. The steepest ascent experimental design and corresponding responses were presented in Table 3. The mean particle size of soybean meal was 1.5 mm and the levels of other parameters were fixed at the center of FFD, as shown in Table 3. To move away from the center of FFD as the origin point of the steepest ascent path, the basic step sizes of variables were considered 0.25, 0.33, 0.4, and 0.3 units of B, D, E, and G, respectively. The maximum production of rhamnolipids was seen at the run three (Table 3).
Table 3 Experimental design and results of the steepest ascent path
Central composite design and response surface methodology
Following the optimization with the steepest ascent method, RSM using CCD was used to determine the actual optimum levels of significant factors and study the interactions. A total of 30 experiments were performed in duplicate; the levels of independent variables and experimental plans were given in Table 4.
Table 4 CCD design matrix with coded values along with the experimental and predicted rhamnolipid production
Empirical data were fitted by following second order polynomial equation:
$$\begin{aligned} {\text{Y}} = & - {19}.{62} + 0.{4}0{\text{A}} + 0.{\text{47B}} + 0.{\text{68C}} + 0.0{\text{24D}} - {9}.{\text{37E}}00{\text{3AB}} + 0.{\text{52AC}} - 0.{\text{48AD}} - 0.{\text{2BC}} + {1}.0{\text{2BD}} \\ & - 0.0{\text{44CD}} - {1}.{\text{36A}}^{{2}} - 0.{\text{72B}}^{{2}} - 0.{\text{31C}}^{{2}} - {1}.{\text{13D}}^{{2}} \\ \end{aligned}$$
where Y was predicted response, A, B, C and D were coded values of process temperature, inoculum size, initial humidity, and concentration of glycerol, respectively.
Analysis of Variance (ANOVA) was used to analyze the results and determine the significance of factors affecting the rhamnolipid production process. ANOVA for obtained response surface model was presented in Table 5. The model F-value of 52.80 and the P-value of < 0.0001 indicated model significance. The coefficient of determination (R2) was 0.9801 indicated that presented model appropriately fit the empirical data. The predicted R2 (0.9034) and adjusted R2 (0.9615) confirmed the model's suitability to predict the rhamnolipid amount produced as a function of model parameters. The lack of fit P-value of 0.2499 implied the lack of fit was not significant relative to the pure error. The greater significance of the component is shown by a lower P-value [31]. Among independent variables, A (temperature), B (inoculum size), and C (humidity) had significant effects on rhamnolipid production as their P-values were lower than 0.05. The quadratic term of four factors and the interactions between B and C, B and D also C and D were significant.
Table 5 ANOVA for the response surface quadratic model
Effect of significant factors on rhamnolipid production
The three-dimensional (3D) graphs of response surfaces were plotted to explain factors' interactions and determine their optimum values to achieve the maximum biosurfactant production. Each figure illustrates two independent variables' effect while the other variables were held at their central values. Convex natures of 3D response surface curves represented in Fig. 2, indicate that the optimum conditions were well defined. Figure 2 showed that the interactions between temperature and inoculum size, humidity, and inoculum size, humidity, and the glycerol percentage were not significant. In contrast, the other two-factor interactions had significant effects.
Response surface plots for rhamnolipid production: A interaction of temperature and inoculum size, B interaction of temperature and humidity, C interaction of inoculum size and humidity, D interaction of temperature and glycerol concentration, E interaction of inoculum size and humidity, F interaction of humidity and glycerol concentration
In contrast to glycerol content, the ANOVA and 3D curve findings showed that the substrate's temperature, inoculum size, and starting humidity were crucial factors in the production of the biosurfactant. One of the most significant and deciding factors in bioprocesses is temperature, and each bacterial species has a different preferred temperature range. Due to Fig. 2A and B, the highest rhamnolipid production by Pseudomonas aeruginosa PTCC 1074 was obtained at the range of 34–35 °C. Since the temperature affects the biochemical reactions in microorganism cells, rhamnolipids production considerably changed at different temperatures [32].
Inoculum size plays an essential role to produce the biosurfactants in SSF. ANOVA results indicated that inoculum size had a considerable effect on the rhamnolipid production. The maximum production of rhamnolipid was obtained when 1.4 mL inoculum of P. aeruginosa was used. Regarding Fig. 2A, C, and E, rhamnolipids production was low for small inoculum sizes because a small number of bacterial cells in culture medium required more time to grow to the optimum number to use the substrate and formation of product. In general, increasing the inoculum size up to a certain point enhances the development of microorganisms and consequently growth-related activities, whereas growing it further decreases microbial activity since there are only so many nutrients available [33].
The humidity of substrate is other critical parameter in SSF to produce the value-added products. This factor is important because microbes grow and produce products on or near the surface of solid substrates containing moisture. At low substrate humidity levels, nutrient solubility reduces, whereas, at high humidity levels, substrate porosity or air content may decrease [34]. Under both conditions, biosurfactant production would decline compared to the optimal level of humidity. The optimal humidity of substrate for the biosurfactant production was 80%.
Since glycerol can act as a co-carbon source, rhamnolipid production improved when glycerol content increased up to 5% (v/v) in saline solution, it declined at higher glycerol contents. This result is in line with those of previous studies [14].
Experimental validation of the optimized condition
The model predicted that the optimal values of significant variables were the temperature of 34.5 °C, inoculum size of 1.4 mL, the humidity of 80%, and glycerol content of 5% (v/v), which were obtained by solving the regression equation using the numerical optimization function in the Design-Expert software. Under optimal circumstances, three further tests were conducted to assess the accuracy of the model in predicting the maximal rhamnolipids production. The rhamnolipid output's mean value was 19.68 g/kg dry substrate, which was in good agreement with the model predicted value (20.13 g/kg dry substrate). Since the kinetics of biosurfactant production was studied in non-optimal conditions, rhamnolipid production under optimal conditions was measured once again for different incubation periods (Fig. 1A). The results showed that the biosurfactant production was increased by 34% in the optimized, when compared to that in unoptimized conditions and proved that the model was appropriate.
Structural characterization
The detection of TLC plate with iodine confirmed the presence of di and mono- rhamnolipid with the Rf values of 0.35 and 0.76. A similar Rf value of 0.38 for di-rhamnolipid and 0.85 for mono-rhamnolipids was observed by Nalini and Parthasarathi [27].
Figure 3 illustrates FT–IR spectrum of purified biosurfactant from P. aeruginosa. The broad peak at 3409 cm−1 revealed the presence of O–H stretching vibrations. The absorption band at 2922 cm−1 showed the asymmetric C–H stretch of CH2 and CH3 groups of aliphatic chains. The corresponding symmetric stretch was seen at 2853 cm−1. The peaks next to 1650 cm−1 were assigned to C=O stretching in protein structure. The fingerprint areas between 400 cm–1 and 1500 cm–1 indicated the C–H deformations at 1453 and 1238, the C–OH deformation at 1386 cm–1, and the symmetric band at 1048 cm–1. The spectrum showed α-pyranyl II sorption band at 832 cm−1, which confirmed the presence of di-rhamnolipid in the mixture. The results of IR spectra indicated the presence of rhamnose rings and hydrocarbon chains in the chemical structure of obtained biosurfactants and are identical with the report of Guo et al. [35].
FT-IR spectrum of rhamnolipid showing the following vibrations: O–H stretching (3409 cm−1), C–H stretching asym. (2922 and 28,563 cm−1), C=O stretching (1650 cm−1), C–H deformations (1453, 1238 and 808 cm−1), C–H/O–H deformation (1386 cm−1), C–O stretching (1048 cm−1), α-pyranyl II sorption band (832 cm.−1)
According to the results of 1H-NMR spectrum (Fig. 4), the characteristic chemical shifts confirmed that the fermentation product was a mixture of two forms of rhamnolipids and had functional groups, bonds and structures which are presence in rhamnolipid type biosurfactants. The chemical shifts at 0.88 ppm indicated the presence of CH3 and characteristic chemical shifts at 1.38, 2.48, 4.99, and 5.57 ppm showed the presence of –(CH2)n–, –CH2COO–, –OCH–, and –COO-CH– groups respectively. The results were comparable to the previous reports [36, 37].
1H-NMR spectrum of produced biosurfactant by Pseudomonas aeruginosa PTCC 1047
Properties of the produced biosurfactant
Determination of critical micelle concentration (CMC)
Commonly used to define the surfactant efficiency is CMC, a crucial parameter of any surface-active chemical. Because less surfactant is needed for saturating surfaces and the production of micelles, efficient surfactants have low CMC values. The surface tension values were measured in a wide range of rhamnolipid concentrations (0–100 mg/L), and results (Fig. 1B) showed that by increasing rhamnolipid concentration up to 70 mg/L, surface tension was reduced from 71.8 ± 0.4 to 32.2 ± 0.2 mN/m and then remained relatively constant. Hence, the value of 70 mg/L was determined for the CMC of produced rhamnolipid, which is very lower than the CMC of sodium dodecyl sulphate, a prevalent synthetic surfactant (2347 mg/L) [38, 39].
CMC value obtained in this study is in conformity with the results of previous reports using agro-industrial residues to produce biosurfactants [40].
Stability analysis
Biosurfactants' stability in different environmental conditions is one of the most important factors for their application in various fields. The biosurfactants' stability at different temperatures, pH values, and salt concentrations was evaluated using %EI24. Due to the results (Fig. 5A), the highest extent of emulsification was observed at 30 °C. Increasing temperature to 70 °C had no significant effect on rhamnolipid yields. Thermal stability in the range of 30–70 °C is a valuable attribute of generated rhamnolipid in these industries, since thermal processing is employed in these sectors to establish sterile conditions [41]. Regarding pH effect, the highest %EI24 was obtained at pH 7, and no significant decrease was observed in %EI24 at pH 6–10, but the index significantly decreased at pH 3–4 (Fig. 5B). In other words, the rhamnolipid biosurfactant was more stable at basic pH than at acidic pH. It can be attributed to the precipitation of anionic biosurfactants such as rhamnolipid at low pH values and the greater stability of fatty acid surfactant micelles at high pH values. Similar results were reported in previous studies [41, 42].
Stability studies of rhamnolipid under different temperature (A), pH (B), and salinity (C)
Evaluating the biosurfactant stability at different sodium chloride concentrations, presented in Fig. 5C, showed that the highest stability pertained to the concentration of 1% (w/v), and increasing the concentration to 6% (w/v) did not cause a significant change in %EI24. It indicated that the biosurfactant was stable in a suitable concentration range. Due to these results, the produced rhamnolipid could be used in different fields [43].
Results showed the potential use of soybean meal as substrate for the production of rhamnolipids biosurfactant using Pseudomonas aeruginosa PTCC 1074 under SSF. Screening the key nutritional and environmental parameters indicated that glycerol concentration, humidity, temperature, and inoculum size strongly affected rhamnolipid production. The biosurfactant production was increased by 34% in the optimized, when compared to that in unoptimized conditions. The quadratic model's suitability and accuracy were established by validation trials, and the findings demonstrated that the projected values and experimental data agreed well. Rhamnolipids were found in the biosurfactant after being characterized by TLC, FT-IR, and 1H-NMR.Rhamnolipid biosurfactant represented high surface activity and good stability over a wide range of temperatures, pH, and sodium chloride concentrations, making it a potential candidate for use in different applications.
Pseudomonas aeruginosa (PTCC 1074), a potent biosurfactant producer, was obtained from Persian Type Culture Collection (PTCC). The strain was maintained on nutrient agar slants at 4 °C and subculture before using inoculums for biosurfactant production. Then, a loop of cells was transferred into 50 mL LB broth in 250 mL Erlenmeyer flask and incubated at 30 °C until the growth of Pseudomonas aeruginosa achieved mid-exponential phase at an optical density of 0.6 to 0.8 at 600 nm. Then, this culture was utilized as inoculum for SSF.
Soybean meal was obtained from the local market and grinned in a mixer grinder and passed through the standard sieves No.35 and No.10 with a mean particle size of 0.5 and 1.5 mm, respectively, washed, dried, and stored until further use.
Production of biosurfactant by SSF
Fermentation experiments were carried out in a 250 mL Erlenmeyer flask containing 5 g of soybean meal. To do it, a salt solution and various amounts of water were added to obtain the desired humidity. The salt solution consisted of (g/l): KH2PO4 3, K2HPO4 7, different amount of MgSO4.7H2O, NaNO3, and glycerol (v/v %). Then, flask was sterilized in an autoclave for 15 min at 121 °C, and after cooling to room temperature, different amounts of inoculum of Pseudomonas aeruginosa were added. The inoculated flasks were incubated at various temperatures due to the designed experiment runs for 240 h.
Biosurfactant extraction
Acid precipitation and liquid–liquid extraction methods were used to extract biosurfactants from SSF. Each SSF flask received 50 mL of distilled water and was agitated for 1 h at 200 rpm at 30 °C. Then, obtained suspension has passed through cheesecloth, and excess liquid was squeezed out. This procedure was carried out triplicate, and the extract was then centrifuged for 15 min at 10,000 × g. The supernatants' pH was adjusted to approximately 2 with 2 N HCl, and biosurfactants were extracted three times with chloroform–methanol (2:1, v/v). The organic phase was concentrated in a vacuum evaporator, and the obtained biosurfactant was stored for further analysis.
Quantification of rhamnolipids
The concentration of extracellular rhamnolipids was evaluated in triplicate by quantifying rhamnose concentration using the orcinol method. The extracted biosurfactant was dissolved in water, and 50 μL of this sample, was mixed with 450 μL of orcinol reagent (0.19% orcinol in 53% sulphuric acid). The formed mixture was heated at 80 °C for 30 min and cooled to room temperature. The rhamnose content was determined by measuring the mixture's absorbance at 421 nm and comparing the data with a standard curve prepared using different L-rhamnose [44]. The rhamnose moiety constitutes only part of rhamnolipid molecule; therefore, rhamnolipid concentration is obtained by multiplying rhamnose content by a correction factor ranging from 3.0 to 3.4 [45, 46]. An average value of 3.2 was considered.
Design of experiment
Two-level fractional factorial design
At the first optimization step, a two-level fractional factorial design was employed to identify which process parameter significantly affects rhamnolipid production. Eight major factors pH, concentration of glycerol (%), amount of MgSO4.7H2O (g), humidity (%), temperature (°C), size of substrate (nm), inoculum size (mL) and amount of NaNO3 (g)); were studied at two levels, high (+ 1) and low ( − 1), using 28–4 fractional factorial design. Table 2 lists the factors and their levels in experimental design. A total of 16 experimental runs were performed in duplicates to complete the design, and biosurfactant's production was measured as a response variable. Table 2 also shows the fractional factorial design and corresponding observed and predicted results. Considering regression analysis, factors with a P-value lower than 0.05 statistically have a significant effect on biosurfactant production. P-values were used as a suitable method to examine the importance of model parameters, which was required to comprehend the pattern of reciprocal interactions among the most important elements.
Path of the steepest ascent
As FFD can't predict the actual optimum values of variables, the method of steepest ascent was employed to move rapidly to the region of optimum operating conditions. In the steepest ascent experiments, the main variables moved in the directions of maximum increase in the response. In this way, the steps through the steepest ascent path were proportionate to regression coefficients gained from FFD and the experiments were performed until no further increases in the response were observed. This point is close to the optimal point which could be considered a center point for optimization [47].
Central composite designs (CCD) or RSM
Due to the screening results with FFD, the factors with significant effects on biosurfactant production and their interaction effects were analyzed and optimized by RSM using a CCD. CCD was widely used as an effective method for fitting multivariate nonlinear equations to optimize process variables [30]. For four factors, CCD was made up 24 runs at factorial points, consisting of possible combinations of + 1 and − 1 levels of the factor, augmented with six replicate runs at the center point and eight runs at axial points, which have one factor at an axial distance (α) from the center. In contrast, the other factor is at level 0. To obtain a rotatable design, a value of 2 was considered for axial distance (α). The response surface regression procedure was applied to analyze the experimental results. A second-order polynomial equation for correlation among independent variables which the response can be presented as follows:
$$\begin{aligned} {\text{Y}} = & \, \beta_{0} + \, \beta_{{1}} {\text{A }} + \, \beta_{{2}} {\text{B }} + \, \beta_{{3}} {\text{C }} + \, \beta_{{4}} {\text{D }} + \, \beta_{{{11}}} {\text{A}}^{{2}} + \, \beta_{{{22}}} {\text{B}}^{{2}} + \, \beta_{{{33}}} {\text{C}}^{{2}} + \, \beta_{{{44}}} {\text{D}}^{{2}} \\ & + \, \beta_{{{12}}} {\text{AB }} + \, \beta_{{{13}}} {\text{AC }} + \, \beta_{{{14}}} {\text{AD }} + \, \beta_{{{23}}} {\text{BC }} + \beta_{{{24}}} {\text{BD }} + \, \beta_{{{34}}} {\text{CD}} \\ \end{aligned}$$
where Y is predicted response, β0 is offset term; β1, β2, β3, and β4 are linear coefficients; β11, β22, β33, and β44 are quadratic coefficients; β12, β13, β14, β23, β24 and β34 are interaction coefficients; A, B, C and D are independent variables. The "Design Expert 7.0" software was applied to experimental data analysis and to obtain response surface curves to optimize the variables.
Characterization of the biosurfactant
Thin layer chromatography (TLC)
The obtained biosurfactant was analyzed by TLC using silica gel 60 G (Merck) and a solvent mixture of chloroform–methanol–water (65:15:2, v/v/v). The spots were detected by iodine reagent, and Rf value of each macromolecule was noted using the following formula [48]:
$$R_{f} = {\text{ Distance}}\;{\text{ travelled}}\;{\text{ by }}\;{\text{substance}}/{\text{Distance}}\,{\text{ travelled}}\;{\text{ by}}\;{\text{ the}}\;{\text{ solvent}}.$$
Fourier transform infrared spectroscopy (FTIR)
The FT-IR spectrum of crude biosurfactant was recorded using KBr pellet as a background reference in JASCO 4600 FTIR spectrophotometer. IR spectra were reported in 500–4000 wave numbers (cm−1).
Nuclear magnetic resonance spectroscopy (NMR)
For the NMR analysis, the produced biosurfactant was re-dissolved in deuterated chloroform, and 1H spectra was measured using a Bruker Avance DRX 500 MHz spectrometer.
Properties of produced biosurfactant
Du-Nouy ring method was used to measure CMC by Tensiometer (Kruss K6, Germany) [49]. In this method, a platinum ring is submerged in the liquid and then slowly removed. The force required to remove the ring from liquid surface is considered as the surface tension. Surface tension was measured at different biosurfactant concentrations, and the plot of surface tension vs. concentration was obtained. CMC represents the concentration at which micelles start, and a sudden drop in surface tension is observed [41].
Stability study
Stability studies were evaluated regarding environmental conditions using the Emulsification Index (% EI24). To measure % EI24, 2 mL of cell-free supernatant was added to 2 mL of kerosene, and the mixture vortexed for 2 min. After 24 h, % EI24 was calculated using following equation [50]: Eq. (6)
$${\text{\% EI}}_{24} = \frac{{\text{Height of emulsified layer}}}{{{\text{Height of total liquid }}\left( {{\text{sum of aqueous}},{\text{ kerosene and emulsified layer}}} \right)}} \times 100$$
To investigate the thermal stability of produced biosurfactant, cell-free supernatant was stored at constant temperatures of 20, 30, 40, 50, 60, 70, and 80 °C for 30 min. Then, it was cooled to the room temperature (25 °C), and % EI24 was calculated. To determine biosurfactant stability at different pH values, the cell-free supernatant's pH was adjusted in the range of 3–12 using 1 N NaOH and 1 N HCl; % EI24 was measured after 30 min. To study the effect of sodium chloride on rhamnolipid biosurfactant, the purified biosurfactant was dissolved in distilled water containing various concentrations of NaCl (% w/v); % EI24 was determined after 30 min [51].
EOR:
SSF:
Solid state fermentation
FFD:
Fractional factorial design
RSM:
SMF:
Submerged fermentation
PTCC:
Persian type culture collection
Central composite designs
TLC:
FTIR:
NMR:
Nuclear magnetic resonance spectroscopy
CMC:
Critical micelle concentration
% EI24 :
Emulsification index
ANOVA:
Analysis of variance
Chandrasekaran EV, BeMiller JN, Song-Chiau DL. Isolation, partial characterization, and biological properties of polysaccharides from crude papain. Carbohydr Res. 1987;860:105–15.
Desai JD, Banat IM. Microbial production of surfactants and their commercial potential. Microbiol Mol Biol R. 1997;61:47–64.
Zambry NS, Rusly NS, Awang MS, Noh NAM, Yahya ARM. Production of lipopeptide biosurfactant in batch and fed-batch Streptomyces sp. PBD-410L cultures growing on palm oil. Bioprocess Biosyst Eng. 2021. https://doi.org/10.1007/s00449-021-02543-5.
Kiran GS, Hema TA, Gandhimathi R, Selvin J, Thomasa TA, Ravji TR, Natarajaseenivasan K. Optimization and production of a biosurfactant from the sponge-ssociated marine fungus Aspergillus ustus MSF3. Colloids Surf B. 2009. https://doi.org/10.1016/j.colsurfb.2009.05.025.
Saharan BS, Sahu RK, Sharma D. A review on biosurfactants: fermentation. Current developments and perspectives. J Genet Eng Biotechnol. 2011;29:1–39.
Batista SB, Mounteer AH, Amorim FR, Totola MR. Isolation and characterization of biosurfactant/bioemulsifier-producing bacteria from petroleum contaminated sites. Bioresour Technol. 2006. https://doi.org/10.1016/j.biortech.2005.04.020.
Hoskova M, Schreiberova O, Jezdik R, Chudoba J, Masak J, Sigler K, Rezanka T. Characterization of rhamnolipids produced by non-pathogenic Acinetobacter and Enterobacter bacteria. Bioresour Technol. 2013. https://doi.org/10.1016/j.biortech.2012.12.085.
Moya Ramírez I, Tsaousi K, Rudden M, Marchant R, Jurado-Alameda E, García Román M, Banat IM. Rhamnolipid and surfactin production from olive oil mill waste as sole carbon source. Bioresour Technol. 2015. https://doi.org/10.1016/j.biortech.2015.09.012.
Vasconcellos SP, Dellagnezze BM, Wieland A, Klock JH, Santos Neto EV, Marsaioli AJ, Oliveira VM, Michaelis W. The potential for hydrocarbon biodegradation and production of extracellular polymeric substances by aerobic bacteria isolated from a Brazilian petroleum reservoir. W J Microbiol Biotechnol. 2011. https://doi.org/10.1007/s11274-010-0581-6.
Thakur P, Saini NK, Thakur VK, Gupta VK, Saini RV, Saini AK. Rhamnolipid the glycolipid biosurfactant: emerging trends and promising strategies in the feld of biotechnology and biomedicine. Microb Cell Fact. 2021. https://doi.org/10.1186/s12934-020-01497-9.
Cameotra SS, Singh P. Synthesis of rhamnolipid biosurfactant and mode of hexadecane uptake by Pseudomonas species. Microb Cell Fact. 2009. https://doi.org/10.1186/1475-2859-8-16.
Tiso T, Thies S, Müller M, Tsvetanova L, Carraresi L, Bröring S, Jaeger KE, Blank LM. Rhamnolipids: production, performance, and application. In: Lee SY, editor. Consequences of microbial interactions with hydrocarbons, oils, and lipids: production of fuels and chemicals. Cham, Switzerland: Springer International Publishing; 2017.
Müller MM, Kügler JH, Henkel M, Gerlitzki M, Hörmann B, Pöhnlein M, Syldatk C, Hausmann R. Rhamnolipids—Next generation surfactants? J Biotechnol. 2012. https://doi.org/10.1016/j.jbiotec.2012.05.022.
Neto DC, Bugay C, Santana-Filho AP, Joslin T, Souza LM, Sassaki GL, Mitchell DA, Krieger N. Production of rhamnolipids in solid-state cultivation using a mixture of sugarcane bagasse and corn bran supplemented with glycerol and soybean oil. Appl Microbiol Biotechnol. 2011. https://doi.org/10.1007/s00253-010-2987-3.
Vera ECS, Azevedo POS, Domínguez JM, Oliveir RPS. Optimization of biosurfactant and bacteriocin-like inhibitory substance (BLIS) production by Lactococcus lactis CECT-4434 from agro industrial waste. Biochem Eng J. 2018. https://doi.org/10.1016/j.bej.2018.02.011.
Maass D, Ramírez IM, Román MG, Alameda EJ, Ulson de Souza AA, Valle JAB, Vaz DA. Two-phase olive mill waste (alpeorujo) as carbon source for biosurfactant production. J Chem Technol Biotechnol. 2016. https://doi.org/10.1002/jctb.4790.
Mnif I, Ellouze-Chaabouni S, Ghribi D. Economic production of Bacillus subtilis SPB1 biosurfactant using local agro-industrial wastes and its application in enhancing solubility of diesel. J Chem Technol Biotechnol. 2013. https://doi.org/10.1002/jctb.3894.
Lotfabad TB, Ebadipour N, RoostaAzad R. Evaluation of a recycling bioreactor for biosurfactant production by Pseudomonas aeruginosa MR01 using soybean oil waste. J Chem Technol Biotechnol. 2016. https://doi.org/10.1002/jctb.4733.
George S, Jayachandran K. Analysis of rhamnolipid biosurfactants produced through submerged fermentation using orange fruit peelings as sole carbon source. Appl Biochem Biotech. 2009. https://doi.org/10.1007/s12010-008-8337-6.
Rocha MVP, Souza MC, Benedicto SC, Bezerra MS, Macedo GR, Pinto GA, Gonçalves LRB. Production of biosurfactant by Pseudomonas aeruginosa grown on cashew apple juice. Appl Biochem Biotech. 2007. https://doi.org/10.1007/s12010-007-9050-6.
Lee B, Kim EK. Lipopeptide production from Bacillus sp. GB16 using a novel oxygenation method. Enzyme Microb Technol. 2004. https://doi.org/10.1016/j.enzmictec.2004.08.017.
Yeh MS, Wei TH, Chang JS. Bioreactor design for enhanced carrier-assisted surfactin production with Bacillus subtilis. Process Biochem. 2006. https://doi.org/10.1016/j.procbio.2006.03.027.
Holker U, Lenz J. Solid-state fermentation—are there any biotechnological advantages? Curr Opin Microbiol. 2005. https://doi.org/10.1016/j.mib.2005.04.006.
Pandey A. Solid-state fermentation. Biochem Eng J. 2003;13:81–4. https://doi.org/10.1016/S1369-703X(02)00121-3.
Neto DC, Meira JA, Araújo JM, Mitchell DA, Krieger N. Optimization of the production of rhamnolipids by Pseudomonas aeruginosa UFPEDA 614 in solid-state culture. Appl Microbiol Biotechnol. 2008. https://doi.org/10.1007/s00253-008-1663-3.
El-Housseiny GS, Aboshanab KM, Aboulwafa MM, Hassouna NA. Rhamnolipid production by a gamma ray-induced Pseudomonas aeruginosa mutant under solid state fermentation. AMB Exp. 2019;9:7.
Nalini S, Parthasarathi R. Production and characterization of rhamnolipids produced by Serratia rubidaea SNAU02 under solid-state fermentation and its application as biocontrol agent. Bioresour Technol. 2014. https://doi.org/10.1016/j.biortech.2014.09.051.
Chang SH, Teng TT, Ismail N. Screening of factors influencing Cu (II) extraction by soybean oil-based organic solvents using fractional factorial design. J Environ Manage. 2011. https://doi.org/10.1016/j.jenvman.2011.05.025.
Heidary Vinche M, Khanahmadi M, Ataei SA, Danafar F. Optimization of process variables for production of beta-glucanase by Aspergillus niger CCUG33991 in solid-state fermentation using wheat bran. Waste Biomass Valor. 2021. https://doi.org/10.1007/s12649-020-01177-0.
Rodrigues L, Teixeira J, Oliveira R, Mei H. Response surface optimization of the medium components for the production of biosurfactants by probiotic bacteria. Process Biochem. 2006. https://doi.org/10.1016/j.procbio.2005.01.030.
Pal MP, Vaidya BK, Desai KM, Joshi RM, Nene SN, Kulkarn BD. Media optimization for biosurfactant production by Rhodococcus erythropolis MTCC 2794: artificial intelligence versus a statistical approach. J Ind Microbiol Biotech. 2009. https://doi.org/10.1007/s10295-009-0547-6.
Najafi AR, Rahimpour MR, Jahanmiri AH, Roostaazad R, Arabian D, Soleimani M, Jamshidnejad Z. Interactive optimization of biosurfactant production by Paenibacillus alvei ARN63 isolated from an Iranian oil well. Colloids Surf B Biointerfaces. 2011. https://doi.org/10.1016/j.colsurfb.2010.08.010.
Kashyap P, Sabu A, Pandey A, Szakacs G, Soccol CR. Extra-cellular Lglutaminase production by Zygosaccharomyces rouxii under solid-state fermentation. Process Biochem. 2002. https://doi.org/10.1016/S0032-9592(02)00060-2.
Zadrazil F, Brunnert H. Investigation of physical parameters important for the SSF of straw by white rot fungi. Eur J Appl Microbiol Biotech. 1981. https://doi.org/10.1007/BF00511259.
Guo YP, Hu YY, Gu RR, Lin H. Characterization and micellization of rhamnolipidic fractions and crude extracts produced by Pseudomonas aeruginosa mutant MIG-N146. J Colloid Interface Sci. 2009. https://doi.org/10.1016/j.jcis.2008.11.039.
Lotfabad TB, Abassib H, Ahmadkhaniha R, Roostaazada R, Masoomi F, Zahiri HS, Ahmadian G, Vali H, Noghabi KA. Structural characterization of a rhamnolipid-type biosurfactant produced by Pseudomonas aeruginosa MR01: enhancement of di-rhamnolipid proportion using gamma irradiation. Colloids Surf, B. 2010. https://doi.org/10.1016/j.colsurfb.2010.06.026.
Moussa TAA, Mohamed MS, Samak N. Production and characterization of di-rhamnolipid produced by Pseudomonas aeruginosa TMN. Braz J Chem Eng. 2014. https://doi.org/10.1590/0104-6632.20140314s00002473.
Datta P, Tiwaria P, Pandey LM. Isolation and characterization of biosurfactant producing and oil degrading Bacillus subtilis MG495086 from formation water of Assam oil reservoir and its suitability for enhanced oil recovery. Bioresour Technol. 2018. https://doi.org/10.1016/j.biortech.2018.09.047.
Velioglu Z, Ozturk UR. Optimization of cultural conditions for biosurfactant production by Pleurotus djamor in solid state fermentation. J Biosci Bioeng. 2015. https://doi.org/10.1016/j.jbiosc.2015.03.007.
Borah SN, Sen S, Goswami L, Bora A, Pakshirajan K, Deka S. Rice based distillers dried grains with soluble as a low cost substrate for the production of a novel rhamnolipid biosurfactant having anti-biofilm activity against Candida tropicalis. Colloids Surf B. 2019. https://doi.org/10.1016/j.colsurfb.2019.110358.
Manivasagan P, Sivasankar P, Venkatesan J, Sivakumar K, Kim SK. Optimization, production and characterization of glycolipid biosurfactant from the marine actinobacterium, Streptomyces sp. MAB36. Bioprocess Biosyst Eng. 2014. https://doi.org/10.1007/s00449-013-1048-6.
Prieto L, Michelon M, Burkert J, Kalil S, Burkert C. The production of rhamnolipid by a Pseudomonas aeruginosa strain isolated from a southern coastal zone in Brazil. Chemosphere. 2009. https://doi.org/10.1016/j.chemosphere.2008.01.003.
Lovaglioa RB, Santos FJ, Junior MJ, Contiero J. Rhamnolipid emulsifying activity and emulsion stability: pH rules. Colloids Surf B. 2011. https://doi.org/10.1016/j.colsurfb.2011.03.001.
Candrasekaran EV, Bemiller JN. Constituent analyses of glycosamino-glycans. Methods Carbohydr Chem. 1980;8:89–96.
Itoh S, Honda H, Tomita F, Suzuki T. Rhamnolipids produced by Pseudomonas aeruginosa grown on n-paraffin (mixture of C12, C13 and C14 fractions). J Antibiot. 1971. https://doi.org/10.7164/antibiotics.24.855.
Benincasa M, Contiero J, Manresa MA, Moraes IO. Rhamnolipid production by Pseudomonas aeruginosa LBI growing on soapstock as the sole carbon source. J Food Eng. 2002. https://doi.org/10.1016/S0260-8774(01)00214-X.
Chen XC, Bai JX, Cao JM, Li ZJ, Xiong J, Zhang L, Hong Y, Ying HJ. Medium optimization for the production of cyclic adenosine 3',5'-monophosphate by Microbacterium sp. no. 205 using response surface methodology. Bioresour Technol. 2009. https://doi.org/10.1016/j.biortech.2008.07.062.
Anna Joice P, Parthasarathi R. Optimization and Production of biosurfactant from Pseudomonas aeruginosa PBSC1. Int J Curr Microbiol App Sci. 2014;3(9):140–51.
Pavitran S, Balasubramanian S, Kumar P, Bisen PS. Emulsification and utilization of high-speed diesel by a Brevibacterium species isolated from hydraulic oil. World J Microbiol Biotech. 2004. https://doi.org/10.1007/s11274-004-8714-4.
Cooper DG, Goldenberg BG. Surface-active agents from two Bacilllus species. Appl Environ Microbiol. 1987;53:224.
Ghojavand H, Vahabzadeh F, Roayaei E, Shahraki AK. Production and properties of a biosurfactant obtained from a member of the Bacillus subtilis group (PTCC 1696). J Colloid Interface Sci. 2008. https://doi.org/10.1016/j.jcis.2008.05.001.
The authors wish to express their appreciation to the Shahid Bahonar University of Kerman and Chabahar Maritime University for their partial support of this research.
The authors did not receive support from any organization for the submitted work.
Department of Chemical Engineerig, Faculty of Engineering, Shahid Bahonar University of Kerman, Kerman, Iran
Shima Dabaghi & Seyed Ahmad Ataei
Fisheries Department, Faculty of Marine Sciences, Chabahar Maritime University, Chabahar, Iran
Ali Taheri
Shima Dabaghi
Seyed Ahmad Ataei
ShD carried out this research work for her Ph. D. degree and SAA and AT supervised the study, conceived of the study, and participated in its design and coordination. All authors read and approved the final manuscript.
Correspondence to Seyed Ahmad Ataei.
The authors have no competing interests to declare that are relevant to the content of this article.
Dabaghi, S., Ataei, S.A. & Taheri, A. Production of rhamnolipid biosurfactants in solid-state fermentation: process optimization and characterization studies. BMC Biotechnol 23, 2 (2023). https://doi.org/10.1186/s12896-022-00772-4
Biosurfactant
Solid-state fermentation
Agro-industrial residues
|
CommonCrawl
|
Getting deformation energy from DIC (in progress)
Lanning, W. R. and Muhlstein, C. L. (In preparation) The energetic interpretation of strain fields around a propagating crack in ductile thin sheets
Digital image correlation (DIC) can measure the surface deformation of materials from sequences of images, essentially telling is when and where material is changing shape. However, our ability to measure force is limited since we can only place sensors at the specimen boundaries. Thus, we are faced with a problem: can we compute the energy that is stored, absorbed, and released during fracture with from nothing but experimentally measured data (no simulations or constitutive laws)? Spoiler: we can!
Error propagation through DIC analysis (in progress)
Collins, J. G., Lanning, W. R., and Muhlstein, C. L. (In preparation) A Monte Carlo strain field mining methodology to identify and minimize image boundary-induced errors in interpolated fields
Digital image correlation (DIC) can be a very flashy technique for measuring strain. It produces attractive color-coded spatial distributions of surface strains that jazz up any conference presentation. But when it comes time to analyze that data and use it as the basis of further computations (stress, stress intensity factor, work density, etc.), how far can we trust the output?
This leads us to an interesting problem: the interpolation strategies we use to fill in the strains between discrete measurements can have unexpected consequences. If we apply anything higher-order than linear interpolation, such as a parabolic or cubic scheme, we get much prettier displays of the data, but this can also introduce errors and artifacts.
The main thrust of this manuscript is James Collins' work probing the propagation of random errors through DIC strain analyses. He then used those error metrics to design analyses which were less sensitive to random errors. Unfortunately, I can't divulge exactly how and why his approaches worked until this goes to press, but it is very clever! However, I can talk a little more about my contributions to error propagation through DIC analyses.
My contributions to this paper relate to mathematical fundamentals underlying interpolation schemes. When we pick a particular fitting method, we are implicitly assigning a model to the data, and that model requires a certain amount of input. It takes two points to define a line, three to define a parabola, four to define a cubic function, and so on. Adding more points than required lets us build up confidence statistics in the fitted model. Thus, we can look at the features of an interpolated strain field from DIC and immediately know which features we can trust most – those that cover enough area to encompass many data points. Those features amount to calculations which have multiple measurements to back them up.
If you ever want to be a pest at a conference, you can seek out DIC-based talks and see if they know how to tell analysis artifacts from actual material deformation. Look at any features of a strain field (maxima, minima, periodic strains, etc.) and compare them to the spatial distribution of the points which were tracked by the DIC analysis. If the strain falls below a certain threshold, it is very likely to be an artifact of the interpolation scheme. Here is a (very) rough calculation to find out of a strain is more likely to come from the propagation of error through the analysis than due to actual material deformation:
$$ \epsilon < \frac{m_{interp order}}{n_{trackedpoints}} \times \frac{\Delta_{tracking resolution}}{ d_{\epsilon feature} } $$
Also, beware any claim that "DIC measures strain at the resolution of the image." The fact that image resolution is measured in length units while strain is unitless should be a dead giveaway how wrong that notion is. DIC might be able to measure displacement at the resolution of the image, or even at sub-pixel levels with a good tracking algorithm. However, strain is derived from some kind of fit to the data, and strain resolution should be at least judged in terms of the quality of the fit (i.e. the distribution of residuals).
A novel way of looking at plastic zones
Javaid S.S., Lanning, W.R., and Muhlstein, C.L. (2019) The development of zones of active plasticity during mode I steady-state crack growth in thin aluminum sheets, Engineering Fracture Mechanics, Volume 218
Traditionally, the "plastic zone" around a crack tip is the greatest extent of plastic deformation, a high-water mark of irreversible material flow. This makes sense in the context of high-strength materials since the plastic zone in such materials grows in a very limited fashion around a crack tip which moves very little before the specimen fractures. But in highly ductile materials, where crack growth occurs in a continuous process rather than a single burst, the story is different.
Thin ductile sheets exhibit stable crack growth in Mode I loading. The crack propagates as the specimen is pulled apart perpendicular to the crack face. In such a case, analyses such as the essential work of fracture (EWF), can compute the average energy needed to propagate a crack. However, EWF also applies the traditional concept of a static plastic zone while the crack tip is moving. It makes more sense to use a definition of the plastic zone that accounts for a moving crack tip and changing distribution of strain.
If the plastic zone (PZ) describes the maximum extent of plasticity over an entire process, we can define the zone of active plasticity (ZAP) as the region where plastic deformation is occurring at any moment in time. In this study, S.S. Javaid applied the ZAP concept from my dissertation to double-edge notch tensile specimens like those used in EWF experiments. He was able to measure the changing ZAP extent and shape over the course of the experiments using DICTograFer, the digital image correlation (DIC) suite I developed at Georgia Tech.
A new way to measure crack growth Resistance in Thin Ductile Sheets
Lanning, W. R., Johnson, C., Javaid, S. S., and Muhlstein, C. L. (2019) Mode I steady-state crack propagation through a fully-yielded ligament in thin ductile metal foils, Theoretical and Applied Fracture Mechanics, Volume 101, Pages 141-151
Many publications report shockingly low fracture toughnesses in ultra-thin metal specimens. However, those reports are based on fracture toughness measurements which attempt to apply conventional deformation models to the ultra-thin systems while simultaneously blaming exotic deformation mechanisms for the unusual results. By strategically comparing different specimen geometries on correlation plots of crack growth driving force vs. normalized crack length, I demonstrated that LEFM analyses produce plausible-looking, but entirely artificial fracture toughness measurements. Crack propagation in ductile thin sheets is actually controlled by critical stress (alternatively viewed as a work density gradient), which indicates that thin ductile sheets converge to steady-state crack propagation after the initiation and transition stages of crack growth.
In this publication, I was still calling my approach a "plastic collapse" analysis because it used the same axes and parameters as its namesake in Broek's excellent fracture mechanics book. But I moved on from that nomenclature in later work because it was a bit misleading. The classic plastic collapse analysis used the critical net section stresses from a population of notched specimens while my analysis was an in-situ measurement of the stress as a crack propagated through a single specimen.
A cautionary tale about applying fracture mechanics in thin sheets
Lanning, W. R., Johnson, C., Javaid, S. S., and Muhlstein, C. L. (2017) Reconciling fracture toughness parameter contradictions in thin ductile metal sheets. Fatigue Fract Engng Mater Struct, 40: 1809–1824
Be careful using linear-elastic fracture mechanics on ductile metals! Conventionally, one might use a K to measure fracture toughness then use that value of K along with the yield strength to determine whether the measurement was valid. But low K measurements in thin sheets do not necessarily mean the plastic zone was small, and this has major ramifications for recent reports of ultra-low fracture toughness thin films and nanowires.
Using DIC to predict bonded joint strength
Collins, J. G., Dillon, G. P., Strauch, E. C., Lanning, W. R., and Muhlstein, C. L. (2016) Correlating bonded joint deformation with failure using a free surface strain field mining methodology. Fatigue Fract Engng Mater Struct, 39: 1124–1137
My mentor, James Collins, discovered that the surface strain maps of bonded joints could be used to predict the strength of the joint even if you were not looking at the part of the joint that failed!
I developed the math related to screening the strain field features used in the analysis. Look for the portion of the paper where the mean value theorem is invoked. The core concept is that our ability to confidently calculate strain increases with the number of displacement measurements and the length scale over which those measurements were taken. A small number of measurements taken close together can only be used to find large strains. A large number of measurements spread over a large area could be used to find smaller strains. The exact scaling effect can be computed from a principle similar to the standard error, modified for spatial measurements.
How do the layers in MLCCs affect their strength?
Lanning, W. R. and Muhlstein, C. L. (2013), Strengthening mechanisms in MLCCs: residual stress versus crack tip shielding. J. Am. Ceram. Soc., 97: 283-289
Some previous publications reported that multilayer ceramic capacitors (MLCCs) with denser electrode arrays were more resistant to cracking than those with fewer electrodes. They attributed the strengthening effect to direct interactions between the crack tip and electrodes. However, I found that the capacitors were actually strengthened by residual stresses resulting from the tape-casting and sintering process used to manufacture MLCCs. This suggests that other composite systems could be strengthened by strategic use of thermal expansion mismatch between the metal reinforcement and ceramic matrix.
This work also addresses a common belief that because the dielectric breakdown strength and mechanical strengths of MLCCs can have similar distributions, both strengths are controlled by similar flaw populations. However, we found that mechanical failures initiate at the specimen surfaces. Electrical breakdown must occur between the electrodes in the interior of the device. So the flaw populations may be similar. I.e. perturbations at the electrode/dielectric interface initiate dielectric breakdown and surface pores initiate mechanical fracture. But the failures are segregated to different regions of the device, so the failure-initiating flaw populations are segregated spatially, and the electrical and mechanical failures cannot be controlled by the same flaw populations.
|
CommonCrawl
|
A fusion optimization algorithm of network element layout for indoor positioning
Xiao-min Yu1,2,
Hui-qiang Wang1,
Hong-wu Lv1,
Xiu-bing Liu1 &
Jin-qiu Wu1
The indoor scene has the characteristics of complexity and Non-Line of Sight (NLOS). Therefore, in the application of cellular network positioning, the layout of the base station has a significant influence on the positioning accuracy. In three-dimensional indoor positioning, the layout of the base station only focuses on the network capacity and the quality of positioning signal. At present, the influence of the coverage and positioning accuracy has not been considered. Therefore, a network element layout optimization algorithm based on improved Adaptive Simulated Annealing and Genetic Algorithm (ASA-GA) is proposed in this paper. Firstly, a three-dimensional positioning signal coverage model and a base station layout model are established. Then, the ASA-GA algorithm is proposed for optimizing the base station layout scheme. Experimental results show that the proposed ASA-GA algorithm has a faster convergence speed, which is 16.7% higher than the AG-AC (Adaptive Genetic Combining Ant Colony) algorithm. It takes about 25 generations to achieve full coverage. At the same time, the proposed algorithm has better coverage capability. After optimization of the layout of the network element, the effective coverage rate is increased from 89.77 to 100% and the average location error decreased from 2.874 to 0.983 m, which is about 16% lower than the AG-AC algorithm and 22% lower than the AGA (Adaptive Genetic Algorithm) algorithm.
Statistics show that about 80% of people's living and working environment is indoors. The location service in the indoor environment plays an extremely important role in the control of objects in industrial production lines, location and navigation in public places, the care of young and old people, and intelligent entertainment [1, 2]. In order to improve the effective coverage of the indoor positioning signal and the indoor positioning accuracy, a reasonable base station layout is particularly important, which can improve the positioning accuracy while reducing the deployment cost.
Due to the NLOS of the indoor environment and the complexity of the indoor structure, the Global Positioning System (GPS) is not available for indoor positioning. Because indoor environment is complex, all sorts of electromagnetic wave can produce the change of signal characteristic because of reflex, refraction, and diffraction. We can use this kind of characteristic change to realize indoor communication, indoor position, etc. Electromagnetic (EM) waves with helical wave front carry orbital angular momentum (OAM), which is associated with the azimuthal phase of the complex electric field. OAM is a new degree of freedom in EM waves and is promising for channel multiplexing in the communication system [3, 4]. Currently, positioning technologies based on Bluetooth, WiFi [5], and UWB [6] can achieve good positioning effects, but lack a unified wide-area positioning network. In the coming 5G era, the integration of communication and navigation networks of heterogeneous cellular networks is an important trend in indoor positioning [7, 8], which can provide communication and positioning services at the same time avoiding additional resource overhead.
However, the complex of the indoor environment structure leads to signal loss, reflection, refraction, and diffraction and even the positioning terminal cannot be covered by multiple base stations at the same time, resulting in an increasement in positioning error or failure to provide positioning requirements. In the 3D positioning, the positioning terminal must ensure that the coverage of at least four base stations is received at the same time to satisfy the positioning condition. By properly arranging the layout of the base station, the number of the first path received by the terminal can be effectively improved, thereby improving the positioning accuracy.
The main contributions of this paper are:
Presents an indoor positioning base station layout optimization method based on ASA-GA, which taking the two factors—positioning signal coverage ratio and positioning accuracy—into consideration to optimize the base station layout scheme.
Takes the improved adaptive genetic algorithm as the main body of the algorithm, and integrates the improved simulated annealing mechanism to further adjust and optimize the population, so can improve the convergence speed and optimization quality of the algorithm.
The rest of this paper is organized as follows. Related work is presented in Section 2. Section 3 describes network element optimization layout model and ASA-GA algorithm. In Section 4, the simulation scenes are described. In Section 5, the performance evaluation of the ASA-GA algorithms in terms of signal coverage, positioning error, and iteration number is given. Finally, Section 6 gives conclusions and outlines the future work.
The positioning accuracy can be improved from two aspects. The first one is to place the base station so that each point of the positioning area is covered by at least 4 base stations at the same time. The second is to reduce the GDoP (Geometric Dilution of Precision), thereby reducing the average positioning error of the space [9]. The location selection of base station in space is always regarded as NP-hard problem [10]. Finding the best base station layout scheme is still challenging, even if the search space is roughly represented, the enumeration search is invalid [9]. Therefore, this type of problem only solves approximate or suboptimal solutions [11]. Heuristic algorithm can improve the search speed [12]. The existing base station layout algorithm can be divided into two base station layout optimization methods based on random geometry and heuristic search.
In terms of random geometry, Bais et al. [9] laid out indoor base stations in a square shape, which solved the problem of signal coverage and improved the positioning error. However, the irregularity of buildings makes all base stations have a square layout difficulty. In literatures [13,14,15,16], many existing methods consider the localization performance of one or several specific points. Andrews et al. [17] model the layout of a cellular network using a homogeneous Poisson point process. The scenario modeling of the base station location in the cellular network means that the deployed base stations are completely independent of each other. The work of Zhou et al. [18] is an extension of these methods. They studied placing four base stations in a rectangular area, and research on positioning performance and effect. A solution based on Monte Carlo simulation is proposed for the difficulty of problem analysis. It is also confirmed by Chen et al. [19] that the optimal placement of the four base stations is rectangular.
Base station layout intelligent algorithm based on heuristic search is more adaptable and easily to model [20]. Zhang et al. [21] proposed a solution based on Simulated Annealing (SA) algorithm, but the initial value of "temperature" and the rate of decline in the simulated annealing algorithm need to be repeated several times to determine. Pereira et al. [22] used the particle swarm optimization algorithm based on the idea of group intelligent optimization to apply to the base station optimization problem, which is easy to modify the objective function, and can be implemented in parallel with good scalability. However, because the population loses more diversity information in the search space, it is easy to fall into the local optimal solution. Meng et al. [23] proposed the introduction of the Pareto optimal domain based on the traditional genetic algorithm (GA) layout scheme, and proposed a high-performance NSGA-II algorithm. This algorithm is a heuristic search algorithm and is also easily rewritten as a parallel processing version.
The single algorithm has its own performance defects, which leads to unsatisfactory optimization results. Therefore, domestic and foreign scholars put forward the improved fusion optimization algorithm strategy. Literature [24] also proposed the base station optimization scheme based on Genetic Algorithm. However, all of them have the characteristics of weak global search ability and easy to fall into the local optimal solution. Wang et al. [25] optimized the base station layout using adaptive genetic algorithm and ant colony algorithm (AG-AC). Firstly, the cross and mutation probabilities in the traditional genetic algorithm are adjusted to make them constantly change with the iteration of the algorithm, so can achieve the purpose of self-adaptation and generate the primary network element layout. Then, adaptive ant colony algorithm is applied on the primary network element layout to change the pheromone of traditional ant colony algorithm into a variable that changes constantly with the iteration of the algorithm, so as to reduce the risk of ant colony algorithm falling into local optimal solution and then generate the final network element layout. The average error after two steps' optimization is significantly improved compared with that before fusion. But this approach is simply a splicing of two algorithms, and the convergence speed of the algorithm is not significantly improved. Gharghan et al. [26] proposed hybrid Particle Swarm Optimization-Artificial Neural Network (PSO-ANN). This algorithm adopts the feedforward neural network model and uses the Levenberg-marquardt training algorithm to estimate the distance between the moving node and the anchor node. Although the positioning accuracy is improved, the training of feedforward neural network needs a lot of samples; otherwise, it cannot converge to the global minimum or the local minimum with good enough. In terms of continuous optimization, Ying Gao et al. [27] first introduced the idea of annealing particle swarm optimization. This algorithm combines the advantages of PSO algorithm, such as global optimization ability, fast calculation speed and simple implementation, and the simulated annealing algorithm's ability to jump out of local optimal solution. It avoids the disadvantage of PSO falling into local extremum and improves the convergence speed of PSO at the later stage of evolution. Zhang et al. [28] proposed a hybrid simulated annealing genetic optimization algorithm in order to improve the convergence speed of the genetic algorithm. In the early stage, the standard genetic algorithm was adopted for optimization, and the optimized results of the genetic algorithm were annealed. Although the algorithm improved the positioning accuracy, it cannot converge to the extreme point in the later stage, which makes the algorithm unstable.
According to the needs of positioning, this paper optimizes the location of multiple network elements in space to ensure the coverage of positioning signals and improve the positioning stability and accuracy of terminals. Firstly, the optimization model of the network element is established, and the optimization problem of the network element layout is transformed into simple discrete optimization problem. Then, according to the model and the disadvantages and advantages of the single algorithm, an improved adaptive genetic annealing fusion optimization algorithm is proposed. This algorithm takes the improved AGA as the main body of the algorithm, and integrates the improved simulated annealing mechanism to further adjust and optimize the population, so can improve the convergence speed and optimization quality of the algorithm.
Network element optimization layout model
Signal coverage in indoor location is an important indicator. In the three-dimensional (3D) indoor positioning, the to-be-positioned point receives at least four network element transmission signals, which can be regarded as effective coverage. The 3D indoor space is modeled using the probing model and K-coverage [11, 29], and the spatial positioning signal coverage is finally calculated. The Euclidean distance is used to solve the positioning error of each terminal, and the minimum positioning error is taken as the objective function of optimization.
3D coverage rate model
Detecting Model
There are N network elements, and the coordinates of the ith network element are set to Ai(xi, yi, zi), i = (1, 2, ..., N). Set the detection radius of the network element Ai(xi, yi, zi) as ri. Then, the detection area of the network element is the spherical area with the radius ri, where the location of the network element Ai is located at (xi, yi, zi).
$$ {V}_i:{\left(x-{x}_i\right)}^2+{\left(y-{y}_i\right)}^2+{\left(z-{z}_i\right)}^2\le {r_i}^2 $$
Vi is the coverage detection region of the network element Ai, that is, the effective region is denoted as Vei. The region other than Vi is the undetectable region of the network element Ai. Set the target 3D space region as V, then the effective region Vei = Vi ∩ V.
Let network element layout is S, then S = (A1, A2, A3, ..., AN). The detection area of each network element is Vi, i = (1, 2, ..., N), and the total detection area of N network elements is \( \underset{i=1}{\overset{N}{\cup }}{V}_i \). The effective detection area of each network element is set to Vei, i = (1, 2, ..., N), then Vei = Vi ∩ V. Set the effective total detection region of N network element as \( \underset{i=1}{\overset{N}{\cup }}V{e}_i \).
K-coverage method
The ranging-based 3D positioning algorithm receives at least four network element signals at the same time. In this case, K-coverage effective positioning point is used, that is, K ≥ 4 is the effective coverage; otherwise, there is coverage vulnerability. For the irregularities of complex and diverse indoor space shapes, this paper uses the cube segmentation method to segment the location region.
Set the indoor positioning space area as V and the side length of the cube is l.The region V is divided into M small cube regions, i.e., \( M=\frac{V}{l^3} \). Set the body centered coordinates of each small cube region asBj(xj, yj, zj), j = (1, 2, ..., M), and replace the small cube area with the cube center. Therefore, the coverage of the network element for each small cube can be approximated as the coverage of the cube center Bj(xj, yj, zj). S is used for a network element layout, S = (A1, A2, A3, ..., AN) and the network element coordinates are Ai(xi, yi, zi), i = (1, 2, ..., N).When (xi − xj)2 + (yi − yj)2 + (zi − zj)2 ≤ ri2, Bj is considered to be covered by network element Ai. Let the variable Kij denote the case where Bj is covered by the network element Ai, where j = (1, 2, ..., M), i = (1, 2, ..., N). Then, the expression Kij is shown in (2):
$$ {K}_{ij}=\Big\{{\displaystyle \begin{array}{cc}1& {\left({x}_i-{x}_j\right)}^2+{\left({y}_i-{y}_j\right)}^2+{\left({z}_i-{z}_j\right)}^2\le {r_i}^2\\ {}0& {\left({x}_i-{x}_j\right)}^2+{\left({y}_i-{y}_j\right)}^2+{\left({z}_i-{z}_j\right)}^2>{r_i}^2\end{array}} $$
According to the definition of K-coverage, the coverage number of the target node Bj is Kj, and the target node Bj is overwritten by the network element Kj. The expression of Kj is as shown in Formula (3):
$$ {K}_j=\sum \limits_{i=1}^M{K}_{ij} $$
In indoor positioning, each node to be located should be covered by at least four network elements. Let, the variable Ej indicate whether point Bj is effectively covered. While the value of Ej is equal to 1, that means the point Bj is effectively covered, vice versa. The expression of Ej is shown in (4):
$$ {E}_j=\Big\{{\displaystyle \begin{array}{cc}1& {K}_j\ge 4\\ {}0& {K}_j<4\end{array}} $$
The area coverage expression under the network element layout S is shown in formula (5):
$$ {f}_c(S)=\frac{\sum \limits_{j=1}^{n^3}{E}_j}{n^3} $$
Network element layout
In the network element layout process, Ai(xi, yi, zi) is used for the coordinates of the ith network element, where i = 1, 2, ..., N. M nodes are collected to represent the users in the indoor environment, the coordinate of the jth user is Bj(xj, yj, zj) and the measurement coordinate of the jth user is \( \overset{\wedge }{B_j}\left(\overset{\wedge }{x_j},\overset{\wedge }{y_j},\overset{\wedge }{z_j}\right) \). It is assumed that the positioning probability of each positioning node is the same. DefineS″, P, and \( \hat{P} \) as (A1, A2, A3, ..., AN),(B1, B2, ..., BM), and \( \left({\hat{B}}_1,{\hat{B}}_2,...,{\hat{B}}_M\right) \). S represents the layout scheme of N network elements, P denotes the position of all points of M users in the indoor environment while \( \hat{P} \) is the measurement position.
In the case of network element layout S, the average positioning error of M users in the indoor environment is:
$$ f(S)=\frac{1}{M}\sum \limits_{j=1}^M{\mathrm{Erro}}_j(S) $$
f(S) is used for the average positioning error. In the indoor positioning, it is determined whether each positioning point receives signals of at least 4 network elements at the same time by determining whether the value of E that corresponds to each user is 1. When the user's E value is 1, the positioning accuracy is maximized, where Erroj(S) is used for the positioning error of the jth user [22].
By measuring the distance between the network element and the user, the positioning accuracy is calculated, and the layout of the network element is evaluated, finally obtain the optimized layout results.
ASA-GA algorithm design
Design of the adaptive simulated annealing (ASA) algorithm
Simulated annealing algorithm is a common heuristic algorithm whose performance largely depends on its components and parameters. It mainly includes the methods of generating new states, the design of cooling control function, and the termination conditions of the algorithm [30]. The traditional simulated annealing algorithm is used in the open-loop control mode, so the neighborhood search results have no feedback effect on the annealing process. This paper presents a fast adaptive simulated annealing algorithm, which adopts closed-loop feedback control to combine the neighborhood search and temperature control by selecting appropriate methods and algorithm termination conditions for generating new states. The algorithm can dynamically determine the temperature parameters and the changes in the number of different neighborhood searches. The specific process after improvement is as follows:
In space V, the base station layout S represents a feasible solution, and the energy function is the average error of the positioning, also called the objective function of the algorithm optimization. As shown in Eq. (6), the minimization of the objective function is the optimal solution of the layout. As for the cooling progress function, this paper uses the exponential cooling strategy to control, expressed as:
$$ T(k)={T}_0{\alpha}^{k^{1/2}} $$
where T0 is the initial temperature, and k is the temperature drop coefficient.
Let P be the state transition probability, indicating the probability of going from one base station S to another base station layout S′, related to the current temperature parameter Ti, where Ti representing the temperature of the ith iteration, expressed as:
$$ P=\Big\{{\displaystyle \begin{array}{ll}1,& f\left(S\hbox{'}\right)\le f(S)\\ {}\exp \left[f(S)-f\left(S\hbox{'}\right)/{T}_i\right],& f\left(S\hbox{'}\right)>f(S)\end{array}} $$
Since using the simulated annealing algorithm solves the new solution in the neighborhood area of the current solution, to ensure that the individual of the new solution does not exceed the boundary, the boundary station number q' in the new solution is converted as follows:
$$ q\hbox{'}=\Big\{{\displaystyle \begin{array}{ll}q+\left({q}_{\mathrm{right}}-q\right)\delta \left({T}_i\right)\varepsilon, & U\left(0,1\right)=0\\ {}q-\left(q-{q}_{\mathrm{left}}\right)\delta \left({T}_i\right)\varepsilon, & U\left(0,1\right)=1\end{array}} $$
where qleft and qright are respectively the minimum and maximum of the base station number, ε is the random number between (0, 1). U(0, 1) is the control value of randomly selecting 0 or 1, δ(Ti) is the disturbance quantity, which decreases with the decrease of Ti, and finally with δ(Ti) → 0 the algorithm converges.
Design of the adaptive genetic algorithm with annealing thought
Fitness function
The goal of the network element optimization layout is to improve the positioning accuracy, that is, the positioning error of the point to be located is the smallest. Therefore, the population fitness function can be expressed as Eq. (6).
Selecting operation
In the optimization of network element layout based on adaptive genetic algorithm, the selection operation is to select a good individual from the population, where the probability of individuals being selected is expressed as:
$$ {P}_k=1-\frac{f_k}{\sum \limits_{i=1}^n{f}_i} $$
In which, Pk is the probability that the population individual Sk is selected, and fi is the fitness function of population individual Sk.
Adaptive selection of crossover operators
This paper makes the following adaptive improvements for crossover probabilities:
$$ {p}_c=\Big\{{\displaystyle \begin{array}{c}{k}_1\cdot \frac{f^{\prime }-{f}_{min}}{f_{avg}-{f}_{min}}\kern0.5em ,\kern0.5em {f}^{\prime}\le {f}_{\mathrm{avg}}\\ {}{k}_2\kern0.5em ,\kern0.5em {f}^{\prime }>{f}_{\mathrm{avg}}\end{array}} $$
where favg represents the average fitness value of all individuals in the population, and fmin represents the minimum fitness value of individuals in the population, that is, the minimum positioning error under the network element layout. f′ is used for the fitness value of the current network element layout, that is, the average positioning error under the current network element layout. It can be seen from formula (11) that the smaller the value of favg − fmin is, the closer favg is to fmin, and the more the layout of network elements is to the optimal solution. According to the network element optimization scenario, network element layout individuals with lower fitness are assigned lower pc, is conducive to the preservation of good individuals. On the contrary, network element layout individuals with higher fitness are assigned larger pc. The crossover probability of individuals is not only determined by favg − fmin, the closer favg is to fmin, the smaller the error caused by network element layout. As shown in algorithm 1. The variable pop represents population, pop_size represents population size, chromo represents chromosome while chromo_size represents chromosome length.
Adaptive selection of mutation operators
According to the adaptability of the current network element layout and the average fitness of the entire population, the adaptive mutation probability is selected. If the current layout fitness is greater than the average fitness, the mutation probability is smaller. On the contrary, the mutation probability is larger. The adaptive mutation probability formula is as follows:
$$ {p}_m=\Big\{{\displaystyle \begin{array}{c}{k}_3\frac{f^{\prime }-{f}_{min}}{f_{avg}-{f}_{min}}\kern0.5em ,\kern0.5em {f}^{\prime}\le {f}_{avg}\\ {}\begin{array}{ccc}{k}_4&, & {f}^{\prime }>{f}_{avg}\end{array}\end{array}} $$
Similar as Formula (11), when the fitness of the individual obtained by the variation is greater than the fitness of the current individual layout, the variation result is accepted. What is different is that annealing steps are added into the adaptive mutation operation here. When the individual mutates, the same as formula (8), the mutation operation is accepted with a probability exp[(f″ − f′) × (1/G)], where f″ is used for the fitness value of the individual after the variation, that is the average positioning error. G is used for the number of current generations.
The purpose of improving the adaptive genetic algorithm is to prevent GA from falling into the local optimal solution. The algorithm uses the network element layout below the average fitness to find the optimal solution in the indoor positioning space. The network elements layout in this situation needs to be completely disrupted, so the value of k4 is set as 0.5, similarly, the value of k3 is also set as 0.5. Set k1 = k2 = 1.0 at the same time.
ASA-GA algorithm
The genetic algorithm can find an optimal solution from the whole, but it may fall into a local optimum, which can be avoided by the simulated annealing algorithm. In view of the shortcomings of slow convergence and poor quality of single intelligent optimization algorithm, a network layout ASA-GA optimization algorithm is proposed. The algorithm firstly adaptively improves the simulated annealing algorithm and the genetic algorithm. Secondly, the adaptive genetic algorithm is used as the optimization body of the ASA-GA algorithm. In addition, annealing mechanism is added into the adaptive mutation operation. The specific steps of ASA-GA algorithm fusion are as follows:
Set the initial value of the algorithm's parameters, including population size n, the number of reproductive generations G, crossover probability pc and mutation probability pm, and the T0 and k value of simulated annealing algorithm, and set an generation number Tg.
Calculate the fitness of the individuals in the generated population, and record the optimal individuals, select the paired individuals according to Eq. (10) Perform the adaptive crossover and mutation operations on the individuals using Eqs. (11) and (12). The fitness of each individual in the newly generated population was calculated, and the individual with high fitness value was selected for the transfer step III to conduct annealing optimization gene operation.
Let us make the cycle counter of simulated annealing algorithm t, the simulated annealing operation is performed on the individuals with high fitness in the new population and optimize individual genes. Calculate the disturbance value according to (8). The probability is accepted according to the formula (9). If accepted, t = t + 1, otherwise, t remains unchanged. The individuals with the least fitness in the population were replaced with the layout results after annealing.
If the reproductive generation number Tg is less than G, then Tg = Tg + 1, return II, otherwise, end the optimization process and output the result of base station layout.
Simulation experiment
In the simulation experiment of network element layout, the experimental scene was set as a "double L" region, with a total length of 24.1 m and a width of 17.8 m. According to the actual situation, for the sake of simplicity, in the layout of the network element, select 3 m from the ground height to divide the 1 m × 1 m grid. Due to the large number of multipath generated in the deployment of ceiling elements, the positioning error is greatly reduced. Therefore, the paper select a location near the wall as the deployment of base station, a total of 102 locations for network deployment. The selected position of the terminal is 1.2 m away from the ground, and the 3D grid of 1 m × 1 m × 1 m is divided. Two hundred sixty-four terminals are located at the center of gravity of the grid. The test scenario is shown in Fig. 1. The yellow dot indicates the network element location to be installed and the green dot indicates the user terminal location.
Simulation optimization scenario. In the simulation experiment of network element layout, the experimental scene was set as a "double L" region, with a total length of 24.1 m and a width of 17.8 m. According to the actual situation, select 3 m from the ground height to divide the 1 m × 1 m grid. A location near the wall was selected as the deployment of base station, a total of 102 locations for network deployment. The selected position of the terminal is 1.2 m away from the ground, and the 3D grid of 1 m × 1 m × 1 m is divided. Two hundred sixty-four terminals are located at the center of gravity of the grid. The yellow dot indicates the network element location to be installed and the green dot indicates the user terminal location
Before the simulation analysis, the parameters of the ASA-GA fusion algorithm are initialized. In the experimental simulation, eight network elements are pre-installed in the positioning area. Both the simulated annealing algorithm and the ant colony algorithm are iterated for 50 times. The population of genetic algorithm is set as 50, the genetic times are set as 50, and the number of ants of ant colony algorithm is set as 50. The algorithm parameter settings are shown in Table 1.
Table 1 Algorithm parameter settings
In the above positioning scenario, there are 8 optimal deployment positions number of network elements, namely, 16, 29, 33, 43, 81, 88, 95, and 98. The first diameter coverage of the positioning area after the layout optimization of network elements is shown in Fig. 2.
The first diameter coverage. In the above positioning scenario, there are 8 optimal deployment positions number of network elements, namely, 16, 29, 33, 43, 81, 88, 95, 98. The number of the first diameter that the user terminal received is represented by the digit of corresponding location
According to the coverage of the first diameter in Fig. 2, the number of the first diameter is mainly 5~7. Compared with the positioning signal coverage before the layout optimization of the network element, the coverage rate statistical diagram is shown in Fig. 3.
The first diameter coverage statistics chart. The brown columns represent the positioning signal coverage before the layout optimization of the network element. The blue columns represent the positioning signal coverage after the layout optimization of the network element
It can be seen from the above figure that before the network element layout is optimized, the effective positioning signal coverage rate of that the first diameter (i.e., direct arrival signals) number is greater than or equal to 4 is about 90%, and about 10% of the signal coverage holes exist. After adopting the ASA-GA fusion optimization algorithm proposed in this paper, the positioning signal coverage rate reaches 100% coverage, and the number of first diameter is mainly distributed from 5 to 7. Through the experiment, the change of positioning accuracy before and after the layout of the network element is obtained, and the positioning accuracy is obviously improved with the increase of the coverage rate. The changes in coverage rate and positioning accuracy before and after optimization are shown in Table 2.
Table 2 Changes before and after optimization of network element layout
After the optimization of network elements layout, the effective positioning signal coverage rate increased from 89.77 to 100%, the coverage rate increased by 10.23%, and the average positioning error decreased from 2.874 to 0.983 m. According to the cumulative distribution of positioning errors before and after network element layout optimization shown in Fig. 4, the number of terminals with positioning errors within 1 m accounts for about 68% of the total, and the number of terminals with positioning errors within 2 m accounts for about 85%. The ASA-GA algorithm's positioning accuracy and coverage rate have been significantly improved, which proves the effectiveness of the ASA-GA hybrid optimization algorithm proposed in this paper.
Error accumulation distribution diagram before and after optimization of network elements layout. The curve with squares represents before network element layout optimization and the curve with stars represents after ASA-GA algorithm optimization
Comparing the proposed ASA-GA fusion optimization algorithm with the AGA and AG-AC joint optimization algorithm proposed in [25], the cumulative error distribution of the three algorithms is shown in Fig. 5. After the optimization of ASA-GA fusion algorithm, the positioning error of target nodes within 3 m accounts for about 94% of the total, within 2 m accounts for about 85%, and within 1 m accounts for about 68%, which is about 16% better than that of AG-AC algorithm and 22% better than AGA algorithm.
Three optimization algorithms positioning error accumulation distribution diagram. The curve with circles represents after AG-AC algorithm optimization, the curve with stars represents after ASA-GA algorithm optimization, and the curve with crosses represents after AGA algorithm optimization
The variation of positioning signal coverage during the optimization of the three algorithms is shown in Fig. 6. Experimental results show that ASA-GA fusion algorithm needs to be inherited for 25 generations, AG-AC algorithm needs to be inherited for 30 generations, and AGA needs to be propagated for 38 generations to achieve full coverage.
Comparison of convergence of the three algorithms. The green curve with squares represents after AG-AC algorithm optimization, the curve with stars represents after ASA-GA algorithm optimization, and curve with diamonds represents after AGA algorithm optimization
Due to the complexity and the characteristics of non-line-of-sight of indoor scene, in the application of cellular network positioning, the influence of indoor three-dimensional positioning signal coverage and positioning accuracy has not been considered in the base station deployment. This paper proposes an adaptive genetic algorithm based on fusion annealing indoor positioning base station layout method. This algorithm adopts an adaptive way to control crossover and mutation probability, avoids sharp rise and fall, and makes the crossover and mutation operations tend to be stable, thus ensuring the stability of the algorithm, improving the convergence speed, and the simulating annealing mechanism is integrated into the internal genetic algorithm, which overcomes the shortcomings of falling into the local optimal solution. Through simulation experiments, the positioning accuracy of the ASA-GA algorithm is more than 80%. Compared with AG-AC and AGA algorithms, within 2 m, the performance is improved by about 12% and 21%, respectively. The convergence rate is significantly improved compared with the above algorithms, and the localization signal coverage rate is significantly improved compared with AGA algorithm.
In the future, we will further study the influence of multipath in positioning, minimize the multipath effect in combination with multi-objective optimization, properly compensate the attenuated positioning signal in positioning, and analyze the specific factors that influence the positioning accuracy caused by multipath effect.
The datasets used and analyzed during the current study are available from the corresponding author on a reasonable request.
NLOS:
Non-line of sight
ASA-GA:
Adaptive Simulated Annealing and Genetic Algorithm
AG-AC:
Adaptive Genetic Combining Ant Colony
AGA:
Adaptive Genetic Algorithm
OAM:
Orbital angular momentum
Wireless Fidelity
UWB:
Ultra wide band
3D:
GDoP:
Geometric Dilution of Precision
NP:
Non-deterministic polynomial
PSO-ANN:
Particle Swarm Optimization - Artificial Neural Network
Zhou M, Zhang Q, Tian Z, et al. IMLours: Indoor mapping and localization using time-stamped WLAN received signal strength// Wireless Communications and NETWORKING Conference. IEEE, 2015:1817-1822.
M. Terán, J. Aranda, H. Carrillo, et al., IoT-based system for indoor location using bluetooth low energy//Communications and Computing (COLCOM), 2017 IEEE Colombian Conference on. IEEE, 1–6 (2017)
M. Chen, L. Jiang, W. Sha, Ultrathin complementary metasurface for orbital angular momentum generation at microwave frequencies[J]. IEEE Trans Antennas Propagation 65(1), 396–400 (2017)
Chen M , Jiang L J , Sha W. Detection of orbital angular momentum with metasurface at microwave band. IEEE Antennas Wireless Propagation Lett, 2017:1-1.
X. Zeng, W. Lin, A kind of improved fingerprinting indoor location method based on WiFi//AIP Conference Proceedings. AIP Publishing 1864(1), 020052 (2017)
C. Yara, Y. Noriduki, S. Ioroi, et al., Design and implementation of map system for indoor navigation - an example of an application of a platform which collects and provides indoor positions// IEEE International Symposium on Inertial Sensors and Systems. IEEE, 1–4 (2015)
Z. Yifan, Z. Zhifeng, Towards 5G: heterogeneous cellular network architecture design based on intelligent SDN paradigm. Telecommun Sci 32(6), 28 (2016)
J. Wu, Q. Gang, K. Pengbin, Emerging 5G multi-carrier chaotic sequence spread spectrum technology for underwater acoustic communication. Complexity (2018)
A. Bais, H. Kiwan, Y. Morgan, On optimal placement of short range base stations for indoor position estimation[J]. J. Appl. Res. Technol 12(5), 886–897 (2014)
S. Yang, F. Dai, M. Cardei, et al., On connected multiple point coverage in wireless sensor networks[J]. Int. J. Wireless Inf. Networks 13(4), 289–301 (2006)
M. Hefeeda, M. Bagheri, Randomized k-coverage algorithms for dense sensor networks//INFOCOM 2007. 26th IEEE International Conference on Computer Communications. IEEE, 2376–2380 (2007)
Kalantari E, Yanikomeroglu H, Yongacoglu A. On the number and 3-D placement of drone base stations in wireless cellular networks//Vehicular Technology Conference (VTC-Fall), 2016 IEEE 84th. IEEE, 2016: 1-6.
I. Sharp et al., "Eanalysis for positioning system design," IEEE Trans. Vehicular Technol 58(7), 3371–3382 (2009)
N. Levanon, "Lowest GDOP in 2-D scenarios," IEE Proc. Radar Sonar Novigation 147(3), 149–155 (2000)
M.A. Spirito, On the accuracy of cellular mobile station location estimation. IEEE Trans Vehicular Technol 50(3), 674–685 (May 2001)
C.-H. Chen, An arrival time prediction method for bus system. IEEE Internet Things J. 10(5), 4231–4232 (2018)
J.G. Andrews, A.K. Gupta, H.S. Dhillon, A primer on cellular network analysis using stochastic geometry. arXiv preprint arXiv 1604, 03183 (2016)
J. Zhou et al., Landmark placement for wireless localization in rectangular-shaped industrial facilities. IEEE Trans Vehicular Technol 59(6), 3081–3090 (Jul 2010)
Y. Chen et al., in Thirrd Annual IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks. A practical approach to landmark deployment for indoor localization (2006), pp. 365–373
C. Shijun, H. Wang, C. Dawei, et al., Base Station layout optimization algorithm based on improved tabu search[J]. Comp Eng Sci 40(02), 341–347 (2018)
H. Zhang, S. Zhang, W. Bu, A clustering routing protocol for energy balance of wireless sensor network based on simulated annealing and genetic algorithm. Int J Hybrid Inf Technol 7(2), 71–82 (2014)
M.B. Pereira, F.R.P. Cavalcanti, T.F. Maciel, Particle swarm optimization for base station placement//Telecommunications Symposium (ITS), 2014 International. IEEE, 1–5 (2014)
Meng H, Long F, Guo L, et al. Cooperating base station location optimization using genetic algorithm//Control and Decision Conference (CCDC), 2016 Chinese. IEEE, 2016: 4820-4824.
J. Munyaneza, A. Kurien, Optimization of antenna placement in 3G networks using genetic algorithm[J]. Commun Inf Technol 36(5), 70–80 (2009)
H. Wang, L. Xiubing, L. Hongwu, et al., Method of diamond supplement for indoor micro base station placement. J Beijing Univ Posts Telecommun 41(1), 51–58, 87 (2018)
S.K. Gharghan, R. Nordin, M. Ismail, et al., Accurate wireless sensor localization technique based on hybrid PSO-ANN algorithm for indoor and outdoor track cycling[J]. IEEE Sensors J. 16(2), 529–541 (2016)
Y. Gao, S.L. Xie, Particle swarm optimization algorithms based on simulated annealing. Comput Eng Appl 40, 47–50 (2004)
Zhang Q, Wang J, Jin C, et al. Localization algorithm for wireless sensor network based on genetic simulated annealing algorithm//Wireless Communications, Networking and Mobile Computing, 2008. WiCOM'08. 4th International Conference on. IEEE, 2008: 1-5.
C. Liu, G. Cao, Spatial-temporal coverage optimization in wireless sensor networks[J]. IEEE Trans. Mob. Comput. 10(4), 465–478 (2011)
Z. Hao, T. Ran, L. Zhi-yong, et al., A feature selection method based on adaptive simulated annealing genetic algorithm [J]. Acta Armam 30(1), 81–85 (2009)
This research was funded by 3 foundation items: National Natural Science Foundation of China (No. 61872104 and 61901134), the National Science and Technology Major Project of China (No. 2016ZX03001023-005), and Basic business project in education department of Heilongjiang province of China (No. 135109243).
College of Computer Science and Technology, Harbin Engineering University, Harbin, Heilongjiang province, China
Xiao-min Yu, Hui-qiang Wang, Hong-wu Lv, Xiu-bing Liu & Jin-qiu Wu
College of Computer and Control Engineering, Qiqihar University, Qiqihar, Heilongjiang province, China
Xiao-min Yu
Hui-qiang Wang
Hong-wu Lv
Xiu-bing Liu
Jin-qiu Wu
XMY contributed in investigation, methodology, draft manuscript writing, manuscript reviewing, and editing. HQW contributed in the overall design and network element optimization layout model. HWL contributed in the design of models and algorithms, reviewing and editing the manuscript, and funding acquisitions. XBL contributed in software and hardware development, simulations, result analysis and reviewing and editing the manuscript. JQW contributed in reviewing and editing the manuscript. All authors read and approved the final manuscript.
Correspondence to Hui-qiang Wang.
Competing interest
Yu, Xm., Wang, Hq., Lv, Hw. et al. A fusion optimization algorithm of network element layout for indoor positioning. J Wireless Com Network 2019, 284 (2019). https://doi.org/10.1186/s13638-019-1597-8
Non-Line of Sight (NLOS)
K-Coverage
Fusion algorithm
Base station layout
Convergence rate
|
CommonCrawl
|
The linear mass density of a nonuniform wire under constant tension decreases gradually along the wire so that an incident wave is transmitted without reflection. The wire is uniform for $-\infty < x < 0$. In this region, a transverse wave has the form $$y(x, t) = 0.003 \cos (25x — 50t)$$ where $y$ and $x$ are in meters and $t$ is in seconds. In the region $0 < x < 20$ the linéar mass density decreases gradually from $\mu$ to $\frac14 \mu$. For $20 < x < +\infty$ the linear mass density is $\frac14 \mu$. Then prove that for $x > 20$ the wave equation is $$ y(x,t) = 0.0042 \cos (12.5 x — 50 t)$$
edited May 15 by sammy gerbil
Good so far. What about amplitude? There is no reflection because the change in mass density is continuous. Therefore the power transmitted by the wave must be the same in both regions $x<0$ and $x>20$. See http://hyperphysics.phy-astr.gsu.edu/hbase/Waves/powstr.html.
commented May 15 by sammy gerbil (28,466 points)
|
CommonCrawl
|
Session 7-A: Communication in Challenging Environments Session 8-A: Localization I Session 9-A: IoT II Session 10-A: Localization II
Session 7-B: Network Modeling Session 8-B: Trusted Systems Session 9-B: Data Management Session 10-B: Adaptive Algorithms
Session 7-C: Security III Session 8-C: Security IV Session 9-C: Security V Session 10-C: Security VI
Session 7-D: Network Intelligence V Session 8-D: Video Streaming Session 9-D: Privacy II Session 10-D: Network Intelligence VI
Session 7-E: Network Economics Session 8-E: Load Balancing Session 9-E: Routing Session 10-E: Cloud Computing
Session 7-F: UAV II Session 8-F: Wireless Charging Session 9-F: LoRa Session 10-F: WiFi and Wireless Sensing
Session 7-G: SDN III Session 8-G: Edge Computing II Session 9-G: SDN IV Session 10-G: Edge Computing III
Communication in Challenging Environments
Jul 9 Thu, 9:00 AM — 10:30 AM EDT
MAGIC: Magnetic Resonance Coupling for Intra-body Communications
Stella Banou, Kai Li and Kaushik Chowdhury (Northeastern University, USA)
This paper proposes MAGIC that uses magnetic resonant (MR) coupling for intra-body communication between implants and wearables. MAGIC includes not only the hardware-software design of the coupled coils and methods of manipulating the magnetic field for relaying information, but also the ability to raise immediate emergency-related alerts. MR coupling makes the design of the transmission link robust to channel-related parameters, as the magnetic permeability of skin and muscle is close to that of air. Thus, changes in tissue moisture content and thickness does not impact the design, which is a persistent problem in other approaches for implant communications like RF, ultrasound and galvanic coupling (GC). The paper makes three main contributions: It develops the theory leading to the design of the information relaying coils in MAGIC. It proposes a systems-level design of a communication link that extends up to 50cm with a low expected BER of 10^-4. Finally, the paper includes an experimental setup demonstrating how MAGIC operates in air and muscle tissue, as well as a comparison with alternative implant communication technologies, such as classical RF and GC. Results reveal that MAGIC offers instantaneous alerts with up to 5 times lower power consumption compared to other forms of communication.
Dynamically Adaptive Cooperation Transmission among Satellite-Ground Integrated Networks
Feilong Tang (Shanghai Jiao Tong University, China)
It is desirable goal to fuse satellite and ground integrated networks (SGINs) to improve the resource utilization efficiency. However, existing work did not consider how to integrate them as a whole network because they lack of function-configurable network management and efficient cooperation among satellite and ground networks. In this paper, we firstly propose a SDN based network architecture that manages and schedules SGIN resources in the layered and on-demand way. Then, we formulate the dynamical cooperation transmission in SGINs as an optimization problem and prove its NP hardness. Finally, we propose a Satellite-Ground Cooperative Transmission (SGCT) algorithm based on dynamical cooperation among satellite and ground networks, which is network-aware and workload-driven. Comprehensive experiment results demonstrate that our approach outperforms related schemes in terms of network throughput, end-to-end delay, transmission quality and load balancing.
Synergetic Denial-of-Service Attacks and Defense in Underwater Named Data Networking
Yue Li and Yingjian Liu (Ocean University of China, China); Yu Wang (Temple University, USA); Zhongwen Guo, Haoyu Yin and Hao Teng (Ocean University of China, China)
Due to the harsh environment and energy limitation, maintaining efficient communication is crucial to the lifetime of Underwater Sensor Networks (UWSN). Named Data Networking (NDN), one of future network architectures, begins to be applied to UWSN. Although Underwater Named Data Networking (UNDN) performs well in data transmission, it still faces some security threats, such as the Denial-of-Service (DoS) attacks caused by Interest Flooding Attacks (IFAs). In this paper, we present a new type of DoS, named as Synergetic Denial-of-Service (SDoS). Attackers synergize with each other, taking turns to reply to malicious Interests as late as possible. SDoS attacks will damage the Pending Interest Table (PIT), Content Store (CS), and Forwarding Information Base (FIB) in routers with high concealment. Simulation results demonstrate that the SDoS attacks quadruple the increased network traffic compared with normal IFAs and the only currently existing IFA detection algorithm in UNDN is completely invalid to SDoS attacks. In addition, we analyze the infection problem in UNDN and propose Trident: a defense method with adaptive threshold, burst traffic judgment and attacker identification. Simulation experiments illustrate that Trident can effectively detect and resist both SDoS attacks and normal IFAs. Meanwhile, Trident can robustly undertake burst traffic and congestion.
An Energy Efficiency Multi-Level Transmission Strategy based on underwater multimodal communication in UWSNs
Zhao Zhao, Chunfeng Liu, Wenyu Qu and Tao Yu (Tianjin University, China)
This paper concerns the data transmission strategy based on underwater multimodal communication for marine applications in underwater wireless sensor networks (UWSNs). Underwater data required by various applications have different values of information (VoI) depending on event type and event timeliness. These data should be transmitted in different time latency according to their VoI for accommodating both application requirements and network performance. Our objective is to design a multi-level transmission strategy by using underwater multimodal communication system so that multiple paths with transmission delay and energy consumption are provided for underwater data in UWSNs. For this purpose, we first define a minimum cost flow (MCF) model for the design of transmission strategy that considers time latency, energy efficiency, and transfer load. Then a distributed multi-level transmission strategy EMTS is given based on time backoff method for large-scale UWSNs. Finally we compared transmission latency, energy efficiency and network lifetime obtained by our EMTS and the optimum solution of the MCF model, a transmission algorithm based on greedy strategy. Although the latency of EMTS is slightly higher than that of other algorithms, our average network lifetime can reach 88.7% of that of the optimum solution of the MCF model.
Lan Wang (University of Memphis)
Network Modeling
Bound-based Network Tomography for Inferring Interesting Link Metrics
Huikang Li, Yi Gao, Wei Dong and Chun Chen (Zhejiang University, China)
Network tomography is an attractive methodology for inferring internal network states from accumulated path measurements between pairs of monitors. Motivated by previous results that identifying all link metrics can require a large number of monitors, we focus on calculating the performance bounds of a set of interesting links, i.e., bound-based network tomography. We develop an efficient solution to obtain the tightest upper bounds and lower bounds of all interesting links in an arbitrary network with a given set of end-to-end path measurements. Based on this solution, we further propose an algorithm to place new monitors over existing ones such that the bounds of interesting links can be maximally tightened. We theoretically prove the effectiveness of the proposed algorithms. We implement the algorithms and conduct extensive experiments based on real network topologies. Compared with state-of-the-art approaches, our algorithms can achieve up to 2.2-3.1 times more reduction on the bound intervals for all interesting links and reduce the number of placed monitors significantly in various network settings.
ProTO: Proactive Topology Obfuscation Against Adversarial Network Topology Inference
Tao Hou and Zhe Qu (University of South Florida, USA); Tao Wang (New Mexico State University, USA); Zhuo Lu and Yao Liu (University of South Florida, USA)
The topology of a network is fundamental for building network infrastructure functionalities. In many scenarios, enterprise networks may have no desire to disclose their topology information. In this paper, we aim at preventing attacks that use adversarial, active end-to-end topology inference to obtain the topology information of a target network. To this end, we propose a Proactive Topology Obfuscation (ProTO) system that adopts a detect-then-obfuscate framework: (i) a lightweight probing behavior identification mechanism based on machine learning is designed to detect any probing behavior, and then (ii) a topology obfuscation design is developed to proactively delay all identified probe packets in a way such that the attacker will obtain a structurally accurate yet fake network topology based on the measurements of these delayed probe packets, therefore deceiving the attacker and decreasing its appetency for future inference.
SpreadSketch: Toward Invertible and Network-Wide Detection of Superspreaders
Lu Tang (The Chinese University of Hong Kong, Hong Kong); Qun Huang (Peking University, China); Patrick Pak-Ching Lee (The Chinese University of Hong Kong, Hong Kong)
Superspreaders (i.e., hosts with numerous distinct connections) remain severe threats to production networks. How to accurately detect superspreaders in real-time at scale remains a non-trivial yet challenging issue. We present SpreadSketch, an invertible sketch data structure for network-wide superspreader detection with the theoretical guarantees on memory space, performance, and accuracy. SpreadSketch tracks candidate superspreaders and embeds estimated fan-outs in binary hash strings inside small and static memory space, such that multiple SpreadSketch instances can be merged to provide a network-wide measurement view for recovering superspreaders and their estimated fan-outs. We present formal theoretical analysis on SpreadSketch in terms of space and time complexities as well as error bounds. Trace-driven evaluation shows that SpreadSketch achieves higher accuracy and performance over state-of-the-art sketches. Furthermore, we prototype SpreadSketch in P4 and show its feasible deployment in commodity hardware switches.
Variational Information Diffusion for Probabilistic Cascades Prediction
Fan Zhou and Xovee Xu (University of Electronic Science and Technology of China, China); Kunpeng Zhang (University of Maryland, USA); Goce Trajcevski (Iowa State University, USA); Ting Zhong (University of Electronic Science and Technology of China, China)
Understanding in-network information diffusion is a fundamental problem in many application domains and one of the primary challenges is to predict the size of the information cascade. Most of the existing models rely either on hypothesized point process (e.g., Poisson and Hawkes process), or simply predict the information propagation via deep neural networks. However, they fail to simultaneously capture the underlying structure of a cascade graph and the propagation of uncertainty in the diffusion, which may result in unsatisfactory prediction performance.
To address these, in this work we propose a novel probabilistic cascade prediction framework: Variational Cascade (VaCas) graph learning networks. VaCas allows a non-linear information diffusion inference and models the information diffusion process by learning the latent representation of both the structural and temporal information. It is a pattern-agnostic model leveraging variational inference to learn the node-level and cascade-level latent factors in an unsupervised manner. In addition, VaCas is capable of capturing both the cascade representation uncertainty and node infection uncertainty, while enabling hierarchical pattern learning of information diffusion. Extensive experiments conducted on real-world datasets demonstrate that VaCas significantly improves the prediction accuracy, compared to state-of-the-art approaches, while also enabling interpretability.
Wei Bao (The University of Sydney)
Security III
A Dynamic Mechanism for Security Management in Multi-Agent Networked Systems
Shiva Navabi and Ashutosh Nayyar (University of Southern California, USA)
We study the problem of designing a dynamic mechanism for security management in an interconnected multi-agent system with N strategic agents and one coordinator. The system is modeled as a network of N vertices. Each agent resides in one of the vertices of the network and has a privately known security state that describes its safety level at each time. The evolution of an agent's security state depends on its own state, the states of its neighbors in the network and on actions taken by a network coordinator. Each agent's utility at time instant t depends on its own state, the states of its neighbors in the network and on actions taken by a network coordinator. The objective of the coordinator is to take security actions to maximize the long-term expected social surplus. Being strategic, agents need to be incentivized to reveal their private security state information. This results in a dynamic mechanism design problem for the coordinator. We leverage the inter-temporal correlations between the agents' security states to identify sufficient conditions under which an incentive compatible expected social surplus maximizing mechanism can be constructed. We describe construction of the desired mechanism in two special cases of our formulation.
KV-Fresh: Freshness Authentication for Outsourced Multi-Version Key-Value Stores
Yidan Hu and Rui Zhang (University of Delaware, USA); Yanchao Zhang (Arizona State University, USA)
Data outsourcing is a promising technical paradigm to facilitate cost-effective realtime data storage, processing, and dissemination. In such a system, a data owner proactively pushes a stream of data records to a third-party cloud service provider (CSP) for storage, which in turn processes various types of queries from end users on the data owner's behalf. This paper considers outsourced multi-version key-value stores that have gained increasing popularity in recent years, where a critical security challenge is to ensure the CSP return both authentic and fresh data in response to end users' queries. Despite several recent attempts on authenticating data freshness in outsourced key-value stores, they either incur excessively high communication cost or can only offer very limited real-time guarantee. To fill this gap, this paper introduces KV-Fresh, a novel freshness authentication scheme for outsourced key-value stores that offers strong real-time guarantee. KV-Fresh is designed based on a novel data structure, Linked Key Span Merkle Hash Tree, which enables highly efficient freshness proof by embedding chaining relationship among records generated at different times. Detailed simulation studies using real datasets confirm the efficacy and efficiency of KV-Fresh.
Modeling the Impact of Network Connectivity on Consensus Security of Proof-of-Work Blockchain
Yang Xiao (Virginia Tech, USA); Ning Zhang (Washington University in St. Louis, USA); Wenjing Lou and Thomas Hou (Virginia Tech, USA)
Popularized by Bitcoin, proof-of-work (PoW) blockchain is one of the most widely deployed distributed consensus systems nowadays. Driven by incentives, PoW-based blockchain allows for democratized consensus making with correctness guarantee, as long as majority of the participants in the network are honest and rational. However, such elegant game theoretical security model falls apart when it is deployed on systems with potentially adversarial components and network conditions. For distributed consensus protocol used in blockchain, network connectivity plays a crucial role in the overall security of the system. A well-connected adversary with a communication advantage over honest nodes has a higher chance of winning blockchain forks and harvesting higher-than-usual mining revenue. In this paper we evaluate the impact of network connectivity on PoW blockchain consensus security via modeling analysis. Specifically, we perform the analysis on two adversarial scenarios: 1) honest-but-potentially-colluding, 2) selfish mining. For each scenario we evaluate communication capability of networked nodes from the heterogeneous network connectivity pattern and analyze its impact on consensus security of the underlying blockchain. Our analysis serves as a paradigm for future endeavors that seek to link blockchain security with network connectivity.
Scheduling DDoS Cloud Scrubbing in ISP Networks via Randomized Online Auctions
Wencong You, Lei Jiao and Jun Li (University of Oregon, USA); Ruiting Zhou (Wuhan University, China)
While both Internet Service Providers (ISPs) and third-party Security Service Providers (SSPs) offer Distributed Denial-of-Service (DDoS) mitigation services through cloud-based scrubbing centers, it is often beneficial for ISPs to outsource part of the traffic scrubbing to SSPs to achieve less economic cost and better network performance. To explore this potential, we design an online auction mechanism, featured by the challenge of the switching cost of using different winning bids over time. Formulating the social cost minimization as a nonconvex integer program, we firstly relax it and design an online algorithm that breaks it into a series of modified single-shot problems and solves each of them in polynomial time, without requiring knowledge of future inputs; then, we design a randomized rounding algorithm to convert the fractional decisions into integers without violating any constraints; and finally, we design the payment for each bid based on its winning probability. We rigorously prove that our mechanism achieves a parameterized constant competitive ratio for the long-term social cost, plus truthfulness and individual rationality in expectation. We also exhibit its superior practical performance via evaluations driven by real-world data traces.
Ruozhou Yu (North Carolina State University)
Network Intelligence V
Automating Cloud Deployment for Deep Learning Inference of Real-time Online Services
Yang Li (Tsinghua University, China); Zhenhua Han (University of Science and Technology of China, China); Quanlu Zhang (MSRA, China); Zhenhua Li (Tsinghua University, China); Haisheng Tan (University of Science and Technology of China, China)
Real-time online services using pre-trained deep neural network (DNN) models, e.g., Siri and Instagram, require low-latency and cost-efficiency for quality-of-service and commercial competitiveness. When deployed in a cloud environment, such services call for an appropriate selection of cloud configurations (i.e., specific types of VM instances), as well as a considerate device placement plan that places the operations of a DNN model to multiple computation devices like GPUs and CPUs. Currently, the deployment mainly relies on service providers' manual efforts, which is not only onerous but also far from satisfactory oftentimes (for a same service, a poor deployment can incur significantly more costs by tens of times). In this paper, we attempt to automate the cloud deployment for real-time online DNN inference with minimum costs under the constraint of acceptably low latency. This attempt is enabled by jointly leveraging the Bayesian Optimization and Deep Reinforcement Learning to adaptively unearth the (nearly) optimal cloud configuration and device placement with limited search time. We implement a prototype system of our solution based on TensorFlow and conduct extensive experiments on top of Microsoft Azure. The results show that our solution essentially outperforms the non-trivial baselines in terms of inference speed and cost-efficiency.
Geryon: Accelerating Distributed CNN Training by Network-Level Flow Scheduling
Shuai Wang, Dan Li and Jinkun Geng (Tsinghua University, China)
Increasingly rich data sets and complicated models make distributed machine learning more and more important. However, the cost of extensive and frequent parameter synchronizations can easily diminish the benefits of distributed training across multiple machines. In this paper, we present Geryon, a network-level flow scheduling scheme to accelerate distributed Convolutional Neural Network (CNN) training. Geryon leverages multiple flows with different priorities to transfer parameters of different urgency levels, which can naturally coordinate multiple parameter servers and prioritize the urgent parameter transfers in the entire network fabric. Geryon requires no modification in CNN models and does not affect the training accuracy. Based on the experimental results of four representative CNN models on a testbed of 8 GPU (NVIDIA K40) servers, Geryon achieves up to 95.7% scaling efficiency even with 10GbE bandwidth. In contrast, for most models, the scaling efficiency of vanilla TensorFlow is no more than 37% and that of TensorFlow with parameter partition and slicing is around 80%. In terms of training throughput, Geryon enhanced with parameter partition and slicing achieves up to 4.37x speedup, where the flow scheduling algorithm itself achieves up to 1.2x speedup over parameter partition and slicing.
Neural Tensor Completion for Accurate Network Monitoring
Kun Xie (Hunan University, USA); Huali Lu (Hunan University, China); Xin Wang (Stony Brook University, USA); Gaogang Xie (Institute of Computing Technology, Chinese Academy of Sciences, China); Yong Ding (Guilin University of Electronic Technology, China); Dongliang Xie (State University of New York at Stony Brook, USA); Jigang Wen (Chinese Academy of Science & Institute of Computing Technology, China); Dafang Zhang (Hunan University, China)
Monitoring the performance of a large network is very costly. Instead, a subset of paths or time intervals of the network can be measured while inferring the remaining network data by leveraging their spatio-temporal correlations. The quality of missing data recovery highly relies on the inference algorithms. Tensor completion has attracted some recent attentions with its capability of exploiting the multi-dimensional data structure for more accurate missing data inference. However, current tensor completion algorithms only model the three-order interaction of data features through the inner product, which is insufficient to capture the high-order, nonlinear correlations across different feature dimensions. In this paper, we propose a novel Neural Tensor Completion (NTC) scheme to effectively model three-order interaction among data features with the outer product and build a 3D interaction map. Based on which, we apply 3D convolution to learn features of high-order interaction from the local range to the global range. We demonstrate theoretically this will lead to good learning ability. We further conduct extensive experiments on two real-world network monitoring datasets, Abilene and WS-DREAM, to demonstrate that NTC can significantly reduce the error in missing data recovery.
Optimizing Federated Learning on Non-IID Data with Reinforcement Learning
Hao Wang and Zakhary Kaplan (University of Toronto, Canada); Di Niu (University of Alberta, Canada); Baochun Li (University of Toronto, Canada)
In this paper, we propose Favor, an experience-driven federated learning framework that actively selects client devices for training to deal with non-IID data. With both empirical studies and mathematical analysis, we found an implicit connection between the distribution of training data and the weights of model trained on the data. Favor is able to profile data distribution on each device using this implicit connection, without access to the raw data. In Favor, we propose a new mechanism based on reinforcement learning that learns to construct a specific subset of client devices in each communication round. Updated with aggregation of model weights generated by this subset of devices, the global model obtained using federated learning can counterbalance the bias introduced by non-IID data. With our extensive array of experiments using PyTorch, our experimental results show that communication rounds can be reduced the number of communication rounds by up to 49% on the MNIST, up to 23% on FashionMNIST and up to 42% on CIFAR-10, as compared to the Federated Averaging algorithm.
Ruidong Li (National Institute of Information and Communications Technology (NICT))
Network Economics
A Lightweight Auction Framework for Spectrum Allocation with Strong Security Guarantees
Ke Cheng (Xidian University, China); Liangmin Wang (Jiangsu University, China); Yulong Shen and Yangyang Liu (Xidian University, China); Yongzhi Wang (Park University, USA); Lele Zheng (Xidian University, Xi'an, Shaanxi, China)
Auction is an effective mechanism to distribute spectrum resources. Although many privacy-preserving auction schemes for spectrum allocation have been proposed, none of them is able to perform practical spectrum auctions while ensuring enough security for bidders' private information, such as geo-locations, bid values, and data access patterns. To address this problem, we propose SLISA, a lightweight auction framework which enables an efficient spectrum allocation without revealing anything but the auction outcome, i.e., the winning bidders and their clearing prices. We present contributions on two fronts. First, as a foundation of our design, we adopt a Shuffle-then-Compute strategy to build a series of secure sub-protocols based on lightweight cryptographic primitives (e.g., additive secret sharing and basic garbled circuits). Second, we improve an advanced spectrum auction mechanism to make it data-oblivious, such that data access patterns can be hidden. Meanwhile, the modified protocols adapt to our elaborate building blocks without affecting its validity and security. We formally prove the security of all protocols under a semi-honest adversary model, and demonstrate performance improvements compared with state-of-the-art works through extensive experiments.
Fair and Protected Profit Sharing for Data Trading in Pervasive Edge Computing Environments
Yaodong Huang, Yiming Zeng, Fan Ye and Yuanyuan Yang (Stony Brook University, USA)
Innovative edge devices (e.g., smartphones, IoT devices) are becoming much more pervasive in our daily lives. With powerful sensing and computing capabilities, users can generate massive amounts of data. A new business model has emerged where data producers can sell their data to consumers directly to make money. However, how to protect the profit of the data producer from rogue consumers that may resell without authorization remains challenging. In this paper, we propose a smart-contract based protocol to protect the profit of the data producer while allowing consumers to resell the data legitimately. The protocol ensures the revenue is shared with the data producer over authorized reselling, and detects any unauthorized reselling. We formulate a fair revenue sharing problem to maximize the profit of both the data producer and resellers. We formulate the problem into a two-stage Stackelberg game and determine a ratio to share the reselling revenue between the data producer and resellers. Extensive simulations show that with resellers, our mechanism can achieve up to 97.8% higher profit for the data producer and resellers.
Secure Balance Planning of Off-blockchain Payment Channel Networks
Peng Li and Toshiaki Miyazaki (The University of Aizu, Japan); Wanlei Zhou (University of Technology Sydney, Australia)
Off-blockchain payment channels can significantly improve blockchain scalability by enabling a large number of micro-payments between two blockchain nodes, without committing every single payment to the blockchain. Multiple payment channels form a payment network, so that two nodes without direct channel connection can still make payments. A critical challenge in payment network construction is to decide how many funds should be deposited into payment channels as initial balances, which seriously influences the performance of payment networks, but has been seldom studied by existing work. In this paper, we address this challenge by designing PnP, a balance planning service for payment networks. Given estimated payment demands among nodes, PnP can decide channel balances to satisfy these demands with a high probability. It does not rely on any trusted third-parties, and can provide strong protection from malicious attacks with low overhead. It obtains these benefits with two novel designs, the cryptographic sortition and the chance-constrained balance planning algorithm. Experimental results on a testbed of 30 nodes show that PnP can enable 30% more payments than other designs.
Travel with Your Mobile Data Plan: A Location-Flexible Data Service
Zhiyuan Wang (The Chinese University of Hong Kong, Hong Kong); Lin Gao (Harbin Institute of Technology (Shenzhen), China); Jianwei Huang (The Chinese University of Hong Kong, Hong Kong)
Mobile Network Operators (MNOs) provide wireless data services based on a tariff data plan with a month data cap. Traditionally, the data cap is only valid for domestic data consumption and users have to pay extra roaming fees for overseas data consumption. A recent location-flexible service allows the user to access the domestic data cap in overseas locations (by configuring location-flexibility with a daily fee). This paper studies the economic effect of the location-flexibility on the overseas market. The overseas market comprises users who travel overseas within the month, thus is monthly variant. Each user decides his joint flexibility configuration and data consumption (J-FCDC) every day. The user's J-FCDC problem is an on-line payoff maximization. We analyze its off-line problem (which is NP-hard) and design an on-line strategy with provable performance. Moreover, we propose a pricing policy for the location-flexible service without relying on the market statistic information. We find that the location-flexibility induces users to consume more data in low-valuation days, and the MNO benefits from stimulating users' data consumption through an appropriate pricing. Numerical results based on empirical data show that the location-flexibility improves the MNO's revenue by 18% and the users' payoffs by 12% on average.
Murat Yuksel (University of Central Florida)
UAV II
Distributed Collaborative 3D-Deployment of UAV Base Stations for On-Demand Coverage
Tatsuaki Kimura and Masaki Ogura (Osaka University, Japan)
Use of unmanned aerial vehicles (UAVs) as flying base stations (BSs) has been gaining significant attention because they can provide connections to ground users efficiently during temporary events (e.g., sports events) owing to their flexible 3D-mobility. However, the complicated air-to-ground channel characteristics and interference among UAVs hinder the dynamic optimization of 3D-deployment of UAVs for spatially and temporally varying users. In this paper, we propose a novel distributed 3D-deployment method for UAV-BSs in a downlink millimeter-wave network for on-demand coverage. Our method consists mainly of two parts: sensing-aided crowd density estimation part; and distributed push-sum algorithm part. Since it is unrealistic to obtain all the specific positions users, the first part estimates the user density based on partial information obtained from on-ground sensors that can detect ground users around them. With the second part, each UAV dynamically updates its 3D-position by collaborating with its neighbors so that the total coverage of users is maximized. By employing a distributed push-sum protocol framework, we also prove the convergence of our algorithm. Simulation results demonstrate that our method can improve the coverage with a limited number of sensors and is applicable to a dynamic network.
Looking Before Crossing: An Optimal Algorithm to Minimize UAV Energy by Speed Scheduling with A Practical Flight Energy Model
Feng Shan, Luo Junzhou, Runqun Xiong, Wenjia Wu and Jiashuo Li (Southeast University, China)
Unmanned aerial vehicles (UAVs) is being widely used in wireless communication, e.g., data collection from ground nodes (GNs), and energy is critical. Existing works combine speed scheduling with trajectory design for UAVs, which is complicated to be optimally solved and lose trace of the fundamental nature of speed scheduling. We focus on speed scheduling by considering straight line flights, having applications in monitoring power transmission lines, roads, pipes or rivers/coasts. By real-world flight tests, we disclose a speed-related flight energy consumption model, distinct from typical distance-related or duration-related models. Based on such practical energy model, we develop the 'look before cross' (LBC) algorithm: on the time-distance diagram, we construct rooms representing GNs, and the goal is to design a room crossing walking trajectory, uniquely mapping to a speed scheduling. Such trajectory is determined by looking before crossing rooms. It is proved to be optimal for the offline scenario, in which information about GNs is available before scheduling. For the online scenario, we proposed a heuristic based on LBC. Simulation shows it performs close to the optimal offline solution. Our study on the speed scheduling and practical flight energy model shed light on a new direction on UAV aided wireless communication.
SwarmControl: An Automated Distributed Control Framework for Self-Optimizing Drone Networks
Lorenzo Bertizzolo and Salvatore D'Oro (Northeastern University, USA); Ludovico Ferranti (Northeastern University, USA & Sapienza University of Rome, Italy); Leonardo Bonati and Emrecan Demirors (Northeastern University, USA); Zhangyu Guan (University at Buffalo, USA); Tommaso Melodia (Northeastern University, USA); Scott M Pudlewski (Georgia Tech Research Institute, USA)
Networks of Unmanned Aerial Vehicles will take a vital role in future Internet of Things and 5G networks. However, how to control UAV networks in an automated and scalable fashion in distributed, interference-prone, and potentially adversarial environments is still an open research problem.
We introduce SwarmControl, a new software-defined control framework for UAV wireless networks based on distributed optimization principles. In essence, SwarmControl provides the Network Operator (NO) with a unified centralized abstraction of the networking and flight control functionalities. High-level control directives are then automatically decomposed and converted into distributed network control actions that are executed through programmable software-radio protocol stacks. SwarmControl (i) constructs a network control problem representation of the directives of the NO; (ii) decomposes it into a set of distributed sub-problems; and (iii) automatically generates numerical solution algorithms to be executed at individual UAVs.
We present a prototype of an SDR-based, fully reconfigurable UAV network platform that implements the proposed control framework, based on which we assess the effectiveness and flexibility of SwarmControl with extensive flight experiments. Results indicate that the SwarmControl framework enables swift reconfiguration of the network control functionalities, and it can achieve an average throughput gain of \(159%\) compared to the state-of-the-art solutions.
WBF-PS: WiGig Beam Fingerprinting for UAV Positioning System in GPS-denied Environments
Pei-Yuan Hong, Chi-Yu Li, Hong-Rong Chang, YuanHao Hsueh and Kuochen Wang (National Chiao Tung University, Taiwan)
Unmanned aerial vehicles (UAVs) are being investigated to substitute for labor in many indoor applications, e.g., asset tracking and surveillance, where the global positioning system (GPS) is not available. Emerging autonomous UAVs are also expected to land in indoor or canopied aprons automatically. Such GPS-denied environments require alternative non-GPS positioning methods. Though there have been some vision-based solutions for UAVs, they perform poorly in the scenes with bad illumination conditions, or estimate only relative locations but not global positions. Other common indoor localization methods do not cover UAV factors, such as low power and flying behaviors. To this end, we propose a practical non-GPS positioning system for UAVs, named WPF-PS, using low-power, off-the-shelf WiGig devices. We formulate a 3-dimensional beam fingerprint by leveraging the diversity of available TX/RX beams and the link quality. To augment accuracy, we use the weighted k-nearest neighbors algorithm to overcome partial fingerprint inaccuracy, and applies the particle filtering technique into considering the UAV motion. We prototype the WBF-PS on our UAV platform, and it yields a 90th percentile positioning error of below 1m with both small and large velocity estimation errors.
Enrico Natalizio (University of Lorraine/Loria)
SDN III
AudiSDN: Automated Detection of Network Policy Inconsistencies in Software-Defined Networks
Seungsoo Lee (KAIST, Korea (South)); Seungwon Woo (ETRI, Korea (South)); Jinwoo Kim (KAIST, Korea (South)); Vinod Yegneswaran and Phillip A Porras (SRI International, USA); Seungwon Shin (KAIST, Korea (South))
At the foundation of every network security architecture lies the premise that formulated network flow policies are reliably deployed and enforced by the network infrastructure. However, software-defined networks (SDNs) add a particular challenge to satisfying this premise, as for SDNs the flow policy implementation spans multiple applications and abstraction layers across the SDN stack. In this paper, we focus on the question of how to automatically identify cases in which the SDN stack fails to prevent policy inconsistencies from arising among these components. This question is rather essential, as when such inconsistencies arise the implications to the security and reliability of the network are devastating. We present AudiSDN, an automated fuzz-testing framework designed to formulate test cases in which policy inconsistencies can arise in OpenFlow networks, the most prevalent SDN protocol used today. We also present results from applying AudiSDN to two widely used SDN controllers, Floodlight and ONOS. In fact, our test results have led to the filing of 3 separate CVE reports. We believe that the approach presented in this paper is applicable to the breadth of OpenFlow platforms used today, and that its broader usage will help to address a serious but yet understudied pragmatic concern.
Inferring Firewall Rules by Cache Side-channel Analysis in Network Function Virtualization
Youngjoo Shin (Kwangwoon University, Korea (South)); Dongyoung Koo (Hansung University, Korea (South)); Junbeom Hur (Korea University, Korea (South))
Network function virtualization takes advantage of virtualization technology to achieve flexibility in network service provisioning. However, it comes at the cost of security risks caused by cache side-channel attacks on virtual machines. In this study, we investigate the security impact of these attacks on virtualized network functions. In particular, we propose a novel cache-based reconnaissance technique against virtualized Linux-based firewalls. The proposed technique has significant advantages in the perspective of attackers. First, it enhances evasiveness against intrusion detection owing to the ability of source spoofing. Second, it allows inference on a wide variety of filtering rules. During experiment in VyOS, the proposed method could infer the firewall rules with an accuracy of more than 90% by using only a few dozen packets. We also present countermeasures to mitigate cache-based attacks on virtualized network functions.
Multicast Traffic Engineering with Segment Trees in Software-Defined Networks
Chih-Hang Wang and Sheng-Hao Chiang (Academia Sinica, Taiwan); Shan-Hsiang Shen (National Taiwan University of Science and Technology, Taiwan); De-Nian Yang (Academia Sinica, Taiwan); Wen-Tsuen Chen (National Tsing Hua University, Taiwan)
Previous research on Segment Routing (SR) mostly focused on unicast, whereas online SDN multicast with segment trees supporting IETF dynamic group membership has not been explored. Compared with traditional unicast SR, online SDN multicast with segment trees is more challenging since finding an appropriate size, shape, and location for each segment tree is crucial to deploy it in more multicast trees. In this paper, we explore Multi-tree Multicast Segment Routing (MMSR) to jointly minimize the bandwidth consumption and forwarding rule updates over time by leveraging segment trees. We prove MMSR is NP-hard and design a competitive algorithm, STRUS to achieve the tightest bound. STRUS includes STR and STP to merge smaller overlapping subtrees into segment trees, and then tailor them to serve more multicast trees. We design Stability Indicator and Reusage Indicator to carefully construct segment trees at the backbone of multicast trees and reroutes multicast trees to span more segment trees. Simulation and implementation on real SDNs with YouTube traffic manifest that STRUS outperforms the state-of-the-art algorithms regarding the total cost and TCAM usage. Moreover, the running time of STRUS is no more than 1 second for massive networks with thousands of nodes and therefore is practical for SDN.
SDN-based Order-aware Live Migration of Virtual Machines
Dinuni Fernando, Ping Yang and Hui Lu (Binghamton University, USA)
Live migration is a key technique to transfer virtual machines (VMs) from one machine to another. Often multiple VMs need to be migrated in response to events such as server maintenance, load balancing, and impending failures. However, VM migration is a resource intensive operation, which pressures the CPU, memory, and network resources of the source and destination hosts as well as intermediate network links. The live migration mechanism ends up contending for finite resources with the VMs that it needs to migrate, which prolongs the total migration time and worsens the performance of applications running inside the VMs. In this paper, we propose SOLive, a new approach to reduce resource contention between the migration process and the VMs being migrated. First, by considering the nature of VM workloads, SOLive manages the migration order to significantly reduce the total migration time. Secondly, to reduce network contention between the migration process and the VMs, SOLive uses a combination of software-defined networking-based resource reservation and scatter gather-based VM migration to quickly deprovision the source host. A prototype implementation of our approach in KVM/QEMU platform shows that SOLive quickly evicts VMs from the source host with low impact on VMs' performance.
Jin Zhao (Fudan University)
Jul 9 Thu, 10:30 AM — 11:00 AM EDT
Localization I
Jul 9 Thu, 11:00 AM — 12:30 PM EDT
Edge Assisted Mobile Semantic Visual SLAM
Jingao Xu, Hao Cao, Danyang Li and Kehong Huang (Tsinghua University, China); Chen Qian (Dalian University of Technology, China); Longfei Shangguan (Princeton University, USA); Zheng Yang (Tsinghua University, China)
Localization and navigation play a key role in many location-based services and have attracted numerous research efforts from both academic and industrial community. In recent years, visual SLAM has been prevailing for robots and autonomous driving cars. However, the ever-growing computation resource demanded by SLAM impedes its application to resource-constrained mobile devices. In this paper, we present the design, implementation, and evaluation of edgeSLAM, an edge assisted real-time semantic visual SLAM service running on mobile devices. edgeSLAM leverages the state-of-the-art semantic segmentation algorithm to enhance localization and mapping accuracy and speeds up the computation-intensive SLAM and semantic segmentation algorithms by computation offloading. The key innovations of edgeSLAM include an efficient computation offloading strategy, an opportunistic data sharing mechanism, and an adaptive task scheduling algorithm. We fully implement edgeSLAM on an edge server and different types of mobile devices. Extensive experiments are conducted under 3 data sets, and the results show that edgeSLAM is able to run on mobile devices at 35fps frame rate and achieves a 5cm localization accuracy, outperforming existing solutions by more than 15%. To the best of our knowledge, edgeSLAM is the first real-time semantic visual SLAM for mobile devices.
POLAR: Passive object localization with IEEE 802.11ad using phased antenna arrays
Dolores Garcia (Imdea Networks, Spain); Jesús O. Lacruz (IMDEA Networks Institute, Spain); Pablo Jimenez Mateo (IMDEA Networks, Spain); Joerg Widmer (IMDEA Networks Institute, Spain)
Millimeter-wave systems not only provide high data rates and low latency, but the very large bandwidth also allows for highly accurate environment sensing. Such properties are extremely useful for smart factory scenarios. At the same time, reusing existing communication links for passive object localization is significantly more challenging than radar-based approaches due to the sparsity of the millimeter-wave multi-path environment and the weakness of the reflected paths compared to the line-of-sight path.
In this paper we explore the localization accuracy that can be achieved with IEEE 802.11ad devices. We use commercial APs while for the stations we design a full-bandwidth 802.11ad compatible FPGA-based platform with phased antenna array. The stations exploit the preamble of the beam training packets of the APs to obtain CIR measurements for all antenna patterns. With this, we determine distance and angle information for the different multi-path components in the environment to passively localize a mobile object. We evaluate our system with multiple APs and a moving robot with metallic surface. Despite the strong limitations of the hardware, our system operates in real-time and achieves 30 cm mean error accuracy and sub-meter accuracy in 98% of the cases.
Towards Single Source based Acoustic Localization
Linsong Cheng, Zhao Wang, Yunting2 Zhang, Weiyi Wang, Weimin Xu and Jiliang Wang (Tsinghua University, China)
Acoustic based tracking has been shown promising in many applications like Virtual Reality, smart home, video gaming, etc. Its real life deployments, however, face fundamental limitations.Existing approaches generally need three sound sources, while most COTS devices (e.g., TVs) and speakers have only two sound sources.Most tracking approaches require periodical localization to bootstrap and alleviate accumulated tracking error.
We present AcouRadar, an acoustic-based localization system with single sound source. In the heart of AcouRadar we adopt a general new model which quantifies signal properties to different frequencies, distances and angles to the source. We verify the model and show that signal from a single source can provide features for localization.To address practical challenges, (1) we design an online model adaption method to address model deviation from real signal, (2) we design pulse modulated signals to alleviate the impact of environment such as multipath effect, and (3) to address signal dynamics over time, we derive relatively stable amplitude ratio between different frequencies. We implement AcouRadar on Android and evaluate its performance for different COTS speakers in different environments. The results show that AcouRadar achieves single source localization with average error less than 5 cm.
When FTM Discovered MUSIC: Accurate WiFi-based Ranging in the Presence of Multipath
Kevin Jiokeng and Gentian Jakllari (University of Toulouse, France); Alain Tchana (ENS Lyon, France); André-Luc Beylot (University of Toulouse, France)
The recent standardization by IEEE of Fine Time Measurement (FTM), a time-of-flight based approach for ranging has the potential to be a turning point in bridging the gap between the rich literature on indoor localization and the so-far tepid market adoption. However, experiments with the first WiFi cards supporting FTM show that while it offers meter-level ranging in clear line-of-sight settings (LOS), its accuracy can collapse in non-line-of-sight (NLOS) scenarios.
We present FUSIC, the first approach that extends FTM's LOS accuracy to NLOS settings, without requiring any changes to the standard. To accomplish this, FUSIC leverages the results from FTM and MUSIC -- both erroneous in NLOS -- into solving the double challenge of 1) detecting when FTM returns an inaccurate value and 2) correcting the errors as necessary. Experiments in 4 different physical locations reveal that a) FUSIC extends FTM's LOS ranging accuracy to NLOS settings -- hence, achieving its stated goal; b) it significantly improves FTM's capability to offer room-level indoor positioning.
Hongzi Zhu (Shanghai Jiao Tong University)
An Adaptive and Fast Convergent Approach to Differentially Private Deep Learning
Zhiying Xu and Shuyu Shi (University of Nanjing, China); Alex X. Liu (Ant Financial Services Group, China); Jun Zhao (Nanyang Technological University, Singapore); Lin Chen (Yale University, USA)
With the advent of the era of big data, deep learning has become a prevalent building block in a variety of machine learning or data mining tasks, such as signal processing, network modeling and traffic analysis, to name a few. The massive user data crowdsourced plays a crucial role in the success of deep learning models. However, it has been shown that user data may be inferred from trained neural models and thereby exposed to potential adversaries, which raises information security and privacy concerns. To address this issue, recent studies leverage the technique of differential privacy to design private-preserving deep learning algorithms. Albeit successful at privacy protection, differential privacy degrades the performance of neural models. In this paper, we develop ADADP, an adaptive and fast convergent learning algorithm with a provable privacy guarantee. ADADP significantly reduces the privacy cost by improving the convergence speed with an adaptive learning rate and mitigates the negative effect of differential privacy upon the model accuracy by introducing adaptive noise. The performance of ADADP is evaluated on real-world datasets. Experiment results show that it outperforms state-of-the-art differentially private approaches in terms of both privacy cost and model accuracy.
Enabling Execution Assurance of Federated Learning at Untrusted Participants
XiaoLi Zhang, Fengting Li, Zeyu Zhang and Qi Li (Tsinghua University, China); Cong Wang (City University of Hong Kong, Hong Kong); Jianping Wu (Tsinghua University, China)
Federated learning (FL), as the privacy-preserving machine learning framework, draws growing attention in both industry and academia. It can obtain a jointly accurate model by distributing training tasks into data owners without centralized data collection. However, FL faces new security problems, as it losses direct control to training processes. Thus, one fundamental demand is to ensure whether participants execute training tasks as intended.
In this paper, we propose TrustFL, a practical scheme to build assurance of participants' training execution with high confidence. We employ Trusted Execution Environments (TEEs) to attest to the correct execution. Particularly, instead of performing all training processes inside TEE, we use TEE to randomly check a small fraction of training processes with tunable levels of assurance. All processes are executed on the co-located faster processor, e.g., GPU, for efficiency. Besides, we adopt a commitment scheme and devise a specific data selection method, so as to prevent cheating like only processing TEE-requested computation or uploading old results. We prototype TrustFL using GPU and SGX, and evaluate its performance. The results show that TrustFL can achieve one/two orders of magnitude speedups compared with purely training with SGX, while assuring the correct training with a confidence level of 99%.
EncELC: Hardening and Enriching Ethereum Light Clients with Trusted Enclaves
Chengjun Cai (City University of Hong Kong, Hong Kong); Lei Xu (City University of Hong Kong, China & Nanjing University of Science and Technology, Hong Kong); Zhou Anxin, Ruochen Wang and Cong Wang (City University of Hong Kong, Hong Kong); Qian Wang (Wuhan University, China)
The rapid growth of Ethereum blockchain has brought extremely heavy overhead for coin owners or developers to bootstrap and access transactions on Ethereum. To address this, light client is enabled, which only stores a small fraction of blockchain data and relies on bootstrapped full nodes for transaction retrievals. However, because the retrieval requests are outsourced, it raises several severe concerns about the integrity of returned results and the leakage of sensitive blockchain access histories, largely hindering the wider adoption of this important lightweight design. From another perspective, the continuously increasing blockchain storage also urges for more effective query functionalities for the Ethereum blockchain, so as to allow flexible and precise transaction retrievals.
In this paper, we propose EncELC, a new Ethereum light client design that enforces full-fledged protections for clients and enables rich queries over the Ethereum blockchain. EncELC leverages trusted hardware (e.g., Intel SGX) as a starting point for building efficient yet secure processing, and further crafts several crucial performance and security refinement designs to boost query efficiency and conceal leakages inside and outside SGX enclave. We implement a prototype of EncELC and test its performance in several real settings, and the results have confirmed the practicality of EncELC.
Mneme: A Mobile Distributed Ledger
Dimitris Chatzopoulos (Hong Kong University of Science and Technology, Hong Kong); Sujit Gujar (International Institute of Information Technology, Hyderabad, India); Boi Faltings (Swiss Federal Institute of Technology (EPFL), Switzerland); Pan Hui (Hong Kong University of Science and Technology & University of Helsinki, Hong Kong)
Advances in mobile computing have paved the way for new types of distributed applications that can be executed solely by mobile devices on device-to-device (D2D) ecosystems (e.g., crowdsensing). More sophisticated applications, like cryptocurrencies, need distributed ledgers to function. Distributed ledgers, such as blockchains and directed acyclic graphs (DAGs), employ consensus protocols to add data in the form of blocks. However such protocols are designed for resourceful devices that are interconnected via the Internet. Moreover, existing distributed ledgers are not deployable to D2D ecosystems since their storage needs are continuously increasing. In this work, we introduce Mneme, a DAG-based distributed ledger that can be maintained solely by mobile devices and operates via two consensus protocols: Proof-of-Context (PoC) and Proof-of-Equivalence (PoE). PoC employs users' context to add data on Mneme. PoE is executed periodically to summarize data and produce equivalent blocks that require less storage. We analyze the security of Mneme and justify the ability of PoC and PoE to guarantee the characteristics of distributed ledgers: persistence and liveness. Furthermore, we analyze potential attacks from malicious users and prove that the probability of a successful attack is inversely proportional to the square of the number of mobile users who maintain Mneme.
Kai Zeng (George Mason University)
Security IV
DRAMD: Detect Advanced DRAM-based Stealthy Communication Channels with Neural Networks
Zhiyuan Lv and Youjian Zhao (Tsinghua University, China); Chao Zhang (Institute for Network Sciences and Cyberspace, Tsinghua University, China); Haibin Li (Tsinghua University, China)
Shared resources facilitate stealthy communication channels, including side channels and covert channels, which greatly endanger the information security, even in cloud environments. As a commonly shared resource, DRAM memory also serves as a source of stealthy channels. Existing solutions rely on two common features of DRAM-based channels, i.e., high cache miss and high bank locality, to detect the existence of such channels. However, such solutions could be defeated. In this paper, we point out the weakness of existing detection solutions by demonstrating a new advanced DRAM-based channel, which utilizes the hardware Intel SGX to conceal cache miss and bank locality. Further, we propose a novel neural network based solution DRAMD to detect such advanced stealthy channels. DRAMD uses hardware performance counters to track not only cache miss events that are used by existing solutions, but also counts of branches and instructions executed, as well as branch misses. Then DRAMD utilizes neural networks to model the access patterns of different applications and therefore detects potential stealthy communication channels. Our evaluation shows that DRAMD achieves up to 99% precision with 100% recall. Furthermore, DRAMD introduces less than 5% performance overheads and negligible impacts on legacy applications.
PPGPass: Nonintrusive and Secure Mobile Two-Factor Authentication via Wearables
Yetong Cao (Beijing Institute of Technology, China); Qian Zhang (Tsinghua University, China); Fan Li and Song Yang (Beijing Institute of Technology, China); Yu Wang (Temple University, USA)
Mobile devices are promising to apply two-factor authentication in order to improve system security and enhance user privacy-preserving. Existing solutions usually have certain limits of requiring some form of user effort, which might seriously affect user experience and delay authentication time. In this paper, we propose PPGPass, a novel mobile two-factor authentication system, which leverages Photoplethysmography (PPG) sensors in wrist-worn wearables to extract individual characteristics of PPG signals. In order to realize both nonintrusive and secure, we design a two-stage algorithm to separate clean heartbeat signals from PPG signals contaminated by motion artifacts, which allows verifying users without intentionally staying still during the process of authentication. In addition, to deal with noncancelable issues when biometrics are compromised, we design a repeatable and non-invertible method to generate cancelable feature templates as alternative credentials, which enables to defense against man-in-the-middle attacks and replay attacks. To the best of our knowledge, PPGPass is the first nonintrusive and secure mobile two-factor authentication based on PPG sensors in wearables. We build a prototype of PPGPass and conduct the system with comprehensive experiments involving multiple participants. PPGPass can achieve an average F1 score of 95.3%, which confirms its high effectiveness, security, and usability.
ROBin: Known-Plaintext Attack Resistant Orthogonal Blinding via Channel Randomization
Yanjun Pan (University of Arizona, USA); Yao Zheng (University of Hawai'i at Mānoa, USA); Ming Li (University of Arizona, USA)
Orthogonal blinding based schemes for wireless physical layer security aim to achieve secure communication by injecting noise into channels orthogonal to the main channel and corrupting the eavesdropper's signal reception. These methods, albeit practical, have been proven vulnerable against multi-antenna eavesdroppers who can filter the message from the noise. The venerability is rooted in the fact that the main channel state remains stasis in spite of the noise injection. Our proposed scheme leverages a reconfigurable antenna for Alice to rapidly change the channel state during transmission and a compressive sensing based algorithm for her to predict and cancel the changing effects for Bob. As a result, the communication between Alice and Bob remains clear, whereas randomized channel state prevent Eve from launching the known-plaintext attack. We formally analyze the security of the scheme against both single and multi-antenna eavesdroppers and identify its unique anti-eavesdropping properties due to the artificially created fast changing channel. We conduct extensive simulations and real-world experiments to evaluate its performance. Empirical results show that our scheme can suppress Eve's attack success rate to the level of random guessing, even if she knows all the symbols transmitted through other antenna modes.
Setting the Yardstick: A Quantitative Metric for Effectively Measuring Tactile Internet
Joseph Verburg (Delft University of Technology, The Netherlands); Kroep Kees (TU Delft, The Netherlands); Vineet Gokhale and Venkatesha Prasad (Delft University of Technology, The Netherlands); Vijay S Rao (Cognizant Technology Solutions & Delft University of Technology, The Netherlands)
The next frontier in communications is the transmission of touch over the Internet -- popularly termed as Tactile Internet (TI) - containing both tactile and kinesthetic feedback. While enormous efforts have been undertaken to design TI enablers, barely any emphasis is paid to contemplate and diagnose the (impaired) performance. Existing qualitative and quantitative performance metrics -- predominantly based on AV transmissions -- serve only as coarse-grained measures of the perceptual impairment, and hence are unsuitable for isolating performance bottlenecks. In this paper, we design quantitative metrics for measuring the quality of a TI session that is agnostic to haptic coders, any sophisticated algorithms and network parameters. As we need to compare transmitted and received haptic signals, we use Dynamic Time Warping from speech recognition literature and evolve two new quantitative metrics (a) Effective Time-Offset (ETO) and (b) Effective Value-Offset (EVO) that comprehensively characterize degradation in haptic signal profile on a finer scale. We clearly outline the mathematical foundation through rigorous TI experiments by incorporating network emulator and haptic devices. We demonstrate the effectiveness of our proposed metrics through practical measurements using a haptic device and we show 40-150x lesser delay adjustments for only 4%-17% increased RMSE compared to DTW.
Xinwen Fu (University of Massachusetts Lowell)
FastVA: Deep Learning Video Analytics Through Edge Processing and NPU in Mobile
Tianxiang Tan and Guohong Cao (The Pennsylvania State University, USA)
Many mobile applications have been developed to apply deep learning for video analytics. Although these advanced deep learning models can provide us with better results, they also suffer from the high computational overhead which means longer delay and more energy consumption when running on mobile devices. To address this issue, we propose a framework called FastVA, which supports deep learning video analytics through edge processing and Neural Processing Unit (NPU) in mobile. The major challenge is to determine when to offload the computation and when to use NPU. Based on the processing time and accuracy requirement of the mobile application, we study two problems: Max-Accuracy where the goal is to maximize the accuracy under some time constraints, and Max-Utility where the goal is to maximize the utility which is a weighted function of processing time and accuracy. We formulate them as integer programming problems and propose heuristics based solutions. We have implemented FastVA on smartphones and demonstrated its effectiveness through extensive evaluations.
Improving Quality of Experience by Adaptive Video Streaming with Super-Resolution
Yinjie Zhang (Peking University, China); Yuanxing Zhang (School of EECS, Peking University, China); Yi Wu, Yu Tao and Kaigui Bian (Peking University, China); Pan Zhou (Huazhong University of Science and Technology, China); Lingyang Song (Peking University, China); Hu Tuo (IQIYI Science & Technology Co., Ltd., China)
Given high-speed mobile Internet access today, audiences are expecting much higher video quality than before. Video service providers have deployed dynamic video bitrate adaptation services to fulfill such user demands. However, legacy video bitrate adaptation techniques are highly dependent on the estimation of dynamic bandwidth, and fail to integrate the video quality enhancement techniques, or consider the heterogeneous computing capabilities of client devices, leading to low quality of experience (QoE) for users. In this paper, we present a super-resolution based adaptive video streaming (SRAVS) framework, which applies a Reinforcement Learning (RL) model for integrating the video super-resolution (VSR) technique with the video streaming strategy. The VSR technique allows clients to download low bitrate video segments, reconstruct and enhance them to high-quality video segments while making the system less dependent on estimating dynamic bandwidth. The RL model investigates both the playback statistics and the distinguishing features related to the client-side computing capabilities. Trace-driven emulations over real-world videos and bandwidth traces verify that SRAVS can significantly improve the QoE for users compared to the state-of-the-art video streaming strategies with or without involving VSR techniques.
Stick: A Harmonious Fusion of Buffer-based and Learning-based Approach for Adaptive Streaming
Tianchi Huang (Tsinghua University, China); Chao Zhou (Beijing Kuaishou Technology Co., Ltd, China); Rui-Xiao Zhang, Chenglei Wu, Xin Yao and Lifeng Sun (Tsinghua University, China)
Existing off-the-shelf buffer-based approaches leverage a simple yet effective buffer-bound to control the adaptive bitrate~(ABR) streaming system. Nevertheless, such approaches with standard parameters fail to provide high quality of experience~(QoE) video streaming under all considered network conditions. Meanwhile, state-of-the-art learning-based ABR approach Pensieve outperforms existing schemes but is impractical to deploy. Therefore, how to harmoniously fuse the buffer-based and learning-based approach has become a key challenge for further enhancing ABR methods. In this paper, we propose \emph{Stick}, an ABR algorithm that fuses the deep learning method and traditional buffer-based method. Stick utilizes deep reinforcement learning~(DRL) method to train the neural network, which outputs the \emph{buffer-bound} to control the buffer-based approach for maximizing the QoE metric with different parameters. Trace-driven emulation illustrates that Stick betters Pensieve by 9.41% with an overhead reduction of 88%. Moreover, aiming to further reduce the computational costs while preserving the performances, we propose Trigger, a light-weighted neural network that \emph{determines} whether the buffer-bound should be adjusted. Experimental results show that Stick+Trigger rivals or outperforms existing schemes in average QoE by 1.7%-28%, and significantly reduces the Stick's computational overhead by 24%-61%. Extensive results on real-world evaluation also demonstrate the superiority of Stick over existing state-of-the-art approaches.
Streaming 360◦ Videos using Super-resolution
Mallesham Dasari (Stony Brook University, USA); Arani Bhattacharya (KTH Royal Institute of Technology, Sweden); Santiago Vargas, Pranjal Sahu, Aruna Balasubramanian and Samir R. Das (Stony Brook University, USA)
360 videos provide an immersive experience to users, but require considerably more bandwidth to stream compared to regular videos. State-of-the-art 360◦ video streaming systems use viewport prediction to reduce bandwidth requirement, that involves predicting which part of the video the user will view and only fetching that content. However, viewport prediction is error prone resulting in poor user QoE. We design PARSEC, a 360 video streaming system that reduces bandwidth requirement while improving video quality. PARSEC trades off bandwidth for more client compute to achieve its goals. PARSEC uses a compression technique based on super resolution, where the video is significantly compressed at the server and the client runs a deep learning model to enhance the video to a much higher quality. PARSEC addresses a set of challenges associated with using super resolution for 360 video streaming: large deep learning models, high inference latency, and variance in the quality of the enhanced videos. To this end, PARSEC trains small micro-models over shorter video segments, and then combines traditional video encoding with super resolution techniques to overcome the challenges. We evaluate PARSEC on a real WiFi network, over a broadband network trace released by FCC, and over a 4G/LTE network trace.
Zhenhua Li (Tsinghua University)
Classification of Load Balancing in the Internet
Rafael Almeida and Italo Cunha (Universidade Federal de Minas Gerais, Brazil); Renata Teixeira (Inria, France); Darryl Veitch (University of Technology Sydney, Australia); Christophe Diot (Google, USA)
Recent advances in programmable data planes, software-defined networking, and even the adoption of IPv6, support novel, more complex load balancing strategies. We introduce the Multipath Classification Algorithm (MCA), which extends existing formalism and techniques to consider that load balancers may use arbitrary combinations of bits in the packet header for load balancing. We propose optimizations to reduce probing cost that are applicable to MCA and existing load balancing measurement techniques. Through large-scale measurement campaigns, we characterize and study the evolution of load balancing on the IPv4 and IPv6 Internet performing measurement campaigns with multiple transport protocols. Our results show that load balancing is more prevalent and that load balancing strategies are more mature than previous characterizations have found.
Offloading Dependent Tasks in Mobile Edge Computing with Service Caching
Gongming Zhao and Hongli Xu (University of Science and Technology of China, China); Yangming Zhao and Chunming Qiao (University at Buffalo, USA); Liusheng Huang (University of Science and Technology of China, China)
In Mobile Edge Computing (MEC) applications, many tasks require specific service support for execution and in addition, have a dependent order of execution among the tasks. However, previous works often ignore the impact of having limited services cached at the edge nodes on (dependent) task offloading, thus may lead to an infeasible offloading decision or a longer completion time. To bridge the gap, this paper studies how to efficiently offload dependent tasks to edge nodes with limited (and predetermined) service caching. We denote such a problem whose objective is to minimize the makespan by ODT-MM, and prove that there exists no constant approximation algorithm for this hard problem. Then, we design an efficient convex programming based algorithm (CP) to solve this problem. Moreover, we study a special case with a homogeneous MEC and propose a favorite successor based algorithm (FS) to solve this special case with a competitive ratio of O(1). Extensive simulation results using Google data traces show that our proposed algorithms can significantly reduce applications' completion time by about 27-51% compared with other alternatives.
One More Config is Enough: Saving (DC)TCP for High-speed Extremely Shallow-buffered Datacenters
Wei Bai (Microsoft Research Asia, China); Shuihai Hu (The Hong Kong University of Science and Technology, China); Kai Chen (Hong Kong University of Science and Technology, China); Kun Tan (Huawei, China); Yongqiang Xiong (Microsoft Research Asia, China)
The link speed in production datacenters is growing fast, from 1Gbps to 40Gbps or even 100Gbps. However, the buffer size of commodity switches increases slowly, e.g., from 4MB at 1Gbps to 16MB at 100Gbps, thus significantly outpaced by the link speed. In such extremely shallow-buffered networks, today's TCP/ECN solutions, such as DCTCP, suffer from either excessive packet loss or substantial throughput degradation.
To this end, we present BCC, a simple yet effective solution that requires just one more ECN config over prior solutions. BCC operates based on real-time global buffer utilization. When available buffer suffices, BCC delivers both high throughput and low packet loss rate as prior work; Once it gets insufficient, BCC automatically triggers the shared buffer ECN to prevent packet loss at the cost of sacrificing little throughput. BCC is readily deployable with commodity switches. We validate BCC's feasibility in a small 100G testbed and evaluate its performance using large-scale simulations. Our results show that BCC maintains low packet loss rate while slightly degrading throughput when the buffer becomes insufficient. For example, compared to current practice, BCC achieves up to 94.4% lower 99th percentile FCT for small flows while degrading FCT for large flows by up to 3%.
TINA: A Fair Inter-datacenter Transmission Mechanism with Deadline Guarantee
Xiaodong Dong (Tianjin University, China); Wenxin Li (Hong Kong University of Science & Technology, Hong Kong); Xiaobo Zhou and Keqiu Li (Tianjin University, China); Heng Qi (Dalian University of Technology, China)
Geographically distributed cloud is a promising technique to achieve high performance for service providers. For inter-datacenter transfers, deadline guarantee and fairness are the two most important requirements. On the one hand, to ensure more transfers finish before their deadlines, preemptive scheduling policies are widely used, leading to the transfer starvation problem and is hence unfair. On the other hand, to ensure fairness, inter-datacenter bandwidth is fairly shared among transfers with per-flow bandwidth allocation, which leads to deadline missing problem. A mechanism that achieves these two seemingly conflicting objectives simultaneously is still missing. In this paper, we propose TINA to schedule network transfers fairly while providing deadline guarantee. TINA allows each transfer to compete freely with each other for bandwidth. More specifically, each transfer is assigned a probability to indicate whether to transmit or not. We formulate the competition among the transfers as an El Farol game while keeping the traffic load under a threshold to avoid congestion. We then prove that the Nash Equilibrium is the optimal strategy and propose a light-weight algorithm to derive it. Finally, both simulations and testbed experiments results show that TINA achieves superior performance than state-of-art methods in terms of fairness and deadline guarantee rate.
Mingkui Wei (Sam Houston State University)
An Effective Multi-node Charging Scheme for Wireless Rechargeable Sensor Networks
Tang Liu (Sichuan Normal University, China); BaiJun Wu (University of Louisiana at Lafayette, USA); Shihao Zhang, Jian Peng and Wenzheng Xu (Sichuan University, China)
With the maturation of wireless charging technology, Wireless Rechargeable Sensor Networks (WRSNs) has become a promising solution for prolong network lifetimes. Recently studies propose to employ a mobile charger (MC) to simultaneously charge multiple sensors within the same charging range, such that the charging performance can be improved. In this paper, we aim to jointly optimize the number of dead sensors and the energy usage effectiveness in such multi-node charging scenarios. We achieve this by introducing the partial charging mechanism, meaning that instead of following the conventional way that each sensor gets fully charged in one time step, our work allows MC to fully charge a sensor by multiple times. We show that the partial charging mechanism causes minimizing the number of dead sensors and maximizing the energy usage effectiveness to conflict with each other. We formulate this problem and develop a multi-node temporal spatial partial-charging algorithm (MTSPC) to solve it. The optimality of MTSPC is proved, and extensive simulations are carried out to demonstrate the effectiveness of MTSPC.
Energy Harvesting Long-Range Marine Communication
Ali Hosseini-Fahraji, Pedram Loghmannia, Kexiong (Curtis) Zeng and Xiaofan Li (Virginia Tech, USA); Sihan Yu (Clemson University, USA); Sihao Sun, Dong Wang, Yaling Yang, Majid Manteghi and Lei Zuo (Virginia Tech, USA)
This paper proposes a self-sustaining broadband long-range maritime communication as an alternative to the expensive and slow satellite communications in offshore areas. The proposed system, named Marinet, consists of many buoys. Each of the buoys has two units: an energy harvesting unit and a wireless communication unit. The energy harvesting unit extracts energy from ocean waves to support the operation of the wireless communication unit. The wireless communication unit at each buoy operates in a TV white space frequency band and connects to each other and wired high-speed gateways on land or islands to form a mesh network. The resulting mesh network provides wireless access services to marine users in their range. A prototype of the energy harvesting unit and the wireless communication unit are built and tested in the field. In addition, to ensure Marinet will maintain stable communications in rough sea states, an ocean-link-state prediction algorithm is designed. The algorithm predicts ocean link-states based on ocean wave movements. A realistic ocean simulator is designed and used to evaluate how such a link-state prediction algorithm can improve routing algorithm performance.
Maximizing Charging Utility with Obstacles through Fresnel Diffraction Model
Chi Lin and Feng Gao (Dalian University of Technology, China); Haipeng Dai (Nanjing University & State Key Laboratory for Novel Software Technology, China); Jiankang Ren, Lei Wang and Guowei WU (Dalian University of Technology, China)
Benefitting from the recent breakthrough of wireless power transfer technology, Wireless Rechargeable Sensor Networks (WRSNs) have become an important research topic. Most prior arts focus on system performance enhancement in an ideal environment that ignores impacts of obstacles. This contradicts with practical applications in which obstacles can be found almost anywhere and have dramatic impacts on energy transmission. In this paper, we concentrate on the problem of charging a practical WRSN in the presence of obstacles to maximize the charging utility under specific energy constraints. First, we propose a new theoretical charging model with obstacles based on Fresnel diffraction model, and conduct experiments to verify its effectiveness. Then, we propose a spatial discretization scheme to obtain a finite feasible charging position set for MC, which largely reduces computation overhead. Afterwards, we re-formalize charging utility maximization with energy constraints as a submodular function maximization problem and propose a cost-efficient algorithm with approximation ratio \(\frac{(e-1)}{2e}(1-\varepsilon)\) to solve it. Lastly, we demonstrate that our scheme outperforms other algorithms by at least \(14.8%\) in terms of charging utility through test-bed experiments and extensive simulations.
Placing Wireless Chargers with Limited Mobility
Haipeng Dai (Nanjing University & State Key Laboratory for Novel Software Technology, China); Chaofeng Wu, Xiaoyu Wang and Wanchun Dou (Nanjing University, China); Yunhuai Liu (Peking University, China)
This paper studies the problem of Placing directional wIreless chargers with Limited mObiliTy (PILOT), that is, given a budget of mobile directional wireless chargers and a set of static rechargeable devices on a 2D plane, determine deployment positions, stop positions and orientations, and portions of time for all chargers such that overall charging utility of all devices can be maximized. To the best of our knowledge, we are the first to study placement of mobile chargers. To address PILOT, we propose a (1/2−ε)-approximation algorithm. First, we present a method to approximate nonlinear charging power of chargers, and further propose an approach to construct Maximal Covered Set uniform subareas to reduce the infinite continuous search space for stop positions and orientations to a finite discrete one. Second, we present geometrical techniques to further reduce the infinite solution space for candidate deployment positions to a finite one without performance loss, and transform PILOT to a mixed integer nonlinear programming problem. Finally, we propose a linear programming based greedy algorithm to address it. Simulation and experimental results show that our algorithm outperforms five comparison algorithms by 23.11% ∼ 281.10%.
Cong Wang (Old Dominion University)
Edge Computing II
Collaborate or Separate? Distributed Service Caching in Mobile Edge Clouds
Zichuan Xu and Lizhen Zhou (Dalian University of Technology, China); Sid Chi-Kin Chau (Australian National University, Australia); Weifa Liang (The Australian National University, Australia); Qiufen Xia (Dalian University of Technology, China); Pan Zhou (Huazhong University of Science and Technology, China)
With the development of 5G technology, mobile edge computing is emerging as an enabling technique to promote Quality of Service (QoS) of network services. Service providers are caching services to the edge cloud. In this paper, we study the problem of service caching in mobile edge network under a mobile service market with multiple network service providers completing for both computation and bandwidth resources of the edge cloud. For the problem without resource sharing among network service providers, we propose an Integer Linear Program (ILP) and a randomized approximation algorithm via randomized rounding. For the problem with resource sharing, we devise a distributed and stable game-theoretical mechanism with the aim to minimize the social cost of all service providers, by introducing a novel cost sharing model and a coalition formation game. We analyze the performance of the mechanism by showing a good guaranteed gap between the solution obtained and the social optimum, i.e., Strong Price of Anarchy (SPoA).
Cooperative Service Caching and Workload Scheduling in Mobile Edge Computing
Xiao Ma (Beijing University of Posts and Telecommunications, China); Ao Zhou (Beijing University of Posts & Telecommunications, China); Shan Zhang (Beihang University, China); Shangguang Wang (Beijing University of Posts and Telecommunications, China)
Mobile edge computing is beneficial to reduce service response time and core network traffic by pushing cloud functionalities to network edge. Equipped with storage and computation capacities, edge nodes can cache services of resource-intensive and delay-sensitive mobile applications and process the corresponding computation tasks without outsourcing to central clouds. However, the heterogeneity of edge resource capacities and inconsistence of edge storage and computation capacities make it difficult to jointly fully utilize the storage and computation capacities when there is no cooperation among edge nodes. To address this issue, we consider cooperation among edge nodes and investigate cooperative service caching and workload scheduling in mobile edge computing. This problem can be formulated as a mixed integer nonlinear programming problem, which has non-polynomial computation complexity. To overcome the challenges of subproblem coupling, computation-communication tradeoff, and edge node heterogeneity, we develop an iterative algorithm called ICE. This algorithm is designed based on Gibbs sampling, which has provably near-optimal results, and the idea of water filling, which has polynomial computation complexity. Simulations are conducted and the results demonstrate that our algorithm can jointly reduce the service response time and the outsourcing traffic compared with the benchmark algorithms.
Joint Optimization of Signal Design and Resource Allocation in Wireless D2D Edge Computing
Junghoon Kim (Purdue University, USA); Taejoon Kim and Morteza Hashemi (University of Kansas, USA); Christopher G. Brinton (Purdue University & Zoomi Inc., USA); David Love (Purdue University, USA)
In this paper, we study the distributed computational capabilities of device-to-device (D2D) networks. A key characteristic of D2D networks is that their topologies are reconfigurable to cope with network demands. For distributed computing, resource management is challenging due to limited network and communication resources, leading to inter-channel interference. To overcome this, recent research has addressed the problems of network scheduling, subchannel allocation, and power allocation, but has not considered them jointly. In this paper, unlike previous mobile edge computing (MEC) approaches, we propose a joint optimization of wireless signal design and network resource allocation to maximizing energy efficiency. Given that the resulting problem is a non-convex mixed integer program (MIP) which is infeasible to solve at scale, we decompose its solution into two parts: (i) a resource allocation subproblem, which optimizes the link selection and subchannel allocations, and (ii) signal design subproblem, which optimizes the transmit beamformer, transmit power, and receive combiner. Simulation results on wireless edge topologies show that our method yields substantial improvements in energy efficiency compared with cases of no offloading and partially optimized methods, and that the efficiency scales well with the size of the network.
INFOCOM 2020 Best Paper: Reducing the Service Function Chain Backup Cost over the Edge and Cloud by a Self-adapting Scheme
Xiaojun Shang, Yaodong Huang, Zhenhua Liu and Yuanyuan Yang (Stony Brook University, USA)
The fast development of virtual network functions (VNFs) brings new opportunities to network service deployment on edge networks. For complicated services, VNFs can chain up to form service function chains (SFCs). Despite the promises, it is still not clear how to backup VNFs to minimize the cost while meeting the SFC availability requirements in an online manner. In this paper, we propose a novel self-adapting scheme named SAB to efficiently backup VNFs over both the edge and the cloud. Specifically, SAB uses both static backups and dynamic ones created on the fly to accommodate the resource limitation of edge networks. For each VNF backup, SAB determines whether to place it on the edge or the cloud, and if on the edge, which edge server to use for load balancing. SAB does not assume failure rates of VNFs but instead strives to find the sweet point between the desired availability of SFCs and the backup cost. Both theoretical performance bounds and extensive simulation results highlight that SAB provides significantly higher availability with lower backup costs compared with existing baselines.
Jiangchuan Liu (Simon Fraser University)
Jul 9 Thu, 12:30 PM — 2:00 PM EDT
IoT II
Jul 9 Thu, 2:00 PM — 3:30 PM EDT
An Adaptive Robustness Evolution Algorithm with Self-Competition for Scale-free Internet of Things
Tie Qiu (Tianjin University, China); Zilong Lu (Dalian University of Technology, China); Keqiu Li (Tianjin University, China); Guoliang Xue (Arizona State University, USA); Dapeng Oliver Wu (University of Florida, USA)
Internet of Things (IoT) includes numerous sensing nodes that constitute a large scale-free network. Optimizing the network topology for increased resistance against malicious attacks is an NP-hard problem. Heuristic algorithms can effectively handle such problems, particularly genetic algorithms. However, conventional genetic algorithms are prone to falling into premature convergence owing to the lack of global search ability caused by the loss of population diversity during evolution. Although this can be alleviated by increasing population size, additional computational overhead will be incurred. Moreover, after crossover and mutation operations, individual changes in the population are mixed, and loss of optimal individuals may occur, which will slow down the evolution of the population. Therefore, we combine the population state with the evolutionary process and propose an adaptive robustness evolution algorithm (AREA) with self-competition for scale-free IoT topologies. In AREA, the crossover and mutation operations are dynamically adjusted according to population diversity index to ensure global search ability. Moreover, a self-competition operation is used to ensure convergence. The simulation results demonstrate that AREA is more effective in improving the robustness of scale-free IoT networks than several existing methods.
Bandwidth Part and Service Differentiation in Wireless Networks
Francois Baccelli (UT Austin & The University of Texas at Austin, USA); Sanket Sanjay Kalamkar (INRIA Paris, France)
This paper presents a stochastic geometry-based model for bandwidth part (BWP) in device-to-device wireless networks. BWP allows one to adapt the bandwidth allocated to users depending on their data rate needs. Specifically, in BWP, a wide bandwidth is divided into chunks of smaller bandwidths and the number of bandwidth chunks allocated to a user depends on its needs or type. The BWP model studied here is probabilistic in that the user locations are assumed to form a realization of a Poisson point process and each user decides independently to be of a certain type with some probability. This model allows one to quantify spectrum sharing and service differentiation in this context, namely to predict what performance a user gets depending on its type and the overall performance. This is based on exact representations of key performance metrics for each user type, namely its success probability, the meta distribution of its signal-to-interference ratio, and its Shannon throughput. We also show that, surprisingly, the higher traffic variability stemming from BWP is beneficial: when comparing two networks using BWP and having the same mean signal and the same mean interference powers, the network with higher traffic variability outperforms for all these performance metrics.
Low-Overhead Joint Beam-Selection and Random-Access Schemes for Massive Internet-of-Things with Non-Uniform Channel and Load
Yihan Zou, Kwang Taik Kim, Xiaojun Lin and Mung Chiang (Purdue University, USA); Zhi Ding (University of California at Davis, USA); Risto Wichman (Aalto University School of Electrical Engineering, Finland); Jyri Hämäläinen (Aalto University, Finland)
In this paper, we study low-overhead uplink multi-access algorithms for massive Internet-of-Things (IoT) that can exploit the MIMO performance gain. Although MIMO improves system capacity, it usually requires high overhead due to Channel State Information (CSI) feedback, which is unsuitable for IoT. Recently, a Pseudo-Random Beam-Forming (PRBF) scheme was proposed to exploit the MIMO performance gain for uplink IoT access with uniform channel and load, without collecting CSI at BS. For non-uniform channel and load, new adaptive beam-selection and random-access algorithms are needed to efficiently utilize the system capacity with low overhead. While most existing algorithms for a related multi-channel scheduling problem require each node to at least know some information of the queue length of all contending nodes, we propose a new Low-overhead Multi-Channel Joint Channel-Assignment and Random-Access (L-MC-JCARA) algorithm that reduces the overhead to be independent of the number of interfering nodes. A key novelty is to let the BS estimate the total backlog in each contention group by only observing the random-access events, so that no queue-length feedback is needed from IoT devices. We prove that L-MC-JCARA can achieve at least 0.24 of the capacity region of the optimal centralized scheduler for the corresponding multi-channel system.
Online Control of Preamble Groups with Priority in Cellular IoT Networks
Jie Liu (Hanyang University, Korea (South)); Mamta Agiwal (SejongUniversity, Korea (South)); Miao Qu and Hu Jin (Hanyang University, Korea (South))
Internet of Things (IoT) is the ongoing paradigm that offers a truly connected society by integrating several heterogeneous service and applications. The major transformation lies in the fact that the use cases of connected devices would not only become at par with people oriented connections but would ultimately exceed it and by volumes. Moreover, due to diversity in applications and requirements, the connected devices would manifest different priorities. With the variety of requirements, in terms of latency, payload size, number of connections, update frequency, reliability, excreta, the Random access process (RAP) would also require modification in cellular IoT. RAP is the first step to establish connection between devices and the base station. In order to prioritize the IoT devices in RAP, we propose a novel online algorithm with dynamic preamble distribution over multiple priorities. In the algorithm, we estimate the number of activated devices in each priority based on Bayesian rule to online control the number of preambles in each priority. Subsequently, we extend our proposal to incorporate access class baring (ACB) to optimize the algorithm. Extensive simulations show the effectiveness of proposed algorithm over multiple priorities.
Tony T. Luo (Missouri University of Science and Technology)
A Randomly Accessible Lossless Compression Scheme for Time-Series Data
Rasmus Vestergaard, Daniel E. Lucani and Qi Zhang (Aarhus University, Denmark)
We detail a practical compression scheme for lossless compression of time-series data, based on the emerging concept of generalized deduplication. As data is no longer stored for just archival purposes, but needs to be continuously accessed in many applications, the scheme is designed for low-cost random access to its compressed data, avoiding decompression. With this method, an arbitrary bit of the original data can be read by accessing only a few hundred bits in the worst case, several orders of magnitude fewer than state-of-the-art compression schemes. Subsequent retrieval of bits requires visiting at most a few tens of bits. A comprehensive evaluation of the compressor on eight real-life data sets from various domains is provided. The cost of this random access capability is a loss in compression ratio compared with the state-of-the-art compression schemes BZIP2 and 7z, which can be as low as 5% depending on the data set. Compared to GZIP, the proposed scheme has a better compression ratio for most of the data sets. Our method has massive potential for applications requiring frequent random accesses, as the only existing approach with comparable random access cost is to store the data without compression.
On the Optimal Repair-Scaling Trade-off in Locally Repairable Codes
Si Wu and Zhirong Shen (The Chinese University of Hong Kong, China); Patrick Pak-Ching Lee (The Chinese University of Hong Kong, Hong Kong)
How to improve the repair performance of erasure-coded storage is a critical issue for maintaining high reliability of modern large-scale storage systems. Locally repairable codes (LRC) are one popular family of repair-efficient erasure codes that mitigate the repair bandwidth and are deployed in practice. To adapt to the changing demands of access efficiency and fault tolerance, modern storage systems also conduct frequent scaling operations on erasure-coded data. In this paper, we analyze the optimal trade-off between the repair and scaling performance of LRC in clustered storage systems. Specifically, we design placement strategies that operate along the optimal repair-scaling trade-off curve subject to the fault tolerance constraints. We prototype and evaluate our placement strategies on a LAN testbed, and show that they outperform the conventional placement scheme in repair and scaling operations.
URSAL: Ultra-Efficient, Reliable, Scalable, and Available Block Storage at Low Cost
Huiba Li (NiceX Lab, China); Yiming Zhang (NUDT & NiceX Lab, China); Haonan Wang (NiceX Lab, China); Ping Zhong (CSU, China)
In this paper we design URSAL, an HDD-only block store which provides ultra efficiency, reliability, scalability and availability at low cost. First, we unveil that parallelism is harmful to the performance of HDDs, and thus URSAL designs its storage servers which conservatively performs parallel I/O on HDDs to reduce tail latency. Second, the effectiveness of HDD journals for backup writes varies over different workloads, and thus URSAL collaboratively performs direct block I/O on raw disks and transforms small writes into sequential journal appends. Third, software failure ratios are nontrivial in large-scale block stores, and thus URSAL designs the (software) fault-tolerance mechanism for high availability. Micro benchmarks show that URSAL significantly outperforms state-of-the-art systems for providing HDD-only block storage.
Working Set Theorems for Routing in Self-Adjusting Skip List Networks
Chen Avin (Ben-Gurion University of the Negev, Israel); Iosif Salem and Stefan Schmid (University of Vienna, Austria)
This paper explores the design of dynamic network topologies which adjust to the workload they serve, in a demand-aware and online manner. Such self-adjusting networks (SANs) are enabled by emerging optical technologies, and can be found, e.g., in datacenters. SANs can be used to reduce routing costs by moving frequently communicating nodes topologically closer. However, such reconfigurations also come at a cost, introducing a need for online algorithms which strike an optimal balance between the benefits and costs of reconfigurations.
This paper presents SANs which provide, for the first time, provable working set guarantees: the routing cost between node pairs is proportional to how recently these nodes communicated last time. Our SANs rely on a distributed implementation of skip lists (which serves as the topology) and provide additional interesting properties such as local routing. Our first contribution is SASL^2, which is a randomized and sequential SAN algorithm that achieves the working set property. Then we show how SASL^2 can be converted to a distributed algorithm that handles concurrent communication requests and maintains SASL^2's properties. Finally, we present deterministic SAN algorithms.
Chunsheng Xin (Old Dominion University)
Security V
Lightweight Sybil-Resilient Multi-Robot Networks by Multipath Manipulation
Yong Huang, Wei Wang, Yiyuan Wang and Tao Jiang (Huazhong University of Science and Technology, China); Qian Zhang (Hong Kong University of Science and Technology, Hong Kong)
Wireless networking opens up many opportunities to facilitate miniaturized robots in collaborative tasks, while the openness of wireless medium exposes robots to the threats of Sybil attackers, who can break the fundamental trust assumption in robotic collaboration by forging a large number of fictitious robots. Recent advances advocate the adoption of bulky multi-antenna systems to passively obtain fine-grained physical layer signatures, rendering them unaffordable to miniaturized robots. To overcome this conundrum, this paper presents ScatterID, a lightweight system that attaches featherlight and batteryless backscatter tags to single-antenna robots to defend against Sybil attacks. Instead of passively "observing" signatures, ScatterID actively "manipulates" multipath propagation by using backscatter tags to intentionally create rich multipath features obtainable to a single-antenna robot. These features are used to construct a distinct profile to detect the real signal source, even when the attacker is mobile and power-scaling. We implement ScatterID on the iRobot Create platform and evaluate it in typical indoor and outdoor environments. The experimental results show that our system achieves a high AUROC of 0.988 and an overall accuracy of 96.4% for identity verification.
RF-Rhythm: Secure and Usable Two-Factor RFID Authentication
Chuyu Wang (Nanjing University, China); Ang Li, Jiawei Li, Dianqi Han and Yan Zhang (Arizona State University, USA); Jinhang Zuo (Carnegie Mellon University, USA); Rui Zhang (University of Delaware, USA); Lei Xie (Nanjing University, China); Yanchao Zhang (Arizona State University, USA)
Passive RFID technology is widely used in user authentication and access control. We propose RF-Rhythm, a secure and usable two-factor RFID authentication system with strong resilience to lost/stolen/cloned RFID cards. In RF-Rhythm, each legitimate user performs a sequence of taps on his/her RFID card according to a self-chosen secret melody. Such rhythmic taps can induce phase changes in the backscattered signals, which the RFID reader can detect to recover the user's tapping rhythm. In addition to verifying the RFID card's identification information as usual, the backend server compares the extracted tapping rhythm with what it acquires in the user enrollment phase. The user passes authentication checks if and only if both verifications succeed. We also propose a novel phase-hopping protocol in which the RFID reader emits Continuous Wave (CW) with random phases for extracting the user's secret tapping rhythm. Our protocol can prevent a capable adversary from extracting and then replaying a legitimate tapping rhythm from sniffed RFID signals. Comprehensive user experiments confirm the high security and usability of RF-Rhythm with false-positive and false-negative rates close to zero.
SeVI: Boosting Secure Voice Interactions with Smart Devices
Xiao Wang and Hongzi Zhu (Shanghai Jiao Tong University, China); Shan Chang (Donghua University, China); Xudong Wang (Shanghai Jiao Tong University, China)
Voice interaction, as an emerging human-computer interaction method, has gained great popularity, especially on smart devices. However, due to the open nature of voice signals, voice interaction may cause privacy leakage. In this paper, we propose a novel scheme, called \emph{SeVI}, to protect voice interaction from being deliberately or unintentionally eavesdropped. SeVI actively generate jamming noise of superior characteristics, while a user is performing voice interaction with his/her device, so that attackers cannot obtain the voice contents of the user. Meanwhile, the device leverages the prior knowledge of the generated noise to adaptively cancel received noise, even when the device usage environment is changing due to movement, so that the user voice interactions are unaffected. SeVI relies on only normal microphone and speakers and can be implemented as light-weight software. We have implemented SeVI on a commercial off-the-shelf (COTS) smartphone and conducted extensive real-world experiments. The results demonstrate that SeVI can defend both online eavesdropping attacks and offline digital signal processing (DSP) analysis attacks.
Towards Context Address for Camera-to-Human Communication
Siyuan Cao, Habiba Farrukh and He Wang (Purdue University, USA)
Although existing surveillance cameras can identify people, their utility is limited by the unavailability of any direct camera-to-human communication. This paper proposes a real-time end-to-end system to solve the problem of digitally associating people in a camera view with their smartphones, without knowing the phones' IP/MAC addresses. The key idea is using a person's unique "context features", extracted from videos, as its sole address. The context address consists of: motion features, e.g. walking velocity; and ambiance features, e.g. magnetic trend and Wi-Fi signal strength. Once receiving a broadcast packet from the camera, a user's phone accepts it only if its context address matches the phone's sensor data. We highlight three novel components in our system: (1) definition of discriminative and noise-robust ambience features; (2) effortless ambient sensing map generation; (3) a context feature selection algorithm to dynamically choose lightweight yet effective features which are encoded into a fixed-length header. Real-world and simulated experiments are conducted for different applications. Our system achieves a sending ratio of 98.5%, an acceptance precision of 93.4%, and a recall of 98.3% with ten people. We believe this is a step towards direct camera-to-human communication and will become a generic underlay to various practical applications.
Ning Zhang (Washington University in St. Louis)
Privacy II
Analysis, Modeling, and Implementation of Publisher-side Ad Request Filtering
Liang Lv (Tsinghua, China); Ke Xu (Tsinghua University, China); Haiyang Wang (University of Minnesota at Duluth, USA); Meng Shen (Beijing Institute of Technology, China); Yi Zhao (Tsinghua University, China); Minghui Li, Guanhui Geng and Zhichao Liu (Baidu, China)
Online advertising has been a great driving force for the Internet industry. To maintain a steady growth of advertising revenue, advertisement (ad) publishers have made great efforts to increase the impressions as well as the conversion rate. However, we notice that the results of these efforts are not as good as expected. In detail, to show more ads to the consumers, publishers have to waste a significant amount of server resources to process the ad requests that do not result in consumers' clicks. On the other hand, the increasing ads are also reducing the browsing experience of the consumers.
In this paper, we explore the opportunity to improve publishers' overall utility by handling a selective number of requests on ad servers. Particularly, we propose a publisher-side proactive ad request filtration solution Win2. Upon receiving an ad request, Win2 estimates the probability that the consumer will click if serving it. The ad request will be served if the clicking probability is above a dynamic threshold. Otherwise, it will be filtered to reduce the publisher's resource cost and improve consumer experience. We implement Win2 in Baidu's large-scale ad serving system and the evaluation results confirm its effectiveness.
Differentially Private Range Counting in Planar Graphs for Spatial Sensing
Abhirup Ghosh (Imperial College London, United Kingdom (Great Britain)); Jiaxin Ding (Shanghai Jiao Tong University, China); Rik Sarkar (University of Edinburgh, United Kingdom (Great Britain)); Jie Gao (Rutgers University, USA)
This paper considers the problem of privately reporting counts of events recorded by devices in different regions of the plane. Unlike previous range query methods, our approach is not limited to rectangular ranges. We devise novel hierarchical data structures to answer queries over arbitrary planar graphs. This construction relies on balanced planar separators to represent shortest paths using \(O(\log n)\) number of canonical paths. Pre-computed sums along these canonical paths allow efficient computations of 1D counting range queries along any shortest path. We make use of differential forms together with the 1D mechanism to answer 2D queries in which a range is a union of faces in the planar graph. The methods are designed such that the range queries could be answered with differential privacy guarantee on any single event, with only a poly-logarithmic error. They also allow private range queries to be performed in a distributed setup. Experimental results confirm that the methods are efficient and accurate on real data.
Message Type Identification of Binary Network Protocols using Continuous Segment Similarity
Stephan Kleber, Rens Wouter van der Heijden and Frank Kargl (Ulm University, Germany)
Protocol reverse engineering based on traffic traces infers the behavior of unknown network protocols by analyzing observable network messages. To perform correct deduction of message semantics or behavior analysis, accurate message type identification is an essential first step. However, identifying message types is particularly difficult for binary protocols, whose structural features are hidden in their densely packed data representation. In this paper, we leverage the intrinsic structural features of binary protocols and propose an accurate method for discriminating message types. Our approach uses a continuous similarity measure by comparing feature vectors where vector elements correspond to the fields in a message, rather than discrete byte values. This enables a better recognition of structural patterns, which remain hidden when only exact value matches are considered. We combine Hirschberg alignment with DBSCAN as cluster algorithm to yield a novel inference mechanism. By applying novel autoconfiguration schemes, we do not require manually configured parameters for the analysis of an unknown protocol, as required by earlier approaches. Results of our evaluations show that our approach has considerable advantages in message type identification result quality but also execution performance over previous approaches.
Search Me in the Dark: Privacy-preserving Boolean Range Query over Encrypted Spatial Data
Xiangyu Wang and Jianfeng Ma (Xidian University, China); Ximeng Liu (Fuzhou University, China); Robert Deng (Singapore Management University, Singapore); Yinbin Miao, Dan Zhu and Zhuoran Ma (Xidian University, China)
With the increasing popularity of geo-positioning technologies and mobile Internet, spatial keyword data services have attracted growing interest from both the industrial and academic communities in recent years. Meanwhile, massive amount of data is increasingly being outsourced to cloud in the encrypted form for enjoying the advantages of cloud computing while without compromising data privacy. Most existing works primarily focus on the privacy-preserving schemes for either spatial or keyword queries, and they cannot be directly applied to solve the spatial keyword query problem over encrypted data. In this paper, for the first time, we study the challenging problem of Privacy-preserving Boolean Range Query (PBRQ) over encrypted spatial databases. In particular, we propose two novel PBRQ schemes. Firstly, we present a scheme with linear search complexity based on the space-filling curve code and Symmetric-key Hidden Vector Encryption (SHVE). Then, we use tree structures to achieve faster-than-linear search complexity. Thorough security analysis shows that data security and query privacy can be guaranteed during the query process. Experimental results using real-world datasets show that the proposed schemes are efficient and feasible for practical applications, which is at least 70 X faster than existing techniques in the literature.
Yaling Yang (Virginia Tech)
Cost Minimization in Multi-Path Communication under Throughput and Maximum Delay Constraints
Qingyu Liu and Haibo Zeng (Virginia Tech, USA); Minghua Chen (The City University of Hong Kong, Hong Kong); Lingjia Liu (Virginia Tech, USA)
We consider the scenario where a sender streams a flow at a fixed rate to a receiver across a multi-hop network. Transmission over a link incurs a cost and a delay, both of which are traffic-dependent. We study the problem of cost minimization in multi-path routing under a maximum delay constraint and a throughput requirement. The problem is important for leveraging the edge-cloud computing platform to support IoT applications, which are sensitive to three critical networking metrics, i.e., cost, maximum delay, and throughput. Our problem jointly considers the three metrics, while existing ones only account for one or two of them. Solving our problem is challenging, as (i) it is NP-complete even to find a feasible solution; (ii) directly extending existing solutions admits problem-dependent maximum delay violations that can be unbounded in certain instances. We design an approximation algorithm and an efficient heuristic. For each feasible instance, our approximation algorithm must achieve the optimal cost, violating constraints by constant ratios. Our heuristic can solve a large portion of feasible instances. The obtained solution must satisfy all constraints. We further characterize a condition under which the cost of our heuristic must be within a problem-dependent gap to the optimal.
Hop-by-Hop Multipath Routing: Choosing the Right Nexthop Set
Klaus Schneider and Beichuan Zhang (University of Arizona, USA); Lotfi Benmohamed (National Institute of Standards and Technology, USA)
The Internet can be made more efficient and robust with hop-by-hop multipath routing: Each router on the path can split packets between multiple nexthops in order to 1) avoid failed links and 2) reduce traffic on congested links. Before deciding how to split traffic, one first needs to decide which nexthops to allow at each step. In this paper, we investigate the requirements and trade-offs for making this choice. Most related work chooses the viable nexthops by applying the "Downward Criterion", i.e., only adding nexthops that lead closer to the destination; or more generally by creating a Directed Acyclic Graph (DAG) for each destination. We show that a DAG's nexthop options are necessarily limited, and that, by using certain links in both directions (per destination), we can add further nexthops while still avoiding loops. Our solution LFID (Loop- Free Inport-Dependent) routing, though having a slightly higher time complexity, leads to both a higher number of and shorter potential paths than related work. LFID thus protects against a higher percentage of single and multiple failures (or congestions) and comes close to the performance of arbitrary source routing.
Joint Power Routing and Current Scheduling in Multi-Relay Magnetic MIMO WPT System
Hao Zhou, Wenxiong Hua, Jialin Deng, Xiang Cui, Xiang-Yang Li and Panlong Yang (University of Science and Technology of China, China)
Magnetic resonant coupling wireless power transfer (MRC-WPT) enables convenient device-charging. When MIMO MRC-WPT system incorporated with multiple relay components, both relay \emph{On-Off} state (\ie, \emph{power routing}) and TX current (\ie, \emph{current scheduling}) could be adjusted for improving charging efficiency and distance. Previous approaches need the collaboration and feedback from the energy receiver (RX), achieved using side-channels, \eg, Bluetooth, which is time/energy-consuming. In this work we propose, design, and implement a multi-relay MIMO MRC-WPT system, and design an almost optimum joint optimization of power routing and current scheduling method, without relying on any feedback from RX. We carefully decompose the joint optimization problem into two subproblems without affecting the overall optimality of the combined solution. For current scheduling subproblem, we propose an almost-optimum RX-feedback independent solution. For power routing subproblem, we first design a greedy algorithm with \{\frac{1}{2}\} approximation ratio, and then design a DQN based method to further improve its effectiveness. We prototype our system and evaluate it with extensive experiments. Our results demonstrate the effectiveness of the proposed algorithms. The achieved power transfer efficiency (PTE) on average is \{3.2X\}, \{1.43X\}, \{1.34X\}, and \{7.3X} over the other four strategies: without relay, with nonadjustable relays, greed based, and shortest-path based ones.
Verifying Policy-based Routing at Internet Scale
Xiaozhe Shao and Lixin Gao (University of Massachusetts at Amherst, USA)
Routing policy configuration plays a crucial role in determining the path that network traffic takes to reach a destination. Network administrators/operators typically decide the routing policy for their networks/routers independently. The paths/routes resulted from these independently configured routing policies might not necessarily meet the intent of the network administrators/operators. Even the very basic network-wide properties of the routing policies such as reachability between a pair of nodes need to be verified.
In this paper, we propose a scheme that characterizes routing-policy verification problems into a Satisfiability Modulo Theories (SMT) problems. The key idea is to formulate the SMT model in a policy-aware manner so as to reduce/eliminate the mutual dependencies between variables as much as possible. Further, we reduce the size of the generated SMT model through pruning. We implement and evaluate the policy-aware model through an out-of-box SMT solver. The experimental results show that the policy-aware model can reduce the time it takes to perform verification by as much as 100x even under a modest topology size. It takes only a few minutes to answer a query for a topology containing tens of thousands of nodes.
Jie Wu (Temple University)
CoLoRa: Enable Muti-Packet Reception in LoRa
Shuai Tong, Zhenqiang Xu and Jiliang Wang (Tsinghua University, China)
Long Range (LoRa), more generically Low-Power Wide Area Network (LPWAN), is a promising platform to connect Internet of Things. It enables low-cost low-power communication at a few kbps over upto tens of kilometers with 10-year battery lifetime. However, practical LPWAN deployments suffer from collisions given the dense deployment of devices and wide coverage area.
We propose CoLoRa, a protocol to decompose large numbers of concurrent transmissions from one collision in LoRa networks. At the heart of CoLoRa, we utilize packet time offset to disentangle collided packets. CoLoRa incorporates several novel techniques to address practical challenges. (1) We translate time offset, which is difficult to measure, to frequency features that can be reliably measured. (2) We propose a method to cancel inter-packet interference and extract accurate feature from low SNR LoRa signal. (3) We address frequency shift incurred by CFO and time offset for LoRa decoding. We implement CoLoRa on USRP N210 and evaluate its performance in both indoor and outdoor networks. CoLoRa is implemented in software at the base station and it can work for COTS LoRa nodes. The evaluation results show that CoLoRa improves the network throughput by 3.4\(\times\) compared with Choir and by 14\(\times\) compared with LoRaWAN.
DyLoRa: Towards Energy Efficient Dynamic LoRa Transmission Control
Yinghui Li, Jing Yang and Jiliang Wang (Tsinghua University, China)
LoRa has been shown as a promising platform to provide low-power long-range communication with a low data rate for connecting IoT devices. LoRa can adjust transmission parameters including transmission power and spreading factor, leading to different noise resilience, transmission range and energy consumption. Existing LoRa transmission control approaches can hardly achieve optimal energy efficiency. This leads to a gap to the optimal solution. In this paper, we propose DyLoRa, a dynamic LoRa transmission control system to optimize energy efficiency. The main challenge is very limited data rate of LoRa, making it time- and energy-consuming to obtain link statistics. We show that the demodulation symbol error rate can be stable and thus derive the model for symbol error rate. We further derive the energy efficiency model based on the symbol error model. DyLoRa can derive parameter settings for optimal energy efficiency even from a single packet. We also adapt the model to different hardware to compensate the deviation. We implement DyLoRa based on LoRaWAN 1.0.2 with SX1276 LoRa node and SX1301 LoRa gateway. We evaluate DyLoRa with 11 real deployed nodes. The evaluation results show that DyLoRa improves the energy efficiency by up to 103% compared with the state-of-the-art LoRaWAN ADR.
LiteNap: Downclocking LoRa Reception
Xianjin Xia and Yuanqing Zheng (The Hong Kong Polytechnic University, Hong Kong); Tao Gu (RMIT University, Australia)
This paper presents LiteNap which improves the energy efficiency of LoRa by enabling LoRa nodes to operate in a downclocked `light sleep' mode for packet reception. A fundamental limit that prevents radio downclocking is the Nyquist sampling theorem which demands the clock-rate being at least twice the bandwidth of LoRa chirps. Our study reveals under-sampled LoRa chirps suffer frequency aliasing and cause ambiguity in symbol demodulation. LiteNap addresses the problem by leveraging an empirical observation that the hardware of LoRa radio can cause phase jitters on modulated chirps, which result in frequency leakage in the time domain. The timing information of phase jitters and frequency leakages can serve as physical fingerprints to uniquely identify modulated chirps. We propose a scheme to reliably extract the fingerprints from under-sampled chirps and resolve ambiguities in symbol demodulation. We implement LiteNap on a software defined radio platform and conduct trace-driven evaluation. Experiment results show that LiteNap can downclock LoRa nodes to sub-Nyquist rates for energy savings (\eg, 1/8 of Nyquist rate), without substantially affecting packet reception performance (\eg, $>$95% packet reception rate).
Online Concurrent Transmissions at LoRa Gateway
Zhe Wang, Linghe Kong and Kangjie Xu (Shanghai Jiao Tong University, China); Liang He (University of Colorado Denver, USA); Kaishun Wu (Shenzhen University, China); Guihai Chen (Shanghai Jiao Tong University, China)
Long Range (LoRa) communication, thanks to its wide network coverage and low energy operation, has attracted extensive attentions from both academia and industry. However, existing LoRa-based Wide Area Network (LoRaWAN) suffers from severe inter-network interference, due to the following two reasons. First, the densely-deployed LoRa ends usually share the same network configurations, such as spreading factor (SF), bandwidth (BW) and carrier frequency (CF), causing interference when operating in the vicinity. Second, LoRa is tailored for low-power devices, which excludes LoRaWAN from using the listen-before-talk (LBT) mechanisms- LoRaWAN has to use the duty-cycled medium access policy and thus being incapable of channel sensing or collision avoidance. To mitigate the inter-network interference, we propose a novel solution achieving the online concurrent transmissions at LoRa gateway, called OCT, which can be easily deployed at LoRa gateway. We have implemented/evaluated OCT on USRP platform and commodity LoRa ends, showing OCT achieves: (i) >90% packet reception rate (PRR), (ii) 3× 10−3 bit error rate (BER), (iii) 2x and 3x throughput in the scenarios of two- and three- packet collisions respectively, and (iv) reducing 67% latency compared with state-of-the-art.
Swarun Kumar (Carnegie Mellon University)
SDN IV
HiFi: Hybrid Rule Placement for Fine-Grained Flow Management in SDNs
Gongming Zhao and Hongli Xu (University of Science and Technology of China, China); Jingyuan Fan (State University of New York at Buffalo, USA); Liusheng Huang (University of Science and Technology of China, China); Chunming Qiao (University at Buffalo, USA)
Fine-grained flow management is very useful in many practical applications, e.g., resource allocation, anomaly detection and traffic engineering. However, it is difficult to provide fine-grained management for a large number of flows in SDNs due to switches' limited flow table capacity. While using wildcard rules can reduce the number of flow entries needed, it cannot fully ensure fine-grained management for all the flows without degrading application performance. In this paper, we design and implement HiFi, a system that achieves fine-grained management with a minimal number of flow entries. HiFi achieves the goal by taking a two-step approach: wildcard entry installment and application-specific exact-match entry installment. How to optimally install wildcard and exact-match flow entries, however, are intractable. Therefore, we design approximation algorithms with bounded factors to solve these problems. We consider how to achieve network-wide load balancing via fine-grained flow management as a case study. Both experimental and simulation results show that HiFi can reduce the number of required flow entries by about 45%-69% and reduce the control overhead by 28%-50% compared with the state-of-the-art approaches for achieving fine-grained flow management.
Homa: An Efficient Topology and Route Management Approach in SD-WAN Overlays
Diman Zad Tootaghaj and Faraz Ahmed (Hewlett Packard Labs, USA); Puneet Sharma (Hewlett Packard Labs & HP Labs, USA); Mihalis Yannakakis (Columbia University, USA)
This paper presents an efficient topology and route management approach in Software-Defined Wide Area Networks (SD-WAN). Traditional WANs suffer from low utilization and lack of global view of the network. Therefore, during failures, topology/service/traffic changes, or new policy requirements, the system does not always converge to the global optimal state. Using Software Defined Networking architectures in WANs provides the opportunity to design WANs with higher fault tolerance, scalability, and manageability. We exploit the correlation matrix derived from monitoring system between the virtual links to infer the underlying route topology and propose a route update approach that minimizes the total route update cost on all flows. We formulate the problem as an integer linear programming optimization problem and provide a centralized control approach that minimizes the total cost while satisfying the quality of service (QoS) on all flows. Experimental results on real network topologies demonstrate the effectiveness of the proposed approach in terms of disruption cost and average disrupted flows.
Incremental Server Deployment for Scalable NFV-enabled Networks
Jianchun Liu, Hongli Xu and Gongming Zhao (University of Science and Technology of China, China); Chen Qian (University of California at Santa Cruz, USA); Xingpeng Fan and Liusheng Huang (University of Science and Technology of China, China)
To construct a new NFV-enabled network, there are two critical requirements: minimizing server deployment cost and satisfying switch resource constraints. However, prior work mostly focuses on the server deployment cost, while ignoring the switch resource constraints (e.g., switch's flow-table size). It thus results in a large number of rules on switches and leads to massive control overhead. To address this challenge, we propose an incremental server deployment (INSD) problem for construction of scalable NFV-enabled networks. We prove that the INSD problem is NP-Hard, and there is no polynomial-time algorithm with approximation ratio of (1−\epsilon)·ln m, where \epsilon is an arbitrarily small value and m is the number of requests in the network. We then present an efficient algorithm with an approximation ratio of 2·H(q·p), where q is the number of VNF's categories and p is the maximum number of requests through a switch. We evaluate the performance of our algorithm with experiments on physical platform (Pica8), Open vSwitches (OVSes), and large-scale simulations. Both experiment and simulation results show high scalability of the proposed algorithm. For example, our solution can reduce the control and rule overhead by about 88% with about 5% additional server deployment, compared with the existing solutions.
Network Slicing in Heterogeneous Software-defined RANs
Qiaofeng Qin (Yale University, USA); Nakjung Choi (Nokia & Bell Labs, USA); Muntasir Raihan Rahman (Microsoft, USA); Marina Thottan (Bell Labs, USA); Leandros Tassiulas (Yale University, USA)
5G technologies promise to revolutionize mobile networks and push them to the limits of resource utilization. Besides better capacity, we also need better resource management via virtualization. End-to-end network slicing not only involves the core but also the Radio Access Network (RAN) which makes this a challenging problem. This is because multiple alternative radio access technologies exist (e.~g.~,LTE, WLAN, and WiMAX), and there is no unifying abstraction to compare and compose from diverse technologies. In addition, existing work assumes the all RAN infrastructure exists under a single administrative domain. Software-Defined Radio Access Network (SD-RAN) offers programmability that facilitates a unified abstraction for resource sharing and composition across multiple providers harnessing different technology stacks. In this paper we propose a new architecture for heterogeneous RAN slicing across multiple providers. A central component in our architecture is a service orchestrator that interacts with multiple network providers and service providers to negotiate resource allocations that are jointly optimal. We propose a double auction mechanism that captures the interaction among selfish parties and guarantees convergence to optimal social welfare in finite time. We then demonstrate the feasibility of our proposed system by using open source SD-RAN systems such as EmPOWER (WiFi) and FlexRAN (LTE).
Tamer Nadeem (Virginia Commonwealth University)
Session 10-A
Localization II
A Structured Bidirectional LSTM Deep Learning Method For 3D Terahertz Indoor Localization
Shukai Fan, Yongzhi Wu and Chong Han (Shanghai Jiao Tong University, China); Xudong Wang (Shanghai Jiao Tong University & Teranovi Technologies, Inc., China)
High-accuracy localization technology has gained increasing attention in the diverse applications gesture and motion control, among others. Due to the shadowing, multi-path fading, blockage effects in indoor propagation, 0.1m-level precise localization is still challenging. Promising for 6G wireless communications, the Terahertz (THz) spectrum provides ultra-broad bandwidth for indoor applications. Applying to indoor localization, the channel state information (CSI) of THz wireless signals, including angle of arrival (AoA), received power, and delay, has high resolution, which can be explored for positioning. In this paper, a Structured Bidirectional Long Short-term Memory (SBi-LSTM) recurrent neural network (RNN) architecture is proposed to solve the CSI-based three-dimensional (3D) THz indoor localization problem with significantly improved accuracy. First, the features of individual multi-path ray are analyzed in the Bi-LSTM network at the base level. Furthermore, the upper level residual network (ResNet) of the constructed SBi-LSTM network extracts for the geometric information for localization. Simulation results validate the convergence of our SBi-LSTM method and the robustness against indoor non-line-of-sight (NLoS) blockage. Specifically, the localization accuracy in the metric of mean distance error is within 0.27m under the NLoS environment, which demonstrates over 60% enhancement over the state-of-the-art techniques.
MagB: Repurposing the Magnetometer for Fine-Grained Localization of IoT Devices
Paramasiven Appavoo and Mun Choon Chan (National University of Singapore, Singapore); Bhojan Anand (National University of Singapore & Anuflora International, Singapore)
Interest in fine-grained indoor localization remains high and various approaches including those based on Radio Frequency (RF), ultrasound, acoustic, magnetic field and light have been proposed. However, while the achieved accuracy may be high, many of these approaches do not work well in environments with lots of obstructions.In this paper, we present MagB, a decimeter-level localization scheme that uses the magnetometer commonly available on existing IoT devices. MagB estimates the bearing of beacons by detecting changes in the magnetic field strength. Localization is then performed based on Angle-of-Arrival (AoA) information. We have built a prototype of MagB using low cost, off-the-shelf components. Our evaluation shows that MagB is able to achieve a median accuracy of about 13cm and can localize devices even when they are placed in steel filing cabinet or inside the casing of a running PC.
mmTrack: Passive Multi-Person Localization Using Commodity Millimeter Wave Radio
Chenshu Wu, Feng Zhang, Beibei Wang and K. J. Ray Liu (University of Maryland, USA)
Passive human localization and tracking using RF signals has been studied for over a decade. Most of existing solutions, however, can only track a single moving subject due to the coarse multipath resolvability limited by bandwidth and antenna number. In this paper, we breakdown the limitations by leveraging the emerging 60GHz 802.11ad radios. We present mmTrack, the first system that passively localizes and tracks multiple users simultaneously using a single commodity 802.11ad radio. The design of mmTrack consists of three key components. First, we significantly improve the spatial resolution, limited by the small aperture of the compact 60GHz array, by performing digital beamforming over all receive antennas. Second, we propose a novel multi-target detection approach that tackles the near-far-effect and measurement noises. Finally, we devise a robust clustering technique to accurately recognize multiple targets and estimate the respective locations, from which their individual trajectories are further derived by a continuous tracking algorithm. We implement mmTrack on commodity 802.11ad devices and evaluate it in indoor environments. Experiments demonstrate that mmTrack counts multiple users precisely with an error <1 person for 97.8% of the time and achieves a respective median location error of 9.9cm and 19.7cm for dynamic and static targets.
Selection of Sensors for Efficient Transmitter Localization
Arani Bhattacharya (KTH Royal Institute of Technology, Sweden); Caitao Zhan, Himanshu Gupta, Samir R. Das and Petar M. Djurić (Stony Brook University, USA)
We address the problem of localizing an (illegal) transmitter using a distributed set of sensors. Our focus is on developing techniques that perform the transmitter localization in an efficient manner. Localization of illegal transmitters is an important problem which arises in many important applications. Localization of transmitters is generally done based on observations from a deployed set of sensors with limited resources, thus it is imperative to design techniques that minimize the sensors' energy resources.
In this paper, we design greedy approximation algorithms for the optimization problem of selecting a given number of sensors in order to maximize an appropriately defined objective function of localization accuracy. The obvious greedy algorithm delivers a constant-factor approximation only for the special case of two hypotheses (potential locations). For the general case of multiple hypotheses, we design a greedy algorithm based on an appropriate auxiliary objective function---and show that it delivers a provably approximate solution for the general case. We evaluate our techniques over multiple simulation platforms, including an indoor as well as an outdoor testbed, and demonstrate the effectiveness of our designed techniques---our techniques easily outperform prior and other approaches by up to 50-60% in large-scale simulations.
Session 10-B
Adaptive Algorithms
Automatically and Adaptively Identifying Severe Alerts for Online Service Systems
Nengwen Zhao (Tsinghua University, China); Panshi Jin, Lixin Wang and Xiaoqin Yang (China Construction Bank, China); Rong Liu (Stevens Institute of Technology, USA); Wenchi Zhang and Kaixin Sui (Bizseer Technology Co., Ltd., China); Dan Pei (Tsinghua University, China)
In large-scale online service system, to enhance the quality of services, engineers need to collect various monitoring data and write many rules to trigger alerts. However, the number of alerts is way more than what on-call engineers can investigate. Thus, in practice, alerts are classified into several priority levels using manual rules, and on-call engineers primarily focus on handling the alerts with the highest priority level (i.e., severe alerts). Unfortunately, due to the complex and dynamic nature of the online services, this rule-based approach results in missed severe alerts or wasted troubleshooting time on non-severe alerts. In this paper, we propose AlertRank, an automatic and adaptive framework for identifying severe alerts. Specifically, AlertRank extracts a set of powerful and interpretable features (textual and temporal alert features, univariate and multivariate anomaly features for monitoring metrics), adopts an XGBoost ranking algorithm to identify the severe alerts out of all incoming alerts, and uses novel methods to obtain labels for both training and testing. Experiments on the datasets from a top global commercial bank demonstrate that AlertRank is effective and achieves the F1-score of 0.89 on average, outperforming all baselines, and reduces the number of alerts to be investigated by more than 80%.
On the impact of accurate radio link modeling on the performance of WirelessHART control networks
Yuriy Zacchia Lun (IMT School for Advanced Studies Lucca, Italy); Claudia Rinaldi, Amal Alrish and Alessandro D'Innocenzo (University of L'Aquila, Italy); Fortunato Santucci (University of l'Aquila, Italy)
The challenges in analysis and co-design of wireless networked control systems are well highlighted by considering wireless industrial control protocols. In this perspective, this paper addresses the modeling and design challenge by focusing on WirelessHART, which is a networking protocol stack widely adopted for wireless industrial automation. Specifically, we first develop and validate a Markov channel model that abstracts the WirelessHART radio link subject to channel impairments and interference. The link quality metrics introduced in the theoretical framework are validated in order to enable the accurate representation of the average and extreme behavior of the radio link. By adopting these metrics, it is straightforward to handle a consistent finite-state abstraction. On the basis of such a model, we then derive a stationary Markov jump linear system model that captures the dynamics of a control loop closed over the radio link. Subsequently, we show that our modeling framework is able to discover and manage the challenging subtleties arising from bursty behavior. A relevant theoretical outcome consists in designing a controller that guarantees stability and improves control performance of the closed-loop system, where other approaches based on a simplified channel model fail.
Online Spread Estimation with Non-duplicate Sampling
Yu-e Sun and He Huang (Soochow University, China); Chaoyi Ma and Shigang Chen (University of Florida, USA); Yang Du (University of Science and Technology of China, China); Qingjun Xiao (SouthEast University of China, China)
Per-flow spread measurement in high-speed networks has many practical applications. Most prior work is sketch-based, focusing on reducing their space requirements to fit in on-chip memory. This design allows measurement to be performed at the line rate, but it has to accept tradeoff with expensive computation for spread queries (unsuitable for online operations) and large estimation errors for small flows. This paper complements the prior art with a new spread estimator design based on an on-chip/off-chip model which is common in practice. The new estimator supports online queries in real-time and produces spread estimation with better accuracy. By storing traffic data in off-chip memory, our design faces a key technical challenge of efficient non-duplicate sampling. We propose a two-stage solution with on-chip/off-chip data structures and algorithms, which are not only efficient but also highly configurable for a variety of probabilistic performance guarantees. We implemented our estimator in hardware using FPGA. The experiment results based on real traces show that our estimator increases the query throughput by around three orders of magnitude, reduces the mean relative (absolute) error by around two (one) orders of magnitude, and saves 84% on-chip memory for probabilistic guarantee in flow classification compared to the prior art.
Session 10-C
Security VI
ADA: Adaptive Deep Log Anomaly Detector
Yali Yuan (University of Goettingen, Germany); Sripriya Srikant Adhatarao (Uni Goettingen, Germany); Mingkai Lin (Nanjing University, China); Yachao Yuan (University of Goettingen, Germany); Zheli Liu (Nankai University, China); Xiaoming Fu (University of Goettingen, Germany)
Large private and government networks are often subjected to attacks like data extrusion and service disruption. Existing anomaly detection systems use offline supervised learning and hence cannot detect anomalies in real-time. Even though unsupervised algorithms are increasingly used, they cannot readily adapt to newer threats. Moreover, such systems also suffer from high cost of storage and require extensive computational resources in addition to employing experts for labeling. In this paper, we propose ADA: Adaptive Deep Log Anomaly Detector, an unsupervised online deep neural network framework that leverages LSTM networks. We regularly adapt to new log patterns to ensure accurate anomaly detection. We also design an adaptive model selection strategy to choose parito-optimal configurations and thereby utilize resources efficiently. Further, we propose a dynamic threshold algorithm to dictate the optimal threshold based on recently detected events to improve the detection accuracy. We then use the predictions to guide storage of abnormal data and effectively reduce the overall storage cost. We compare ADA with the state-of-the-art using the Los Alamos National Laboratory cyber security dataset and show that ADA accurately detects anomalies with high F1-score ~95% and it is 97 times faster than existing approaches and incurs very low storage cost.
DFD: Adversarial Learning-based Approach to Defend Against Website Fingerprinting
Ahmed Abusnaina (University of Central Florida, USA); RhongHo Jang (Inha University, Korea (South) & University of Central Florida, USA); Aminollah Khormali (University of Central Florida, USA); Daehun Nyang (Ewha Womans University & TheVaulters Company, Korea (South)); David Mohaisen (University of Central Florida, USA)
Website Fingerprinting (WF) attacks allow an adversary to recognize the visited websites by exploiting and analyzing network traffic patterns. The success rate of WF attacks is highly dependent on the set of network traffic features used to build the fingerprint. Such features can be used to launch a machine/deep learning-based WF attack which can break the existing state-of-the-art defense mechanisms. In this paper, we use an adversarial learning technique to present a novel defense mechanism, Deep Fingerprinting Defender (DFD), against deep learning-based WF attacks. The DFD aims to break the inherent pattern of the Tor users' online activity through the careful injection of dummy patterns in specific locations in a packet flow. We designed two configurations for dummy message injection, the one-way injection and two-way injection. We conducted extensive experiments to evaluate the performance of DFD over both closed-world and open-world settings. Our results demonstrate that these two configurations can successfully break the Tor network traffic pattern and achieve a high evasion rate of 86.02% over one-way client-side injection rate of 100%, a promising improvement in comparison with state-of-the-art adversarial trace's evasion rate of 60%. Moreover, DFD outperforms its state-of-the-art alternatives by requiring lower bandwidth overhead; 14.26% using client-side injection.
Threats of Adversarial Attacks in DNN-Based Modulation Recognition
Yun Lin, Haojun Zhao and Ya Tu (Harbin Engineering University, China); Shiwen Mao (Auburn University, USA); Zheng Dou (Harbin Engineering University, China)
With the emergence of the information age, mobile data has become more random, heterogeneous and massive. Thanks to its many advantages, deep learning is increasingly applied in communication fields such as modulation recognition. However, recent studies show that the deep neural networks (DNN) is vulnerable to adversarial examples, where subtle perturbations deliberately designed by an attacker can fool a classifier model into making mistakes. From the perspective of an attacker, this study adds elaborate adversarial examples to the modulation signal, and explores the threats and impacts of adversarial attacks on the DNN-based modulation recognition in different environments. The results show that regardless of a white-box or a black-box model, the adversarial attack can reduce the accuracy of the target model. Among them, the performance of the iterative attack is superior to the one-step attack in most scenarios. In order to ensure the invisibility of the attack (the waveform being consistent before and after the perturbations), an appropriate perturbation level is found without losing the attack effect. Finally, it is attested that the signal confidence level is inversely proportional to the attack success rate, and several groups of signals with high robustness are obtained.
ZeroWall: Detecting Zero-Day Web Attacks through Encoder-Decoder Recurrent Neural Networks
Ruming Tang, Zheng Yang, Zeyan Li and Weibin Meng (Tsinghua University, China); Haixin Wang (University of Science and Technology Beijing, China); Qi Li (Tsinghua University, China); Yongqian Sun (Nankai University, China); Dan Pei (Tsinghua University, China); Tao Wei (Baidu USA LLC, USA); Yanfei Xu and Yan Liu (Baidu, Inc, China)
Zero-day Web attacks are arguably the most serious threats to Web security, but are very challenging to detect because they are not seen or known previously and thus cannot be detected by widely-deployed signature-based Web Application Firewalls (WAFs). This paper proposes ZeroWall, an unsupervised approach, which works with an existing WAF in pipeline, to effectively detecting zero-day Web attacks. Using historical web requests allowed by an existing signature-based WAF, a vast majority of which are assumed to be benign, ZeroWall trains a self-translation machine using an encoder-decoder recurrent neural network to capture the syntax and semantic patterns of benign requests. In real-time detection, a zero-day attack request (which the WAF fails to detect), not understood well by self-translation machine, cannot be translated back to its original request by the machine, thus is declared as an attack. In our evaluation using 8 real-world traces of 1.4 billion Web requests, ZeroWall successfully detects real zero-day attacks missed by existing WAFs and achieves high F1-scores over 0.98, which significantly outperforms all baseline approaches.
Shucheng Yu (Stevens Institute of Technology)
Session 10-D
Network Intelligence VI
An Incentive Mechanism Design for Efficient Edge Learning by Deep Reinforcement Learning Approach
Yufeng Zhan (The Hong Kong Polytechnic University, China); Jiang Zhang (University of Southern California, USA)
Emerging technologies and applications have generated large amounts of data at the network edge. Due to bandwidth, storage, and privacy concerns, it is often impractical to move the collected data to the cloud. With the rapid development of edge computing and distributed machine learning (ML), edge-based ML called federated learning has emerged to overcome the shortcomings of cloud-based ML. Existing works mainly focus on designing efficient learning algorithms, few works focus on designing the incentive mechanisms with heterogeneous edge nodes (EN) and uncertainty of network bandwidth. The incentive mechanisms affect various tradeoffs: (i) between computation and communication latencies, and thus (ii) between the edge learning time and payment consumption. We fill this gap by designing an incentive mechanism that captures the tradeoff between latencies and payment. Due to the network dynamics and privacy protection, we propose a deep reinforcement learning-based (DRL-based) solution that can automatically learn the best pricing strategy. To the best of our knowledge, this is the first work that applies the advances of DRL to design the incentive mechanism for edge learning. We evaluate the performance of the incentive mechanism using trace-driven experiments. The results demonstrate the superiority of our proposed approach as compared with the baselines.
Intelligent Video Caching at Network Edge: A Multi-Agent Deep Reinforcement Learning Approach
Fangxin Wang (Simon Fraser University, Canada); Feng Wang (University of Mississippi, USA); Jiangchuan Liu and Ryan Shea (Simon Fraser University, Canada); Lifeng Sun (Tsinghua University, China)
Today's explosively growing Internet video traffics and viewers' ever-increasing quality of experience (QoE) demands for video streaming bring tremendous pressures to the backbone network. Mobile edge caching provides a promising alternative by pushing video content closer at the network edge rather than the remote CDN servers. However, our large-scale trace analysis shows that edge caching environment is much more complicated with massively dynamic and diverse request patterns, which renders that existing rule-based and model-based caching solutions may not well fit such complicated edge environments. Our trace analysis also shows that the request similarity among neighboring edges can be highly dynamic and diverse, which can easily compromise the benefits from traditional cooperative caching mostly designed based on CDN environment. In this paper, we propose \texttt{MacoCache}, an intelligent edge caching framework that is carefully designed to afford the massively diversified and distributed caching environment to minimize both content access latency and traffic cost. Specifically, MacoCache leverages a multi-agent deep reinforcement learning (MADRL) based solution, where each edge is able to adaptively learn its own best policy in conjunction with other edges for intelligent caching. The real trace-driven evaluation further demonstrate its superiority.
Network-Aware Optimization of Distributed Learning for Fog Computing
Yuwei Tu (Zoomi Inc., USA); Yichen Ruan and Satyavrat Wagle (Carnegie Mellon University, USA); Christopher G. Brinton (Purdue University & Zoomi Inc., USA); Carlee Joe-Wong (Carnegie Mellon University, USA)
Fog computing holds promise of scaling machine learning tasks to network-generated datasets by distributing processing across connected devices. Key challenges to doing so, however, are heterogeneity in devices' compute resources and topology constraints on which devices can communicate. We are the first to address these challenges by developing a network-aware distributed learning optimization methodology where devices process data for a task locally and send their learnt parameters to a server for aggregation at certain time intervals. In particular, different from traditional federated learning frameworks, our method enables devices to offload their data processing tasks, with these decisions determined through a convex data transfer optimization problem which trades off costs associated with devices processing, offloading, or discarding data points. Using this model, we analytically characterize the optimal data transfer solution for different fog network topologies, showing for example that the value of a device offloading is approximately linear in the range of computing costs in the network. Our subsequent experiments on both synthetic and real-world datasets we collect confirm that our algorithms are able to improve network resource utilization substantially without sacrificing the accuracy of the learned model.
SurveilEdge: Real-time Video Query based on Collaborative Cloud-Edge Deep Learning
Shibo Wang and Shusen Yang (Xi'an Jiaotong University, China); Cong Zhao (Imperial College London, United Kingdom (Great Britain))
Large volumes of surveillance videos are continuously generated by ubiquitous cameras, which brings the demand of real-time queries that return the video frames with objects of certain classes with low latency and low bandwidth cost. We present SurveilEdge, a collaborative Cloud-Edge system for real-time queries of large-scale surveillance video streams. Specifically, a (convolution neural network) CNN training scheme based on the fine-tuning the technique, and an intelligent task allocator with the task scheduling and parameter adjustment algorithm are developed. We implement SurveilEdge on a prototype with multiple edge devices and a public cloud, and conduct extensive experiments based on real-world surveillance video datasets. Experimental results demonstrate that the Cloud-edge collaborative SurveilEdge manages to reduce the average latency and bandwidth cost by up to 81.7% and 86.2% (100% and 200%) respectively, traditional cloud-based (edge-based) solutions. Meanwhile, SurveilEdge balances the computing load effectively and significantly reduces the variance of per frame query latencies.
Session 10-E
Enabling Live Migration of Containerized Applications Across Clouds
Thad Benjaponpitak, Meatasit Karakate and Kunwadee Sripanidkulchai (Chulalongkorn University, Thailand)
Live migration, the process of transferring a running application to a different physical location with minimal downtime, can provide many benefits desired by modern cloud-based systems. Furthermore, live migration across data centers between different cloud providers enables a new level of freedom for cloud users to move their workloads around for performance or business objectives without having to be tied down to any single provider. While this vision is not new, to-date, there are few solutions and proof-of-concepts that provide this capability. As containerized applications are gaining popularity, we focus on the design and implementation of live migration of containers across cloud providers. A proof of concept live migration service between Amazon Web Services, Google Cloud Platform, and Microsoft Azure is evaluated using a common web-based workload. Our implemented service is automated, includes pre-copy optimization, connection holding, traffic redirection, and support for multiple interdependent containers, applicable to a broad range of application use cases.
Online Placement of Virtual Machines with Prior Data
David Naori (Technion, Israel); Danny Raz (Nokia and Technion, Israel)
The cloud computing market has a wide variety of customers that deploy various applications from deep learning to classical web services. Each application may have different computing, memory and networking requirements, and each customer may be willing to pay a different price for the service. When a request for a VM arrives, the cloud provider decides online whether or not to serve it and which resources to allocate for this purpose. The goal is to maximize the revenue while obeying the constraints imposed by the limited physical infrastructure and its layout.
Although requests arrive online, cloud providers are not entirely in the dark; historical data is readily available, and may contain strong indications regarding future requests. Thus, standard theoretical models that assume the online player has no prior knowledge are inadequate. In this paper we adopt a recent theoretical model for the design and analysis of online algorithms that allows taking such historical data into account. We develop new competitive online algorithms for multidimensional resource allocation and analyze their guaranteed performance. Moreover, using extensive simulation over real data from Google and AWS, we show that our new approach yields much higher revenue to cloud providers than currently used heuristics.
PAM & PAL: Policy-Aware Virtual Machine Migration and Placement in Dynamic Cloud Data Centers
Hugo Flores and Vincent Tran (CSUDH, USA); Bin Tang (California State University Dominguez Hills, USA)
In this paper we focus on policy-aware data centers (PADCs), wherein virtual machine (VM) traffic traverses a sequence of middleboxes (MBs) for security and performance purposes, and study two new VM migration and placement problems. We first study PAM: \underline{p}olicy-\underline{a}ware virtual machine \underline{m}igration. Given an existing VM placement in the PADC, a data center policy that communicating VM pairs must satisfy, and dynamic traffic rates among VMs, the goal of PAM is to migrate VMs in order to minimize the total cost of migration and communication of all the VM pairs. We design optimal, approximation, and heuristic {\em policy-aware} VM migration algorithms. Then we study PAL: \underline{p}olicy-\underline{a}ware virtual machine p\underline{l}acement, and show that it is a special case of PAM. Further, by exploiting unique structures of VM placement in PADCs, we design new PAL algorithms that are more time efficient than while achieving the optimality and approximality as those in PAM. Our experiments show that i) VM migrations reduces total communication costs of VM pairs by 30%, ii) our PAM algorithms outperform {\em only} existing policy-aware VM migration scheme by 20-30%, and iii) our PAL algorithms outperform state-of-the-art VM placement algorithm that is oblivious to data center policies by 40-50%.
SplitCast: Optimizing Multicast Flows in Reconfigurable Datacenter Networks
Long Luo (University of Electronic Science and Technology of China, China); Klaus-Tycho Foerster and Stefan Schmid (University of Vienna, Austria); Hongfang Yu (University of Electronic Science and Technology of China, China)
Many modern cloud applications frequently generate multicast traffic, which is becoming one of the primary communication patterns in datacenters. Emerging reconfigurable datacenter technologies enable interesting new opportunities to support such multicast traffic in the physical layer: novel circuit switches offer high-performance inter-rack multicast capabilities. However, not much is known today about the algorithmic challenges introduced by this new technology.
This paper presents SplitCast, a preemptive multicast scheduling approach that fully exploits emerging physical-layer multicast capabilities to reduce flow times. SplitCast dynamically reconfigures the circuit switches to adapt to the multicast traffic, accounting for reconfiguration delays. In particular, SplitCast relies on simple single-hop routing and leverages flexibilities by supporting splittable multicast so that a transfer can already be delivered to just a subset of receivers when the circuit capacity is insufficient. Our evaluation results show that SplitCast can reduce flow times significantly compared to state-of-the-art solutions.
Sangtae Ha (University of Colorado Boulder)
Session 10-F
WiFi and Wireless Sensing
Joint Access Point Placement and Power-Channel-Resource-Unit Assignment for 802.11ax-Based Dense WiFi with QoS Requirements
Shuwei Qiu, Xiaowen Chu, Yiu-Wing Leung and Joseph Kee-Yin Ng (Hong Kong Baptist University, Hong Kong)
IEEE 802.11ax is a promising standard for the next-generation WiFi network, which uses orthogonal frequency division multiple access (OFDMA) to segregate the wireless spectrum into time-frequency resource units (RUs). In this paper, we aim at designing an 802.11ax-based dense WiFi network to provide WiFi services to a large number of users within a given area with the following objectives: (1) to minimize the number of access points (APs); (2) to fulfil the user's throughput requirement; and (3) to be resistant to AP failures. We formulate the above into a joint AP placement and power-channel-RU assignment optimization problem, which is NP-hard. To tackle this problem, we first derive an analytical model to estimate each user's throughput under the mechanism of OFDMA and a widely used interference model. We then design a heuristic algorithm to find high-quality solutions with polynomial time complexity. Simulation results show that our algorithm can achieve the optimal performance for a small area of 50 x 50 m2. For a larger area of 100 x 80 m2 where we cannot find the optimal solution through an exhaustive search, our algorithm can reduce the number of APs by 32 - 55% as compared to the random and Greedy solutions.
Machine Learning-based Spoofing Attack Detection in MmWave 60GHz IEEE 802.11ad Networks
Ning Wang and Long Jiao (George Mason University, USA); Pu Wang (Xidian University, China); Weiwei Li (Hebei University of Engineering, China & George Mason University, USA); Kai Zeng (George Mason University, USA)
Spoofing attacks pose a serious threat to wireless communications. Exploiting physical-layer features to counter spoofing attacks is a promising technology. Although various physical-layer spoofing attack detection (PL-SAD) techniques have been proposed for conventional 802.11 networks in the sub-6GHz band, the study of PL-SAD for 802.11ad networks in 5G millimeter wave (mmWave) 60GHz band is largely open. In this paper, we propose a unique physical layer feature in IEEE 802.11ad networks, i.e., the signal-to-noise-ratio (SNR) traces in the sector level sweep (SLS) of beam pattern selections, to achieve PL-SAD. The proposed schemes are based on the observation that each 802.11ad device presents distinctive beam patterns in the beam sweeping process, which results in distinguishable SNR traces. Based on these observations, we present a novel neural network framework, named BNFN-framework, that can tackle small samples learning and allow for quick construction. The BNFN-framework consists of a backpropagation neural network and a fast forward propagation neural network. Generative adversarial networks (GANs) are introduced to optimize these neural networks. We conduct experiments using off-the-shelf 802.11ad devices, Talon AD7200s and MG360, to evaluate the performance of the proposed PL-SAD scheme. Experimental results confirm the effectiveness of the proposed PL-SAD scheme under different scenarios.
MU-ID: Multi-user Identification Through Gaits Using Millimeter Wave Radios
Xin Yang (Rutgers University, USA); Jian Liu (The University of Tennessee, Knoxville, USA); Yingying Chen (Rutgers University, USA); Xiaonan Guo and Yucheng Xie (Indiana University-Purdue University Indianapolis, USA)
Multi-user identification could facilitate various large-scale identity-based services such as access control, automatic surveillance system, and personalized services, etc. Although existing solutions can identify multiple users using cameras, such vision-based approaches usually raise serious privacy concerns and require the presence of line-of-sight. Differently, in this paper, we propose MU-ID, a gait-based multi-user identification system leveraging a single commercial off-the-shelf (COTS) millimeter-wave (mmWave) radar. Particularly, MU-ID takes as input frequency-modulated continuous-wave (FMCW) signals from the radar sensor. Through analyzing the mmWave signals in the range-Doppler domain, MU-ID examines the users' lower limb movements and captures their distinct gait patterns varying in terms of step length, duration, instantaneous lower limb velocity, and inter-lower limb distance, etc. Additionally, an effective spatial-temporal silhouette analysis is proposed to segment each user's walking steps. Then, the system identifies steps using a Convolutional Neural Network (CNN) classifier and further identifies the users in the area of interest. We implement MU-ID with the TI AWR1642BOOST mmWave sensor and conduct extensive experiments involving 10 people. The results show that MU-ID achieves up to 97% single-person identification accuracy, and over 92% identification accuracy for up to four people, while maintaining a low false positive rate.
SmartBond: A Deep Probabilistic Machinery for Smart Channel Bonding in IEEE 802.11ac
Raja Karmakar and Samiran Chattopadhyay (Jadavpur University, India); Sandip Chakraborty (Indian Institute of Technology Kharagpur, India)
Dynamic bandwidth operation in IEEE 802.11ac helps wireless access points to tune channel widths based on carrier sensing and bandwidth requirements of associated wireless stations. However, wide channels result in a reduction in the carrier sensing range, which leads to the problem of channel sensing asymmetry. As a consequence, access points face hidden channel interference that may lead to as high as 60% reduction in the throughput under certain scenarios of dense deployments of access points. Existing approaches handle this problem by detecting the hidden channels once they occur and affect the channel access performance. In a different direction, in this paper, we develop a method for avoiding hidden channels by meticulously predicting the channel width that can reduce interference as well as can improve the average communication capacity. The core of our approach is a deep probabilistic machinery based on point process modeling over the evolution of channel width selection process. The proposed approach, SmartBond, has been implemented and deployed over a testbed with 8 commercial wireless access points. The experiments show that the proposed model can significantly improve the channel access performance although it is lightweight and does not incur much overhead during the decision making process.
Yuanqing Zheng (The Hong Kong Polytechnic University)
Session 10-G
Edge Computing III
A Fast Hybrid Data Sharing Framework for Hierarchical Mobile Edge Computing
Junjie Xie and Deke Guo (National University of Defense Technology, China); Xiaofeng Shi and Haofan Cai (University of California Santa Cruz, USA); Chen Qian (University of California at Santa Cruz, USA); Honghui Chen (National University of Defense Technology, China)
Edge computing satisfies the stringent latency requirements of data access and processing for applications running on edge devices. The data location service is a key function to provide data storage and retrieval to enable these applications. However, it still lacks research of a scalable and low-latency data location service in mobile edge computing. Meanwhile, the existing solutions, such as DNS and DHT, fail to meet the low latency requirement of mobile edge computing. This paper presents a low-latency hybrid data sharing framework, HDS, in which the data location service is divided into two parts: intra-region and inter-region. In the intra-region part, we design a data sharing protocol called Cuckoo Summary to achieve fast data localization. In the inter-region part, we design a geographic routing based scheme to achieve efficient data localization with only one overlay hop. The advantages of HDS include short response latency, low implementation overhead, and few false positives. We implement our HDS framework based on a P4 prototype. The experimental results show that, compared to the state-of-the-art solutions, our design achieves 50.21% shorter lookup paths and 92.75% fewer false positives.
Data-driven Distributionally Robust Optimization for Edge Intelligence
Zhaofeng Zhang, Sen Lin, Mehmet Dedeoglu, Kemi Ding and Junshan Zhang (Arizona State University, USA)
The past few years have witnessed the explosive growth of Internet of Things (IoT) devices. The necessity of real-time edge intelligence for IoT applications indicates that decision making must take place right here right now at the network edge, thus dictating that a high percentage of the IoT created data should be stored and analyzed locally. However, the computing resources are constrained and the amount of local data is often very limited at edge nodes. To tackle these challenges, we propose a distributionally robust optimization (DRO)-based edge intelligence framework, which is based on an innovative synergy of cloud knowledge transfer and local learning. More specifically, the knowledge transfer from the cloud learning is in the form of a reference distribution and its associated uncertainty set. Further, based on its local data, the edge device constructs an uncertainty set centered around its empirical distribution. The edge learning problem is then cast as a DRO problem subject to the above two distribution uncertainty sets. Building on this framework, we investigate two problem formulations for DRO-based edge intelligence, where the uncertainty sets are constructed using the Kullback-Leibler divergence and the Wasserstein distance, respectively. Numerical results demonstrate the effectiveness of the proposed DRO-based framework.
Delay-Optimal Distributed Edge Computing in Wireless Edge Networks
Xiaowen Gong (Auburn University, USA)
Distributed edge computing (DEC) makes use of distributed devices in the edge network to perform computing in parallel. By integrating distributed computing with edge computing, DEC can reduce service delays compared to each of the two paradigms. In this paper, we explore DEC that exploits edge devices connected by a wireless network to perform distributed computing. In particular, we study the fundamental problem of minimizing the delay of executing a distributed algorithm. We first establish some structural properties of the optimal communication scheduling, which show that it is optimal to be non-preemptive, be non-idle, and schedule forward communications before backward communications. Then, based on these properties, we characterize the optimal computation allocation which can be found by an efficient algorithm. Next, based on the optimal computation allocation, we characterize the optimal scheduling order of communications for some cases, and develop an efficient algorithm with a finite approximation ratio to find it for the general case. Last, based on the optimal computation allocation and communication scheduling, we further show that the optimal selection of devices can be found efficiently for some cases. Our results provide useful insights into the optimal policies. We evaluate the performance of the theoretical findings using simulations.
Fog Integration with Optical Access Networks from an Energy Efficiency Perspective
Ahmed Helmy and Amiya Nayak (University of Ottawa, Canada)
Access networks have been continuously going through many reformations to make them better suited for many demanding applications and adapt to arising traffic trends. On one hand, incorporating fog and edge computing has become a necessity for alleviating network congestions and supporting numerous applications that can no longer rely on the resources of a remote cloud. On the other hand, energy-efficiency has become a strong imperative for telecommunication networks, which aims to reduce both their operational costs and carbon footprint, but often leads to some degradation in the network performance. In this paper, we address these two challenges by examining the integration of fog computing with optical access networks under power-conserving frameworks. As most power-conserving frameworks in the literature are centralized-based, we also propose a novel decentralized-based energy-efficient framework and compare its performance against its centralized counterpart in a long-reach passive optical network (LR-PON). We study the possible cloudlet placements and the offloading performance in each allocation paradigm to see which is able to meet the requirements of next-generation access networks by having better network performance and less energy consumption.
Marie-Jose Montpetit (MJMontpetit.com)
|
CommonCrawl
|
Asymmetric butterfly velocities in 2-local Hamiltonians
Yong-Liang Zhang, Vedika Khemani
The speed of information propagation is finite in quantum systems with local interactions. In many such systems, local operators spread ballistically in time and can be characterized by a ``butterfly velocity", which can be measured via out-of-time-ordered correlation functions. In general, the butterfly velocity can depend asymmetrically on the direction of information propagation. In this work, we construct a family of simple 2-local Hamiltonians for understanding the asymmetric hydrodynamics of operator spreading. Our models live on a one dimensional lattice and exhibit asymmetric butterfly velocities between the left and right spatial directions. This asymmetry is transparently understood in a free (non-interacting) limit of our model Hamiltonians, where the butterfly speed can be understood in terms of quasiparticle velocities.
Transport in one-dimensional integrable quantum systems
Jesko Sirker
These notes are based on a series of three lectures given at the Les Houches summer school on 'Integrability in Atomic and Condensed Matter Physics' in August 2018. They provide an introduction into the unusual transport properties of integrable models in the linear response regime focussing, in particular, on the spin-1/2 XXZ spin chain.
Toward a 3d Ising model with a weakly-coupled string theory dual
Nabil Iqbal, John McGreevy
It has long been expected that the 3d Ising model can be thought of as a string theory, where one interprets the domain walls that separate up spins from down spins as two-dimensional string worldsheets. The usual Ising Hamiltonian measures the area of these domain walls. This theory has string coupling of unit magnitude. We add new local terms to the Ising Hamiltonian that further weight each spin configuration by a factor depending on the genus of the corresponding domain wall, resulting in a new 3d Ising model that has a tunable bare string coupling $g_s$. We use a combination of analytical and numerical methods to analyze the phase structure of this model as $g_s$ is varied. We study statistical properties of the topology of worldsheets and discuss the prospects of using this new deformation at weak string coupling to find a worldsheet description of the 3d Ising transition.
Interacting edge states of fermionic symmetry-protected topological phases in two dimensions
Joseph Sullivan, Meng Cheng
SciPost Phys. 9, 016 (2020) · published 5 August 2020 |
Recently, it has been found that there exist symmetry-protected topological phases of fermions, which have no realizations in non-interacting fermionic systems or bosonic models. We study the edge states of such an intrinsically interacting fermionic SPT phase in two spatial dimensions, protected by $\mathbb{Z}_4\times\mathbb{Z}_2^T$ symmetry. We model the edge Hilbert space by replacing the internal $\mathbb{Z}_4$ symmetry with a spatial translation symmetry, and design an exactly solvable Hamiltonian for the edge model. We show that at low-energy the edge can be described by a two-component Luttinger liquid, with nontrivial symmetry transformations that can only be realized in strongly interacting systems. We further demonstrate the symmetry-protected gaplessness under various perturbations, and the bulk-edge correspondence in the theory.
Search for non-Abelian Majorana braiding statistics in superconductors
C. W. J. Beenakker
SciPost Phys. Lect. Notes 15 (2020) · published 4 August 2020 |
This is a tutorial review of methods to braid the world lines of non-Abelian anyons (Majorana zero-modes) in topological superconductors. That "Holy Grail" of topological quantum information processing has not yet been reached in the laboratory, but there now exists a variety of platforms in which one can search for the Majorana braiding statistics. After an introduction to the basic concepts of braiding we discuss how one might be able to braid immobile Majorana zero-modes, bound to the end points of a nanowire, by performing the exchange in parameter space, rather than in real space. We explain how Coulomb interaction can be used to both control and read out the braiding operation, even though Majorana zero-modes are charge neutral. We ask whether the fusion rule might provide for an easier pathway towards the demonstration of non-Abelian statistics. In the final part we discuss an approach to braiding in real space, rather than parameter space, using vortices injected into a chiral Majorana edge mode as "flying qubits".
Exponentially long lifetime of universal quasi-steady states in topological Floquet pumps
Tobias Gulden, Erez Berg, Mark S. Rudner, Netanel H. Lindner
We investigate a mechanism to transiently stabilize topological phenomena in long-lived quasi-steady states of isolated quantum many-body systems driven at low frequencies. We obtain an analytical bound for the lifetime of the quasi-steady states which is exponentially large in the inverse driving frequency. Within this lifetime, the quasi-steady state is characterized by maximum entropy subject to the constraint of fixed number of particles in the system's Floquet-Bloch bands. In such a state, all the non-universal properties of these bands are washed out, hence only the topological properties persist.
Critical properties of a comb lattice
Natalia Chepiga, Steven R. White
In this paper we study the critical properties of the Heisenberg spin-1/2 model on a comb lattice -- a 1D backbone decorated with finite 1D chains -- the teeth. We address the problem numerically by a comb tensor network that duplicates the geometry of a lattice. We observe a fundamental difference between the states on a comb with even and odd number of sites per tooth, which resembles an even-odd effect in spin-1/2 ladders. The comb with odd teeth is always critical, not only along the teeth, but also along the backbone, which leads to a competition between two critical regimes in orthogonal directions. In addition, we show that in a weak-backbone limit the excitation energy scales as $1/(NL)$, and not as $1/N$ or $1/L$ typical for 1D systems. For even teeth in the weak backbone limit the system corresponds to a collection of decoupled critical chains of length $L$, while in the strong backbone limit, one spin from each tooth forms the backbone, so the effective length of a critical tooth is one site shorter, $L-1$. Surprisingly, these two regimes are connected via a state where a critical chain spans over two nearest neighbor teeth, with an effective length $2L$.
Critical energy landscape of linear soft spheres
Silvio Franz, Antonio Sclocchi, Pierfrancesco Urbani
We show that soft spheres interacting with a linear ramp potential when overcompressed beyond the jamming point fall in an amorphous solid phase which is critical, mechanically marginally stable and share many features with the jamming point itself. In the whole phase, the relevant local minima of the potential energy landscape display an isostatic contact network of perfectly touching spheres whose statistics is controlled by an infinite lengthscale. Excitations around such energy minima are non-linear, system spanning, and characterized by a set of non-trivial critical exponents. We perform numerical simulations to measure their values and show that, while they coincide, within numerical precision, with the critical exponents appearing at jamming, the nature of the corresponding excitations is richer. Therefore, linear soft spheres appear as a novel class of finite dimensional systems that self-organize into new, critical, marginally stable, states.
Relaxation and entropy generation after quenching quantum spin chains
Máté Lencses, Octavio Pomponio, Gabor Takacs
This work considers entropy generation and relaxation in quantum quenches in the Ising and $3$-state Potts spin chains. In the absence of explicit symmetry breaking we find universal ratios involving R\'enyi entropy growth rates and magnetisation relaxation for small quenches. We also demonstrate that the magnetisation relaxation rate provides an observable signature for the "dynamical Gibbs effect" which is a recently discovered characteristic non-monotonous behaviour of entropy growth linked to changes in the quasi-particle spectrum.
Out-of-equilibrium phase transitions induced by Floquet resonances in a periodically quench-driven XY spin chain
Sergio Enrique Tapias Arze, Pieter W. Claeys, Isaac Pérez Castillo, Jean-Sébastien Caux
SciPost Phys. Core 3, 001 (2020) · published 22 July 2020 |
We consider the dynamics of an XY spin chain subjected to an external transverse field which is periodically quenched between two values. By deriving an exact expression of the Floquet Hamiltonian for this out-of-equilibrium protocol with arbitrary driving frequencies, we show how, after an unfolding of the Floquet spectrum, the parameter space of the system is characterized by alternations between local and non-local regions, corresponding respectively to the absence and presence of Floquet resonances. The boundary lines between regions are obtained analytically from avoided crossings in the Floquet quasi-energies and are observable as phase transitions in the synchronized state. The transient behaviour of dynamical averages of local observables similarly undergoes a transition, showing either a rapid convergence towards the synchronized state in the local regime, or a rather slow one exhibiting persistent oscillations in the non-local regime, where explicit decay coefficients are presented.
|
CommonCrawl
|
Hostname: page-component-7ccbd9845f-6pjjk Total loading time: 1.593 Render date: 2023-01-31T21:21:36.022Z Has data issue: true Feature Flags: { "useRatesEcommerce": false } hasContentIssue true
>Journal of Glaciology
>Volume 68 Issue 270
>Water flow through sediments and at the ice-sediment...
Journal of Glaciology
Borehole response tests
List of symbols
Water flow through sediments and at the ice-sediment interface beneath Sermeq Kujalleq (Store Glacier), Greenland
Published online by Cambridge University Press: 08 December 2021
Samuel H. Doyle [Opens in a new window] ,
Bryn Hubbard [Opens in a new window] ,
Poul Christoffersen [Opens in a new window] ,
Robert Law ,
Duncan R. Hewitt ,
Jerome A. Neufeld ,
Charlotte M. Schoonman ,
Thomas R. Chudley [Opens in a new window] and
Marion Bougamont
Samuel H. Doyle*
Centre for Glaciology, Department of Geography and Earth Sciences, Aberystwyth University, Aberystwyth, SY23 3DB, UK
Bryn Hubbard
Poul Christoffersen
Scott Polar Research Institute, Cambridge University, Cambridge, CB2 1ER, UK
Robert Law
Duncan R. Hewitt
Department of Mathematics, University College London, 25 Gordon Street, London, WC1H 0AY
Jerome A. Neufeld
Institute of Theoretical Geophysics, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK BP Institute, University of Cambridge, Madingley Rise, Cambridge CB3 0EZ, UK Department of Earth Sciences, Bullard Laboratories, University of Cambridge, Madingley Rise, Cambridge CB3 0EZ, UK
Charlotte M. Schoonman
Thomas R. Chudley
Author for correspondence: Samuel Doyle, E-mail: [email protected]
Save PDF (2 mb) View PDF[Opens in a new window]
Subglacial hydrology modulates basal motion but remains poorly constrained, particularly for soft-bedded Greenlandic outlet glaciers. Here, we report detailed measurements of the response of subglacial water pressure to the connection and drainage of adjacent water-filled boreholes drilled through kilometre-thick ice on Sermeq Kujalleq (Store Glacier). These measurements provide evidence for gap opening at the ice-sediment interface, Darcian flow through the sediment layer, and the forcing of water pressure in hydraulically-isolated cavities by stress transfer. We observed a small pressure drop followed by a large pressure rise in response to the connection of an adjacent borehole, consistent with the propagation of a flexural wave within the ice and underlying deformable sediment. We interpret the delayed pressure rise as evidence of no pre-existing conduit and the progressive decrease in hydraulic transmissivity as the closure of a narrow (< 1.5 mm) gap opened at the ice-sediment interface, and a reversion to Darcian flow through the sediment layer with a hydraulic conductivity of ≤ 10−6 m s−1. We suggest that gap opening at the ice-sediment interface deserves further attention as it will occur naturally in response to the rapid pressurisation of water at the bed.
Journal of Glaciology , Volume 68 , Issue 270 , August 2022 , pp. 665 - 684
DOI: https://doi.org/10.1017/jog.2021.121[Opens in a new window]
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright © The Author(s), 2021. Published by Cambridge University Press
A list of symbols is presented in Appendix A.
The nature of subglacial hydrology and basal motion on ice masses underlain by soft sediments are central questions in ice dynamics (e.g., Clarke, Reference Clarke1987; Murray, Reference Murray1997; Tulaczyk and others, Reference Tulaczyk, Kamb and Engelhardt2000). However, despite abundant evidence for subglacial sediments beneath fast-moving outlet glaciers and ice streams draining the Greenland and Antarctic ice sheets (e.g., Alley and others, Reference Alley, Blankenship, Bentley and Rooney1986; Blankenship and others, Reference Blankenship, Bentley, Rooney and Alley1986; Christianson and others, Reference Christianson2014) and mountain glaciers (e.g., Humphrey and others, Reference Humphrey, Kamb, Fahnestock and Engelhardt1993; Iverson and others, Reference Iverson, Hanson, Hooke and Jansson1995), soft-bedded processes remain poorly constrained (Walter and others, Reference Walter, Chaput and Luthi2014; Alley and others, Reference Alley, Cuffey and Zoet2019). Water flow in a soft-bedded subglacial environment has been hypothesised to occur via: Darcian flow through permeable sediments (Clarke, Reference Clarke1987); sheet flow at the ice-sediment interface (e.g., Weertman, Reference Weertman1970; Alley and others, Reference Alley, Blankenship, Rooney and Bentley1986; Flowers and Clarke, Reference Flowers and Clarke2002; Creyts and Schoof, Reference Creyts and Schoof2009); and concentrated flow in channels cut into the ice, and canals eroded into the sediment (Walder and Fowler, Reference Walder and Fowler1994; Ng, Reference Ng2000). Drainage through gaps opened and closed dynamically at the ice-sediment interface by turbulent water flow at high pressure has also been proposed as an explanation for the rapid drainage of boreholes (Engelhardt and Kamb, Reference Engelhardt and Kamb1997; Kamb, Reference Kamb2001) and both supra- and pro-glacial lakes (Sugiyama and others, Reference Sugiyama, Bauder, Huss, Riesen and Funk2008; Tsai and Rice, Reference Tsai and Rice2010, Reference Tsai and Rice2012; Hewitt and others, Reference Hewitt, Chini and Neufeld2018). Direct evidence for gap-opening at the ice-sediment interface is limited to three observational studies (Engelhardt and Kamb, Reference Engelhardt and Kamb1997; Lüthi, Reference Lüthi1999; Iverson and others, Reference Iverson2007). However, despite support from detailed analytical modelling (Schoof and others, Reference Schoof, Hewitt and Werder2012; Rada and Schoof, Reference Rada and Schoof2018) dynamic gap opening has yet to be fully developed for larger-scale numerical models of subglacial hydrology.
The water-saturated sediment layer beneath a soft-bedded ice mass can be approximated as an aquifer confined by an overlying ice aquiclude (e.g., Lingle and Brown, Reference Lingle and Brown1987; Stone and Clarke, Reference Stone and Clarke1993). And, with careful adaptation, standard hydrogeological techniques can be used to estimate subglacial aquifer properties such as transmissivity, conductivity, diffusivity and storativity. These include slug tests, where the borehole water level is perturbed by the insertion and sudden removal of a sealed pipe of known volume (Hodge, Reference Hodge1979; Stone and Clarke, Reference Stone and Clarke1993; Iken and others, Reference Iken, Fabri and Funk1996; Kulessa and Hubbard, Reference Kulessa and Hubbard1997; Stone and others, Reference Stone, Clarke and Ellis1997; Kulessa and Murray, Reference Kulessa and Murray2003; Kulessa and others, Reference Kulessa, Hubbard, Williamson and Brown2005), packer tests where the borehole is sealed near the surface and subsequently rapidly pressurised with air (Stone and Clarke, Reference Stone and Clarke1993; Stone and others, Reference Stone, Clarke and Ellis1997), and pumping tests where the borehole hydraulic head is monitored in response to water injection or extraction (e.g., Engelhardt, Reference Engelhardt1978; Engelhardt and Kamb, Reference Engelhardt and Kamb1997; Iken and Bindschadler, Reference Iken and Bindschadler1986; Lüthi, Reference Lüthi1999). Borehole drainage on connection with the bed (hereafter 'breakthrough'), and the recovery to equilibrium water levels have also been used to determine subglacial aquifer properties (e.g., Stone and Clarke, Reference Stone and Clarke1993; Engelhardt and Kamb, Reference Engelhardt and Kamb1997; Stone and others, Reference Stone, Clarke and Ellis1997; Lüthi, Reference Lüthi1999). During breakthrough events the water level in the initially water-full borehole either: (i) drops rapidly to a new equilibrium level some tens of metres below the surface, (ii) does not drop at all, or (iii) drops slowly, or rapidly, to a new equilibrium level after a delay of minutes to days, with the variability in response usually explained in terms of the connectivity of the subglacial drainage system (e.g., Smart, Reference Smart1996; Gordon and others, Reference Gordon2001). The hydraulic conductivity of a subglacial sediment layer has also been derived from the propagation and attenuation of diurnal subglacial water pressure waves (e.g., Hubbard and others, Reference Hubbard, Sharp, Willis, Nielsen and Smart1995), and from numerical modelling of the pressure peaks induced when pressure sensors freeze in (Waddington and Clarke, Reference Waddington and Clarke1995). To date, the application of borehole response tests to marine-terminating glaciers in Greenland is limited to a single study (Lüthi, Reference Lüthi1999), presumably due to the challenges of adapting groundwater techniques to the ice-sheet setting.
The application of hydrogeological techniques requires a number of simplifying assumptions. Many techniques are fundamentally based on Darcian flow and inherently assume that the aquifer is isotropic and homogeneous; conditions that may rarely be met in the subglacial environment. Water flow in groundwater investigations is typically slow and assumed to be Darcian. While this may hold for low-velocity water flow through subglacial sediments, the discharge rates during borehole breakthrough events mean turbulent flow is likely in the vicinity of the borehole base (e.g., Stone and Clarke, Reference Stone and Clarke1993). Further complications arise due to the greater density of water than ice, overpressurising the ice at the base of water-filled glacier boreholes with the potential to raise the ice from its substrate permitting water to flow through the gap created. (Overpressure here being water pressure in excess of the ice overburden pressure). Previous studies have attempted to determine the widths of such gaps (Weertman, Reference Weertman1970; Engelhardt and Kamb, Reference Engelhardt and Kamb1997; Lüthi, Reference Lüthi1999; Iverson and others, Reference Iverson2007).
Ice boreholes provide direct access to the subglacial environment allowing sensor installation and borehole response tests. Here, we analyse borehole response tests conducted on Sermeq Kujalleq (Store Glacier) in West Greenland during summer 2019. The response tests included breakthrough events, which occurred consistently when boreholes intersected the ice-sediment interface, constant-rate pumping tests undertaken as water was pumped into the borehole as the drill stem was raised to the surface, and recovery tests following removal of the stem. The results provide insights into subglacial hydrological conditions and permit estimation of the hydraulic transmissivity and conductivity of the subglacial drainage system.
2.1 Field site
Sermeq Kujalleq (Store Glacier) is a major fast-moving outlet glacier of the Greenland Ice Sheet draining an ~ 34,000 km 2 catchment area (Rignot and others, Reference Rignot, Box, Burgess and Hanna2008) into Ikerasak Fjord – a tributary of Uummannaq Fjord. (Note that as several glaciers share the same name – and for continuity with previous literature – we give the English glacier name in brackets after the official Greenlandic name.) In summer 2019, we used pressurised hot water to drill seven boreholes on Sermeq Kujalleq (Store Glacier) at site R30 (N70° 34.0', W050° 5.2') located in the centre of the drained bed of supraglacial lake L028 (Fig. 1a; Table S1). R30 lies 30 km from the calving front at 863 m a.s.l. and is within the ablation area; there was no winter snow or firn present during the drilling campaign. Ice flow measured by a Global Navigation Satellite System (GNSS) receiver averaged 521 m a−1 in the SSW direction (217° True) between 9 July and 16 September 2019. The surface slope was calculated as 1.0° from linear regression of the ArcticDEM digital elevation model (Porter and others, Reference Porter2018) over a distance of ten ice thicknesses (10 km). Lake L028 drained via hydraulic fracture on 31 May 2019 (Chudley and others, Reference Chudley2019b) forming two major moulins (each of diameter ~6 m) located within 200 m of the drill site (Fig. 1b). Borehole-based Distributed Acoustic Sensing (DAS) in BH19c provides evidence for up to 37 m of consolidated subglacial sediment at R30 (Booth and others, Reference Booth2020), while seismic reflection surveys at site S30 (8 km to the south-east of R30; Fig. 1a) revealed up to 45 m of unconsolidated sediment overlying consolidated sediment (Hofstede and others, Reference Hofstede2018). Borehole-based investigations of englacial and basal conditions at S30 reported low effective pressures (180 − 280 kPa), an absent or thin (< 10 m) basal temperate ice layer, and internal deformation concentrated within the lowermost 100 m of ice, below the transition between interglacial (Holocene) and last-glacial (Wisconsin) ice (LGIT; Doyle and others, Reference Doyle2018; Young and others, Reference Young2019). At R30, Distributed Temperature Sensing (DTS) reveals a 70 m-thick basal temperate ice layer, the LGIT at 889 m depth and a steeply curving temperature profile with a minimum ice temperature of − 21.1°C near the centre of the ice column (Law and others, Reference Law2021).
Fig. 1. Maps of the field site. (a) Location of the study site R30 on Sermeq Kujalleq (Store Glacier) with the location of the R29 and S30 drill sites also marked. The background is a Sentinel-2 image acquired on 1 June 2019 and the red square on the inset map shows the location in Greenland. (b) Close up of the R30 study site showing the location of boreholes, moulins and the GNSS receiver. Three boreholes intersected the ice-sediment interface (filled, colour-coded circles) and four terminated above the base (hollow circles). The background orthophoto was acquired by an uncrewed aerial vehicle survey following Chudley and others (Reference Chudley, Christoffersen, Doyle, Abellan and Snooke2019) on 21 July 2019.
2.2 Hot water drilling
Boreholes were drilled using a hot water drill system similar to that described in Makinson and Anker (Reference Makinson and Anker2014). Pressurised, hot water (11.0 MPa; ~ 80°C) was provided by five pressure-heater units (Kärcher HDS1000DE) at a regulated flow rate of 75 l min−1, through a 1,350 m long, 19.3 mm (0.75') bore hose. A load cell and rotary encoder recorded the load on the drill tower and the hose length below the surface at 0.5 Hz with a resolution of 1 kg and 0.1 m, respectively (Figs S1–S3). Borehole logging to a depth of 325 m indicates that the hot water drilling system consistently drills boreholes that are within 1° of vertical (Hubbard and others, Reference Hubbard2021).
Boreholes (BH) were named by year and by letter in chronological order of drilling, with BH19a the first borehole drilled in 2019 (Table S1). Boreholes were drilled in two clusters with the first (BH19a, b, c and d) separated from the second (BH19e, f and g) by 70 m (Fig. 1b). Seven boreholes were drilled in 2019 with three reaching the ice-sediment interface at depths of 1,043 m (BH19c), 1,022 m (BH19e) and 1,039 m (BH19g), giving a mean ice thickness of 1,035 m and a mean elevation of the glacier sole of − 172 m a.s.l. (Table 1). Four boreholes were terminated above the ice-sediment interface (see Table S1). Prior to breakthrough boreholes were water-filled to the bare ice surface, with excess water supplied by the pressure-heater units overflowing from the top of the borehole.
Table 1. Key data for the boreholes that reached the bed. Variables h 0, p w, and N were calculated for the reference period 36–60 h after each respective breakthrough, which was deemed representative of subglacial water pressure. A list of symbols is presented in Appendix A.
a Drill-indicated depths do not account for the elastic extension of the hose under load.
b Recorded in BH19e due to freeze-in of pressure transducer in BH19g.
To reduce overall drilling duration and produce a more uniform borehole radius (0.06 m four hours after termination of drilling), we optimised drilling speed using the numerical borehole model of Greenler and others (Reference Greenler2014). The borehole model was constrained by ice temperature from site R29, 1.1 km distant (Hubbard and others, Reference Hubbard2021, Fig. 1a), and a hose thermal conductivity of 0.24 W m−1 K−1. Borehole radius at the time of breakthrough was then estimated by re-running the model with the recorded drill speeds and the equilibrated ice temperature profile measured in BH19c at site R30 (Law and others, Reference Law2021). The mean borehole radius for BH19c, BH19e and BH19g output by the model at the time of borehole breakthrough was 0.07 m, with larger radii (mean of 0.10 m) in the lowermost 100 m of the ice column (Table A1) due to intentionally slower drilling as the drill approached the ice-sediment interface, together with the presence of temperate ice that was unaccounted for during initial model runs. The borehole model underestimated the near-surface (i.e. 0 − 100 m) borehole radius (r s), possibly due to turbulent heat exchange that is not included in the model, so we use the radius at the water line calculated for BH19g (0.14 m) as r s for all the borehole response tests (see Appendix B).
Analysis of the temperature time series recorded by DTS in BH19c (Law and others, Reference Law2021) shows that the boreholes rapidly froze shut. At 580 m depth, where the undisturbed ice temperature was − 21.1°C, the temperature fell below the pressure-dependent melting temperature 3 h after drilling. Within warmer ice, refreezing was slower: at 920 m depth in BH19c, the ice temperature was − 3°C and refreezing was complete after 5 d.
2.3 Pressure measurements
Basal water pressures were recorded by vibrating wire piezometers (Geokon 4500SH) installed at the base of BH19c and BH19e and a current loop transducer (Omega Engineering Ltd. PXM319) installed at the base of BH19g. Pressure records from the Geokon 4500SH were zeroed with atmospheric pressure at the surface, temperature compensated using a high-accuracy thermistor in contact with the piezometer body, and calibrated using the manufacturer's second-order polynomial to an accuracy of ± 3 kPa, equivalent to ± 0.3 m of hydraulic head. The pressure record from the PXM319 current loop transducer (accuracy = ± 35 kPa, equivalent to ± 3.6 m of head) was calibrated using the manufacturer's linear calibration and zeroed with atmospheric pressure at the surface. A pressure spike indicates that the ice surrounding the transducer installed in BH19g froze at 13.7 h post-breakthrough.
All pressure sensors were lowered until contact with the ice-bed interface was confirmed by the pressure ceasing to increase. The sensor was then raised slightly (piezometer offset: 0.05 − 0.4 m; Table 1) to prevent the piezometer from being dragged through the substrate. The borehole water level below the surface (that is the length of the uppermost air-filled section of the borehole) at installation was measured with a well depth meter, and by reference to distance markers on the piezometer cable. The final installation depth was determined by adding this water level to the depth recorded by the piezometer. The ice thickness (H i) was calculated by adding the piezometer offset to the final installation depth. Borehole positions were surveyed on 22 July 2019 using a Trimble R9s GNSS receiver with 8 min long observations post-processed using the precise point positioning service provided by Natural Resources Canada (CSRS-PPP). Borehole surface elevation was converted to orthometric EGM96 geoid heights. To allow inter-comparison of pressure records from sensors installed at different depths below the surface, water pressure was expressed as hydraulic head h, which represents the theoretical orthometric height of the borehole water level,
(1)$$h = {\,p_{\rm w}\over \rho_{\rm w} g} + z,\; $$
where ρ w = 999.8 kg m−3 is water density at 0°C, g = 9.81 m s−2 is gravitational acceleration and z is the orthometric height of the piezometer determined by subtracting the piezometer depth below the surface from the orthometric height of the borehole at the surface. Pressure was also expressed as the effective pressure N = p i − p w and the overpressure (p w − p i), the latter in respect of the excess pressure exerted at the base of water-filled boreholes due to the greater density of water than ice (Table 1). The ice-overburden pressure p i was approximated for an inclined, parallel-sided slab of ice as
(2)$$p_{\rm i} = \rho_{\rm i} g H_{\rm i} \cos{\alpha},\; $$
where ρ i is the density of ice, H i is the height of the overlying ice column, α = 1.0° is the mean surface and bed slope (see Section 2.1), and ice density was taken as ρ i = 910 ± 10 kg m−3.
2.4 Temperature measurements
Temperature was measured using high-accuracy (± 0.05°C) thermistors (Littelfuse: PR502J2) at ~0, 1, 3, 5 and 10 m above the bed in BH19c and BH19e and also throughout the full ice column in BH19c using fibre-optic DTS (Law and others, Reference Law2021). Here we present temperature measurements recorded by the lowermost thermistor in BH19c, which was mounted with the Geokon 4500SH piezometer. We calculated the pressure-dependent melting temperature
(3)$$T_{\rm m} = T_{{\rm tr}} - \gamma\; ( p_{\rm i} - p_{{\rm tr}}) ,\; $$
where γ = 9.14 × 10−8 K Pa−1 is the Clausius–Clapeyron gradient determined from the basal temperature gradient (Law and others, Reference Law2021), and T tr = 273.16 K and p tr = 611.73 Pa are the triple point temperature and pressure of water, respectively.
2.5 GNSS measurements of ice motion
Time series of horizontal and vertical ice motion were determined from dual frequency (L1 + L2) GNSS data recorded by a Trimble R7 receiver at 0.1 Hz and post-processed kinematically using Precise Point Positioning with Ambiguity Resolution (CSRS PPP-AR). The GNSS antenna was mounted on a 5 m long pole drilled 4 m into the ice surface at a location between the two clusters of boreholes (Fig. 1b). Rapid re-freezing of the hole ensured effective coupling of the antenna pole with the ice. Small gaps (< 5 min) in the position record were interpolated linearly before a 6 h low pass Butterworth filter was applied. The filtered position record was differentiated to calculate velocity. The time series was then resampled to 10 min medians and a further 6 h moving average was applied to the velocity record. To prevent a shift in phase, phase-preserving filters and differentiation were used.
3. Borehole response tests
We analysed the response of borehole water pressure to the perturbations induced at breakthrough, during the continued pumping of water into the borehole while the drill stem and hose were raised to the surface, and also during the recovery phase after which borehole water pressure was in equilibrium with the pressure in the subglacial drainage system. These tests were conducted at different times since breakthrough, allowing us to investigate whether hydraulic transmissivity changed as water pressure returned to equilibrium. Rapid borehole refreezing precluded slug testing. Below we describe the borehole response test results alongside the methods.
For the majority of tests the monitoring borehole was the same as the injection borehole and these are referred to simply by the borehole name. To distinguish response tests where the injection and monitoring boreholes were different, we give the injection borehole in full followed by the monitoring borehole's letter code in brackets. A conceptual illustration of our borehole response tests is presented in Figure 2.
Fig. 2. Conceptual diagram and nomenclature for borehole drainage via radial Darcian flow through a subglacial sediment aquifer confined by an overlying ice aquiclude. Note that monitoring boreholes are likely to have refrozen at the time of the tests and h is therefore the equivalent hydraulic head for the subglacial water pressure recorded.
All data loggers, including that of the drill, were synchronised precisely with Global Positioning System Time (GPST) immediately prior to drilling. Water pressure data were logged by separate Campbell Scientific CR1000X data loggers for each cluster of boreholes. The sampling frequency was increased to 0.2 Hz prior to borehole breakthrough, necessitating temporary suspension of thermistor measurements. Hence, no measurements of basal water temperature were made when drilling was taking place.
As it is difficult to measure the background hydraulic head without disturbing the subglacial environment it is necessary to define a reference head (h 0). The head in BH19e averaged from 36 − 60 h after BH19g breakthrough had recovered to within 0.1 m of the mean head over the 24 h period preceding BH19g breakthrough (Fig. 3b). On this basis, we define h 0 as the mean head from 36 − 60 h post-breakthrough for all tests. No corrections for background trends in hydraulic head were made but such trends are small relative to the perturbations induced (Fig. 3a).
Fig. 3. (a) Time series of hydraulic head (h). Borehole breakthrough times are marked with a vertical dashed line and arrow. (b) Time series of head above the reference head (s = h − h 0) plotted against time since respective breakthrough for all breakthrough tests. The yellow shade marks the 24 h period selected to define h 0 (36 − 60 h post-breakthrough).
3.1 Breakthrough tests
3.1.1 Observations
All three boreholes drilled to the bed in 2019 drained rapidly upon intersecting the basal interface. During breakthrough, water levels dropped to an initial level measured during pressure transducer installation of 78, 73 and 80 m below the surface in BH19c, BH19e and BH19g (Table 1). The frictional drag of water flowing past the hose during breakthrough events caused transient ~2 kN magnitude peak forces, as recorded on the drill tower (Figs 4, S1–S3). Following the peak, force on the drill tower became constant at ~200 s post-breakthrough but at a higher level than recorded prior to breakthrough. The offset in the pre- and post-breakthrough force on the drill tower represents the difference between the weight of the hose in a water-filled and part-filled borehole.
Fig. 4. (a) Force on the drill tower with best fit plotted against time since BH19g breakthrough, together with measured and modelled hydraulic head. (b) Volumetric flux into the subglacial drainage system (Q o) with error bars, and hydraulic head in BH19g determined by inverting the force on the drill tower. Labels (a–c) are described in Section 4.1.
As the drill stem was raised to the surface over ~2 h water continued to be pumped into the borehole, supplying an additional ~10 m3 of water (Table 1). The volume of water drained during the breakthrough events was determined from the initial water level and annular cross-sectional area of the borehole of near surface radius (r s) containing the hose of external radius (r d), yielding a mean volume for the three breakthrough events of 4.70 m3 (Table 1). Taking the duration of rapid drainage as the duration of the peak in force of ~200 s gives a mean discharge for the three breakthrough events of 2.3 × 10−2 m3 s−1 supplied from the borehole, with an additional flux supplied by the pumps Q i = 75 l min−1 (1.25 × 10−3 m3 s−1) bringing the total discharge to Q o = 2.5 × 10−2 m3 s−1, and the total volume over the ~200 s duration to 4.95 m3. The Reynolds number for outflow from the base of the borehole can be approximated as flow through a uniform cylindrical pipe, with a radius equal to that at the borehole base, the mean of which was r 0 = 0.10 m for the three boreholes (Table A1),
(4)$$Re = {U_{\rm w} 2 r_0 \rho_{\rm w}\over \eta_{\rm w}} = {2 Q_{\rm o} \rho_{\rm w}\over \pi \eta_{\rm w} r_0},\; $$
where η w = 0.0018 Pa s is the water viscosity at 0°C. Water flow through the boreholes near the base was turbulent with a high Re ≈ 87,500 greatly exceeding the threshold for laminar flow of 2,000 (de Marsily, Reference de Marsily1986).
3.1.2 Determining the BH19g breakthrough flux
To avoid sensor cables becoming tangled around the drill hose, pressure transducers were installed after the drill stem and hose had been recovered to the surface. Hence, no measurements of pressure were made within boreholes being drilled including during breakthrough. As the pressure response to BH19g breakthrough was captured by transducers already installed in BH19c and BH19e (Fig. 4) we now focus on the BH19g breakthrough.
We determined the time varying flux of water into the subglacial drainage system during the breakthrough of BH19g by inverting the recorded force on the drill tower from the hose, which is a combination of its weight, both in air and in water, and the frictional drag on the hose when the water drains through the borehole,
(5)$$\eqalign{F( t) & = \pi r_{\rm d}^2 \overline{\rho_{\rm d}} g ( H_{{\rm w}0} - H_{\rm w}) + \pi r_{\rm d}^2\Delta\overline{\rho} g H_{\rm w} \cr & \quad + {\pi r_{\rm d}\over 4} f_{\rm D} \rho_{\rm w} U_{\rm w}^2 H_{\rm w} + F_{{\rm s}},\; }$$
where r d is the radius of the drill, $\overline {\rho _{\rm d}}$ is the mean density of the drill (including the water core), $\Delta \overline {\rho } = \overline {\rho _{\rm d}}-\rho _{\rm w}$ , f D is the coefficient of frictional drag exerted on the outside of the hose by the down-rushing water in the borehole, H w(t) is the height of water in the borehole, F s is the force exerted by the weight of the drill stem in water, and the bulk velocity of water in the borehole during the drainage event is U w(t) = dH w/dt.
The force on the drill hose is initially set by the water height, which for a borehole full to the surface is equal to the ice thickness, therefore H w(t = 0) = H w0 = H i = 1039 m (Table 1). Since the initial force just before breakthrough F 0 = 893 N the density difference between the hose and water is
(6)$$\Delta\overline{\rho} = {F_0 - F_{{\rm s}}\over \pi r_{\rm d}^2 g H_{{\rm w}0}} = 96\, \hbox{kg m}^{-3}.$$
Taking ρ w = 999.8 kg m−3 gives a mean density of the hose filled with water $\overline {\rho _{\rm d}} = 1096$ kg m−3. Note that the composite density of the hose is
(7)$$\overline{\rho_{\rm d}} = \rho_{\rm d} - ( \rho_{\rm d} - \rho_{\rm w}) ( r_{\rm d}/\underline{r_{\rm d}}) ^2,\; $$
where ρ d is the density of the hose material, and rd = 9.7 mm is the internal bore radius of the hose. Using the calculated value of $\overline {\rho _{\rm d}} = 1096$ kg m−3 gives an estimate of the hose material density of ρ d = 1166 kg m−3, which is slightly larger than the nominal manufacturer's specification of 1149 kg m−3. This apparent extra density corresponds to an extra force measured on the drill tower prior to breakthrough of 65 N, which we interpret as a drag of 0.0625 N per metre of hose from the pumped water flowing down the centre of the hose.
Neglecting minor residual oscillations, the force F ∞ = F(t → ∞) on the drill tower after the initial rapid breakthrough was again approximately constant and is given by
(8)$$F_\infty = 1470\pm10\, {\rm N} = \pi r_{\rm d}^2 g \left[\overline{\rho_{\rm d}}( H_{{\rm w}0} - H_{{\rm w}\infty}) + \Delta\overline{\rho} H_\infty\right].$$
From this we can infer that the final height of the water level $H_{{\rm w}\infty } = 954\pm 1\, \hbox {m}$ . That is, during BH19g breakthrough the water in BH19g transiently drops H w0 − H w∞ ≈ 85 m below the surface.
Following BH19g breakthrough a portion of the water in the borehole is rapidly evacuated into the subglacial environment. We know that the water level in the borehole decreases monotonically from an initial height H 0 to a final height H ∞ and so fit the transient response with a modified exponential solution of the form
(9)$$H_{\rm w} = H_{{\rm w}\infty} + ( H_{{\rm w}0} -H_{{\rm w}\infty}) e^{-y( t) },\; $$
(10)$$y( t) = c_1 t^2 + c_1 t^3 + c_1 t^4.$$
A fourth order polynomial was found to be the lowest order of polynomial to accurately represent the data. The flux of water from the borehole into the subglacial environment (Q o) can then be given by
(11)$$Q_{{\rm o}}( t) = \pi r_{\rm d}^2 U_{{\rm w}}( t) + Q_{\rm i} = \pi r_{\rm d}^2{{\rm d}H_{\rm w}\over {\rm d}t} + Q_{\rm i},\; $$
where Q i = 1.25 × 10−3 m3 s−1 is the input flux from the drill. The three constants in the polynomial y(t), c i where i = 1, …, 3, along with the drag coefficient f D were estimated using non-linear regression (MATLAB: fitnlm). The resulting constants, with error estimation, are given in Table S2. From this fit (R 2 = 0.996) of the force on the drill hose the height of water, and therefore hydraulic head, in BH19g can be calculated, together with the flux into the subglacial hydrological network (Fig. 4b). This reveals that the discharge peaked at 4.5 ± 0.1 × 10−2 m3 s−1 at 38 s after breakthrough.
3.1.3 Modelling the pressure response to BH19g breakthrough
Distinct pressure perturbations, here expressed as hydraulic head, occurred in BH19c and BH19e following the breakthrough of BH19g (Fig. 4a). In BH19e, located 4.1 m from BH19g, head instantaneously decreased by 0.93 m over a 20 ± 5 s period before rising rapidly and peaking at 14.0 m above its pre-breakthrough level 130 ± 5 s post-breakthrough. Synchronously with the drop in head observed in BH19e, a 0.11 m drop in head began in BH19c.
To analyse these pressure perturbations further we modelled the propagation of water at the contact between elastic ice and poroelastic sediment during BH19g breakthrough following Hewitt and others (Reference Hewitt, Chini and Neufeld2018). The Maxwell time for the basal temperate ice at site R30 is 10–25 min, and it is therefore reasonable to assume an elastic ice rheology for the short duration (< 4 min) of breakthrough events (Appendix C). This model accounts for pressure diffusion, flexure of the ice and deformation of the sediment, and was originally developed to describe the subglacial response to a rapidly draining supraglacial lake. The original model, which is based on Darcy's law, allowed for the formation of a subglacial cavity as well as seepage through the sediment or established subglacial networks. However, for simplicity, here we do not include cavity formation and instead assume a single effective hydraulic transmissivity for subglacial water transport; and that the fluid is incompressible. The model allows the poroelastic sediment layer to deform in response to fluid flow and pressure gradients, which allows the overlying ice to flex and bend slightly as reflected in the small (0.93 m) transient head decrease preceding the large (14.0 m) head increase recorded in BH19e following BH19g breakthrough (Fig. 4a). With these features included, the model shows how an injected fluid diffuses through the subglacial environment and how this drives a propagating flexural wave in the overlying ice.
The linearised form of the model reduces to an evolution equation for the subglacial water pressure, which for consistency is here expressed as hydraulic head h
(12)$$\rho g {\partial h\over \partial t} = A_1 \nabla^2 h + A_2 \nabla^6 h.$$
Here A 1 = T M/b and A 2 = T B, in terms of transmissivity T, till stiffness (p-wave modulus) M, bending modulus of the ice B, and sediment thickness b. Here b is a fitting parameter, unconstrained by measurements of the actual sediment thickness, that represents the thickness of sediment affected by pressure diffusion. Assuming radial flow,
(13)$$\nabla^2 = {1\over r}{\partial\over \partial r} r {\partial\over \partial r},\; $$
the associated flux of water q at radius r is
(14)$$q( r) = -2 \pi r T {\partial h\over \partial r},\; $$
and q(r) = Q o(t) is the injection flux into the subglacial environment.
This problem can be solved numerically for any injection flux Q o(t). By entering the time-varying injection flux for BH19g breakthrough (Section 3.1.2) into Eqn (14), we predicted the response of hydraulic head at BH19e (4.1 m from the injection point of BH19g). An automated non-linear optimisation procedure (MATLAB: fitnlm) was used to determine the best-fit model parameters, yielding B = 2.75 × 109 Pa m3, M/b = 1 × 104 Pa m−1, and T = 1.46 × 10−4 m2 s−1. The prediction initially follows the data closely and it captures the initial decrease in BH19e hydraulic head as the flexural wave passes through (Fig. 4a). However, the model does not capture the subsequent development of the pressure recorded in BH19e; instead it predicts that the pressure drops off too rapidly after the first two minutes. We discuss this discrepancy further in Section 4.1.
3.2 Pumping tests
Following each breakthrough event, the hose was raised back to the surface over ~2 h (Table 1; Figs S1–S3), with the continued supply of water into the borehole functioning as a pumping test. We captured the pressure response at the base of BH19e to such a pumping test following the breakthrough of BH19g (Fig. 5). Although water was pumped down the hose while it was raised to the surface for all boreholes that reached the bed, no other pumping tests were useful as they occurred prior to the installation of pressure sensors. During the BH19g(e) pumping test the water pressure was measured in BH19e, 4.1 m distant (Fig. 5).
Fig. 5. Time series of BH19e hydraulic head (red line) capturing the response to BH19g breakthrough and the injection of water as the hose was raised to the surface. Post-breakthrough the drill stem was kept stationary at the bed for 4 min 39 s (yellow shading). Linear fits during the three pumping test periods are shown with black lines. The light blue shade marks the period during which a piezometer was lowered into BH19g, and the dark blue shade marks the time the piezometer was temporarily snagged (see Section 4.1 for details). Labels (a–e) are also described in Section 4.1.
Starting 28 min after the breakthrough of BH19g the head in BH19e increased at a steady rate of 1.24 m h−1 (Fig. 5). This period of steady increase was interrupted by the temporary shutdown of the water supply when pressure-heater units were refuelled, with the linear increase in head resuming at the slightly higher rate of 1.36 m h−1. The rate of change of hydraulic head increased again to 7.40 m h−1 when the drill stem and hose rose above the borehole water level, indicating that, while the stem was below the water line, part of the water pumped into the borehole was replacing the reducing volume displaced by the hose as it was raised to the surface. We refer to these three periods of linearly increasing head as PT1, PT2 and PT3, respectively.
Discharge from the base of BH19g (Q o) was calculated by correcting the input flux Q i (1.25 × 10−3 m3 s−1) for storage within BH19g (Q s), and for the flux offsetting the decreasing water displacement caused by the hose as it was raised to the surface (Q d)
(15)$$Q_{\rm o} = Q_{\rm i} - Q_{\rm d} - Q_{\rm s}.$$
The pumping test was undertaken 9 d after the breakthrough of BH19e. Hence, we assume that storage within BH19e was negligible due to rapid borehole refreezing within cold ice that was present above a 70 m thick basal temperate layer (Law and others, 2021). We also consider storage within temperate ice to be negligible within the time span of our experiments due to its low permeability (e.g., 10−12 − 10−8 m2; Haseloff and others, Reference Haseloff, Hewitt and Katz2019). Q d was calculated as
(16)$$Q_{\rm d} = \pi r_{\rm d}^2 \overline{U}_{\rm d},\; $$
where r d = 0.015 m is the hose radius and $\overline {U}_{\rm d}$ is the mean drill speed. For PT3, Q d = 0 as the drill stem and hose were above the borehole water level. Q s is the flux lost to storage in the injection borehole calculated from the rate of change in head dh/dt and the cross-sectional area of the borehole, which for PT1 and PT2 is annular as the hose was below the borehole water level
(17)$$Q_{\rm s} = ( \pi r_{\rm s}^2 - \pi r_{\rm d}^2) {{\rm d}h\over {\rm d}t},\; $$
where r s = 0.14 m is the radius of BH19g at the surface (see Appendix B). For PT3
(18)$$Q_{\rm s} = \pi r_{\rm s}^2 {{\rm d}h\over {\rm d}t}.$$
As the measurement of hydraulic head in BH19g did not start until after the pumping test, we assume that the rate of change of hydraulic head was the same in BH19g and BH19e.
These calculations reveal that during the pumping test the vast majority (90%) of water pumped into the borehole was discharged from the base (Table 2). Furthermore, this discharge from the borehole base (Q o) was remarkably steady, averaging 1.12 × 10−3 m3 s−1 with a standard deviation of 1.1 × 10−6 m3 s−1. It follows that the bulk velocity of the water ($\overline {U}_{\rm w} = Q_{\rm o} / \pi r_0^2$ ) through the borehole near the base during all periods was also steady, averaging 3.2 × 10−2 m s−1 with a standard deviation of 3.1 × 10−5 m s−1.
Table 2. Statistics for the BH19g(e) pumping test. V o is the volume of water discharged from the borehole base during the period. All other symbols are defined in the text.
* Calculated using the Thiem (Reference Thiem1906) method (Eqn (21)).
† Calculated using the analytical solution to the simplified Hewitt and others (Reference Hewitt, Chini and Neufeld2018) model (Eqn (23b)).
To test whether the outflow of borehole water during the pumping test was laminar or turbulent we calculated the Reynolds number (Re) using Eqn (4). During all periods, Re ≈ 3750, indicating that flow of water in the bottom of the borehole was turbulent during the pumping tests. If, however, we assume that water leaves the borehole through a gap of width δ the Reynolds number for flow through this gap is
(19)$$Re = {Q_{\rm o} D_{\rm h} \rho_{\rm w}\over 2 \phi \pi r \delta \eta_{\rm w}},\; $$
where D h is the hydraulic diameter of the water film, r is the distance from the borehole, and ϕ is the areal fraction of the bed occupied by the gap (de Marsily, Reference de Marsily1986; Iken and others, Reference Iken, Fabri and Funk1996). For thin films with a large lateral extent D h can be approximated as 2δ (de Marsily, Reference de Marsily1986) and the equation can be simplified to
(20)$$Re = {Q_{\rm o} \rho_{\rm w}\over \phi \pi r \eta_{\rm w}}.$$
Using Eqn (20) and following the approach of Lüthi (Reference Lüthi1999), the transition from turbulent to laminar flow occurs at a distance of ~1 m from the borehole base for even the low value of ϕ = 0.1. Hence, water flow beyond this point can be treated as laminar allowing the application of standard hydrogeological techniques.
3.2.2 Hydraulic transmissivity according to the Thiem method
The hydraulic transmissivity (T s) of a subglacial sediment layer can be calculated by applying the Thiem (Reference Thiem1906) method to the pumping test data. The Thiem method assumes that a steady state has been reached within a vertically-confined, homogeneous, isotropic and incompressible aquifer with Darcian flow. In these limits the hydraulic transmissivity
(21)$$T_{\rm s} = {Q_{\rm o}\over 2 \pi s} \ln {R\over r},\; $$
where r = 4.1 m is the horizontal distance between the injection borehole (BH19g) and the monitoring borehole (BH19e), and s = h − h 0, is the mean hydraulic head (h) during the pumping test above the reference head (h 0). The radius of influence (R) is the distance to the theoretical point at which the hydraulic head remains unchanged at the equilibrium level (i.e. at radial distance R, h = h 0; s = 0; Fig. 2). (Note that the subscript in T s indicates that the method used assumes Darcian flow through sediment rather than through a gap at the ice-sediment interface, later denoted T g, or some combination of the two, for which we use T to represent the effective transmissivity.) The strong response of hydraulic head in BH19e to breakthrough in BH19g and the close agreement between head in these boreholes during the recovery phase (Fig. 3) indicates that the radius of influence is greater than the distance between BH19e and BH19g, which is 4.1 m at the surface. On the other hand, assuming a homogeneous, isotropic aquifer, the lack of a positive pressure peak in BH19c suggests the radius of influence is <70 m. Using the Thiem (Reference Thiem1906) Eqn 21, and reasonable R values of 10 and 70 m gives hydraulic transmissivity from (1.31 − 4.75) × 10−5 m2 s−1 (Table 2).
Although the Thiem (Reference Thiem1906) method is well established, it has limitations. The first is that the radius of influence R is difficult to interpret physically. The second is the requirement that a steady state has been reached. A third limitation in our application is that to calculate the flux of water leaving the base of the injection borehole (BH19g) we assume that the rate of change in hydraulic head is the same in BH19g as that recorded in BH19e.
3.2.3 Hydraulic transmissivity according to the Hewitt model
An alternative method to calculate the transmissivity from the pumping test data is through the application of an analytical solution to the simplified Hewitt and others (Reference Hewitt, Chini and Neufeld2018) model. During the pumping test Q o is steady, thereby permitting an asymptotic solution of Eqn (12) that, based on the monitoring borehole at radius r being sufficiently near to the injection borehole, gives
(22)$$h( r) \to -{Q_{\rm o}\over 2\pi T} \ln \left(r \sqrt{{\rho g\over A_1 t}}\right).$$
Hence, the predicted rate of change in hydraulic head at the nearby monitoring borehole is:
(23a,b)$${\partial{h}\over \partial{t}} \to {Q_{\rm o}\over 4\pi T t} \quad \quad \quad T = {Q_{\rm o}\over 4 \pi t } \left({\partial{h}\over \partial{t}} \right)^{-1}. $$
This expression is independent of parameters B, M and b and is sensitive only to the transmissivity. In principle this provides an alternative means of predicting T from the measured rate of change in hydraulic head during the pumping test, which avoids the limitations of the Thiem (Reference Thiem1906) method outlined in Section 3.2.2. This method (Eqn (23b)) gives estimates of T decreasing from 7.96 × 10−5 m2 s−1 during PT1, to 3.93 × 10−5 m2 s−1 during PT2, to 0.62 × 10−5 m2 s−1 during PT3 (Table 2).
3.3 Recovery tests
After water input to the borehole ceased, the borehole water pressure recovered to the reference head (h 0) over ~ 36 − 50 h (Fig. 3b; Table 1). The range in recovery times can be explained by the variable timing and magnitude of the diurnal cycle in subglacial water pressure (Fig. 3). The observed recovery curves were similar (Fig. 3b) suggesting spatially uniform subglacial hydrological conditions between boreholes. We analysed the early phase of the recovery by fitting an exponential decay curve (Weertman, Reference Weertman1970, Reference Weertman1972; Engelhardt and Kamb, Reference Engelhardt and Kamb1997) and the late phase using the Cooper and Jacob (Reference Cooper and Jacob1946) recovery test method. This provides two further estimates of hydraulic transmissivity: the first at 4 − 5 h post-breakthrough (early-phase), and the second at 14 − 27 h post-breakthrough (late-phase).
3.3.2 Exponential decay curve
The early phase of the recovery curve can be approximated as an exponential decay using the water-film model of Weertman (Reference Weertman1970, Reference Weertman1972):
(24)$$s( t) = s_0 \exp {-t\over D},\; $$
where s 0 is the initial recharge at the time the pumps stopped, t is the time since the pumps stopped, and D is a time constant determined by log-linear fitting (Figs 6a–c). The water-film model, which is referred to as the gap-conduit model in Engelhardt and Kamb (Reference Engelhardt and Kamb1997), is based on the Hagen–Poiseuille equation and assumes laminar flow through a constant-width gap at the interface between the ice and a level, impermeable bed.
Fig. 6. Recovery tests including (a–c) exponential fits (black) applied to the early stage of recovery curves plotted as hydraulic head above background (s) on the logarithmic y-axis against time (t); and (d–e) Cooper and Jacob (Reference Cooper and Jacob1946) recovery test linear-log fitting (black) applied to the late stage of the recovery curves plotted as residual drawdown (s ′) against the logarithm of the time ratio (t/t ′).
In the recovery curves of tests BH19c and BH19e the first part of the curve is missing due to the time taken to lower the pressure transducer to the bed after the drill stem was raised to the surface (Fig. 3a). Hence, s 0 was also treated as an unknown. In the BH19g(e) test the monitoring borehole was different from the injection borehole and the first part of the recovery curve was recorded. The initial BH19g(e) recovery curve was not, however, exponential and linear-log fitting was delayed for 5,000 s (83 min; Fig. 6c). After this delay the trend for BH19g(e) was quasi-exponential, in common with the other tests, and s 0 was again treated as an unknown for this test (Figs 6a–c). Hence, measured s 0 for BH19g(e) is 12.7 m and that calculated by fitting Eqn (24) is 10.1 m. The resulting time constant D was 18,200 s for BH19c, 25,000 s for BH19e and 23,000 s for BH19g(e). Rearranging Eqn (9) of Engelhardt and Kamb (Reference Engelhardt and Kamb1997) allows the gap width δ to be calculated from the time constant as
(25)$$\delta = \left({6 \eta_{\rm w} r_{\rm s}^2\over D \rho_{\rm w} g \phi} \ln{R\over r_0} \right)^{1/3}.$$
Furthermore, if we make the reasonable assumption of laminar flow at distances > 1 m from the borehole (Section 3.2), the transmissivity (T g) of a continuous porous medium equivalent to a gap of width δ is given by de Marsily (Reference de Marsily1986) as
(26)$$T_{\rm g} = \delta^3 {\phi g \rho_{\rm w}\over 12 \eta_{\rm w}}.$$
Combining Eqns (25) and (26) (see Appendix D) allows T g to be calculated directly from the time constant (D)
(27)$$T_{\rm g} = {r_{\rm s}^2\over 2D} \ln {R\over r_0}.$$
For each test, two values of transmissivity were calculated, bracketing the radius of influence R to 10 − 70 m.
The results show that hydraulic transmissivity was an order of magnitude lower during the early recovery phase than during the pumping test, with hydraulic transmissivity spanning the range (1.8 − 3.5) × 10−6 m s−1 equivalent to gap widths of 0.16 − 0.20 mm for gaps covering the whole of the glacier bed (ϕ = 1; Table 3).
Table 3. Results from the gap-conduit model (exponential fit). Gap width and the apparent hydraulic transmissivity were calculated for two values of the radius of influence (R = 10 and 70 m). Gap widths were additionally calculated for two areal fractions of the bed covered by the gap (ϕ = 0.1 and 1.0). The apparent gap transmissivity is independent of ϕ because gap cross-sectional area is a product of δ and ϕ.
3.3.3 Cooper and Jacob recovery tests
Hydraulic transmissivity can also be derived from the later stages of the recovery curve using the Cooper and Jacob (Reference Cooper and Jacob1946) recovery test method, providing information about the nature of the subglacial hydrological system as it returns to its original state. This method is based on the observation that, after a certain period of time, drawdown (or in our case drawup) within an aquifer at a given distance from a borehole decreases approximately in proportion to the logarithm of time since the discharge (or in our case recharge) began. The method assumes a non-leaky, vertically-confined aquifer of infinite lateral extent. Although the Theis (Reference Theis1935) method – on which the Cooper and Jacob (Reference Cooper and Jacob1946) method is based – requires a constant pumping rate, the method can be applied to a recovery test (i.e. after the pumps have ceased) using the principle of superposition of drawdown (e.g., de Marsily, Reference de Marsily1986; Hiscock and Bense, Reference Hiscock and Bense2014). Under this principle, pumping is assumed to continue uninterrupted while a hypothetical drawdown well is superimposed on the monitoring well from the time pumping stopped to exactly counteract the recharge from the pump. The residual drawup s′ is
(28)$$s^{\prime} = h - h_0 = {Q\over 4 \pi T} \left[W( u) - W( u^{\prime}) \right],\; $$
where h, h 0, Q and T are as previously defined, and W(u) and W(u′) are well functions for the real and hypothetical boreholes where
$$u = {r^2 S\over 4 T t},\; \qquad u^{\prime} = {r^2 S\over 4 T t^{\prime}},\; $$
and S is the storage coefficient, which cannot be determined using this method. In the previous two equations, t is time since the start of pumping, which for our tests is at breakthrough, and t′ is the time since the pumps stopped. As per the standard Cooper and Jacob (Reference Cooper and Jacob1946) method for pumping tests, for small values of u′ and large values of t′, the well functions can be approximated so that residual drawup can be estimated from the simplified equation
(30)$$s^{\prime} = {2.303 Q\over 4 \pi T} \log_{10} {t\over t^{\prime}}.$$
Hence, linear-log fitting allows hydraulic transmissivity (T s) to be calculated,
(31)$$T_{\rm s} = {2.303 Q\over 4 \pi \Delta s^{\prime}},\; $$
where Δs ′ is the rate of change of residual drawup with respect to the logarithmic time ratio. The Cooper and Jacob (Reference Cooper and Jacob1946) recovery test method described above has the advantage that the rate of recharge can be assumed to be constant, in contrast to that during an actual pumping test, which may vary (Hiscock and Bense, Reference Hiscock and Bense2014).
During the recovery phase, the sampling interval was increased from 5 to 300 s. Prior to application of the Cooper and Jacob (Reference Cooper and Jacob1946) recovery test method, the data were resampled to a constant 5 s interval and interpolated linearly. The data presented in Figures 6d–f extend from the time of pressure transducer installation at the bed (or in the case of BH19g the earlier time at which the pumps were stopped), to when diurnal pressure variations began. Fitting was applied to the later stages of the recovery curve where the trend in recharge versus the logarithmic time ratio was linear, as is required for this method to be appropriate. Accordingly, hydraulic transmissivity was calculated to be 3.0 × 10−6 m2 s−1, 2.2 × 10−6 m2 s−1 and 2.8 × 10−6 m2 s−1 for BH19c, BH19e and BH19g respectively.
4.1 Hydraulic ice-sediment separation
The average drop in borehole water level during breakthrough indicates that the subglacial environment accommodated 4.70 m3 of water within 200 s. For all three boreholes that reached the bed, the delayed recovery to background levels over 36 − 50 h suggests that this breakthrough water and an additional ~10 m3 of water injected during the raise, could not be efficiently drained away from the immediate vicinity of the borehole's base. For example, recovery to the reference head took 45 h following the input of 13.6 m3 of water injected into BH19g at breakthrough and during the drill stem raise (Table 1; Fig. 3b) yielding a mean discharge of 8.4 × 10−5 m3 s−1. If the boreholes had intercepted a conduit with the capacity to drain the water away efficiently then the mean discharge rate would have been higher and the recovery time would have been shorter. Hence, it follows that at least some of this water must have been temporarily stored locally. We hypothesise that water was predominantly stored within a gap opened at the ice-sediment interface facilitated by the overpressure (913 ± 101 kPa; Table 1) exerted at the base of water-filled boreholes due to the greater density of water than ice. In the following analysis we constrain the geometry of this gap and investigate how the gap width changed through time.
An approximate calculation of the plausible range in gap width can be made for the BH19g breakthrough by assuming a uniform cylindrical subglacial water sheet with a radius ranging from 10 − 70 m (that is just greater than the distance to BH19e where a positive peak in pressure was observed and just less than the distance to BH19c where there was no positive peak in pressure). Under these assumptions, a gap width of 0.3 − 16.5 mm could accommodate the 5.17 m3 of water injected in 200 s after BH19g breakthrough. This range is consistent with a lack of discernible ice surface uplift in data collected by a GNSS receiver at R30, confirming that surface uplift was below the precision of the GNSS data of ± 50 mm (Fig. S4). Assuming a straight-sided cylinder with a volume equal to that injected during BH19g of 5.17 m3 the upper bound on the surface uplift of 50 mm provides a lower bound on the radius of the uplift of ~ 5.7 m.
Further estimates of gap widths can be determined from the hydraulic transmissivity measurements. If we assume laminar flow, which is reasonable at distances > 1 m from the borehole (see Section 3.2), the gap width (δ), equivalent to a continuous porous medium with an effective hydraulic transmissivity (T g), is given by rearranging Eqn (26)
(32)$$\delta = \left({12 T_{\rm g} \eta_{\rm w}\over \phi \rho_{\rm w} g}\right)^{1/3}.$$
Assuming the gap is uniformly distributed across the bed (ϕ = 1) these estimates show a decrease from 0.69 mm during breakthrough to a mean of 0.18 mm during the late recovery phase (Table 4; Fig. 7). A comparable trend was measured by Lüthi (Reference Lüthi1999) using similar methods on Sermeq Kujalleq (Jakobshavn Isbræ), with gap widths decreasing from 0.7 − 0.9 mm during a pumping test to 0.5 mm during the recovery phase. Similarly, pumping tests on a prism of simulated sediment installed beneath Engabreen yielded gap widths of 0.4 − 1.0 mm during pumping and 0.1 − 0.2 mm during recovery (Iverson and others, Reference Iverson2007). We interpret this decrease in hydraulic transmissivity and equivalent gap widths with time since breakthrough (Fig. 7) as evidence for progressive closure of gaps opened at the ice-sediment interface (in response to decreasing hydraulic head). Both our estimates, and those of Lüthi (Reference Lüthi1999) and Iverson and others (Reference Iverson2007), are lower than those of 1.4 − 2.0 mm estimated from boreholes drilled on Whillans Ice Stream (formerly Ice Stream B) in West Antarctica; however, this may, at least partly, be explained by the earlier timing made possible by measuring pressure within the Whillans boreholes while they were drilled (Engelhardt and Kamb, Reference Engelhardt and Kamb1997). The areal extent of the gap exerts a relatively weak control on gap width, with gap width approximately doubling for gaps occupying just one-tenth of the bed (ϕ = 0.1; Table 4; Fig. 7). Other lines of evidence that support the gap opening hypothesis are discussed below.
Fig. 7. Hydraulic transmissivity (T) from multiple tests and methods plotted against time (t) since respective breakthrough. The equivalent gap width (δ) is shown on the right-hand axes for gaps covering a range of fractions of the bed (ϕ = 1 and ϕ = 0.1). Where appropriate, the range in the hydraulic transmissivity derived using radius of influence R = 10 − 70 m is shown by error bars.
Table 4. Summary of borehole response test results.
a Simplified model (Eqn (14)).
b Analytical solution (Eqn (23b)).
The initial drop in hydraulic head in BH19e was punctuated by a 14 m increase after 20 ± 5 s, which we interpret to be the arrival of the water from the BH19g breakthrough event through a gap opened at the ice-sediment interface. The delayed arrival of the pressure increase demonstrates that no efficient hydraulic connection existed between BH19e and BH19g prior to the breakthrough of BH19g. The 20 ± 5 s delay between the start of the load increase on the drill tower and the start of the pressure increase in BH19e gives a mean velocity of the pressure pulse of 0.20 ± 0.04 m s−1. Similar pressure pulse propagation velocities of 0.08 − 0.18 m s−1 were observed on Whillans Ice Stream (Engelhardt and Kamb, Reference Engelhardt and Kamb1997). If a conduit existed between BH19g and BH19e prior to breakthrough, the pressure pulse would be transmitted at the speed of sound (1440 m s−1) and attenuated in amplitude by the viscosity of water at a rate proportional to the gap width (Engelhardt and Kamb, Reference Engelhardt and Kamb1997). The observed delay of 20 ± 5 s is four orders of magnitude longer than the expected delay of a sound wave through 4.1 m of water of 0.003 s, which confirms that no conduit existed between BH19g and BH19e prior to breakthrough. Instead, we infer that the delay represents the propagation velocity of the gap tip outwards from BH19g.
On the other hand, the disturbance in hydraulic head in BH19e caused by attempts to free a piezometer snagged at 394 m depth in BH19g, demonstrates that a hydraulic connection between the two boreholes was present at this time 2.4 h after breakthrough (Fig. 5). The piezometer in BH19g was freed after repeated pulling on the cable, which caused the hydraulic head to fluctuate in BH19e, with disturbance continuing as the piezometer was lowered to the bed. We infer that this inter-borehole transmission of pressure perturbations indicates an open gap at the ice-sediment interface at this time.
The performance of the simplified Hewitt and others (Reference Hewitt, Chini and Neufeld2018) model in predicting the pressure response to borehole breakthrough provides further evidence for gap opening. The simplified model makes a reasonable prediction of the initial pressure response in BH19e to BH19g breakthrough (Fig. 4). The model closely reproduces the small (0.93 m) drop in hydraulic head followed by the rapid rise within the first minute. This suggests that the small drop in BH19e head can be explained by the propagation of a flexural wave through the ice that is faster than the spread of water. Furthermore, the initial drop in pressure indicates that the sediment is deformable because such a drop cannot be reproduced by the model if the sediment is rigid (see Fig. 7b of Hewitt and others, Reference Hewitt, Chini and Neufeld2018). The model, however, predicts that the hydraulic head should reduce much more rapidly after the peak than was observed (Fig. 4a). Furthermore the analytical solution to the model (Eqn (23b)) predicts that ∂h/∂t should decrease non-linearly as 1/t, whereas the measured linear trends in hydraulic head during the pumping test suggest that ∂h/∂t was constant (Fig. 5). Both these disparities can be explained by gap opening.
The response of hydraulic head in BH19e to BH19g breakthrough and pumping (Figs 4 and 5) resembles the idealised pressure response of petroleum reservoirs to hydraulic fracture treatment (cf. Fig. 18a of Hubbert and Willis, Reference Hubbert and Willis1957). Specifically, the BH19g(e) breakthrough curve can be interpreted as a horizontal hydraulic fracture induced from a relatively smooth borehole, which is consistent with our interpretation of gap-opening at the ice-sediment interface induced by borehole breakthrough. We can therefore apply hydraulic fracture treatment theory to interpret the response to BH19g(e) breakthrough, as follows. After the initial drop in head, the arrival of water in BH19e is marked by a steep rise (labelled A in Figs 4a and 5), and the gradient of this increase indicates compression of the water and subglacial sediment prior to the initiation of gap opening beyond BH19e. As gap opening begins the energy stored within the compressed water and sediment is transferred to gap propagation outwards from BH19e resulting in more space for the water to occupy, and therefore lower pressure and a decrease in the gradient (dh/dt; label B). The peak in head after 130 s represents the transition from stable to unstable gap opening at the so-called 'breakdown pressure'. The ensuing transient head decrease (label C) can be explained by the gap opening rate transiently exceeding the water input rate, and by the diffusion of unevenly distributed pressure within the immature gap. With continued water input, a steady state of gap opening was reached resulting in the linear trend in hydraulic head (label D). In our pump tests, the recharge from the pump exceeded the discharge through the gap and the borehole filled with water at a linear rate determined by the supply rate from the pumps and the extraction rate of the drill hose. That water input exceeded water output during the pumping test despite discharge rates being much lower than during breakthrough provides evidence for partial gap closure in response to reduced water pressure. When the pumps ceased, head briefly stayed constant before dropping rapidly and then transitioning into a logarithmic decay representing gap closure and reversion to Darcian flow. In petroleum engineering, the pressure at the onset of the rapid drop (label E) has been interpreted to approximate the fracture propagation pressure. For BH19g(e) this occurs at 9.290 MPa, which is comparable to the ice overburden pressure (Table 1), and is thus consistent with hydraulic ice-sediment separation. This interpretation suggests that the application of hydraulic fracture models to borehole breakthrough and pumping tests would be an improvement over hydrogeological techniques such as the Thiem (Reference Thiem1906) method, which inherently assume Darcian flow through an incompressible, isotropic aquifer. Such assumptions are unlikely to be valid if gap opening is taking place and this may explain the difference between the Thiem (Reference Thiem1906) and (analytical) Hewitt and others (Reference Hewitt, Chini and Neufeld2018) estimates of transmissivity during the pumping test (Table 4; Fig. 7).
The observation of an instantaneous drop in hydraulic head of 0.11 m in BH19c in response to BH19g breakthrough without a subsequent increase in head (Fig. 4a) also cannot be reproduced by the simplified Hewitt and others (Reference Hewitt, Chini and Neufeld2018) model; the model predicts a flexural wave that would be apparent at any fixed radius as a small pressure drop followed by a large pressure rise. We hypothesise that the drop in pressure in BH19c is caused by uplift at the BH19g injection site increasing the volume of a hydraulically-isolated cavity at BH19c, and that cavity expansion without an increase in water mass leads to a reduction in water density and pressure – that is a rarefaction. The simplified Hewitt and others (Reference Hewitt, Chini and Neufeld2018) model cannot reproduce rarefactions caused by stress transfer through the ice because it assumes that water compressibility is zero and, more fundamentally, it directly couples vertical displacement of the ice to the pressure in the subglacial environment, so that cavity expansion cannot occur without an increase in pressure (and vice versa). Further evidence for hydraulic isolation of the BH19c cavity is provided by diurnal water pressure variations that are anti-correlated with those in BH19e and ice velocity (Figs 8a.b; e.g., Murray and Clarke, Reference Murray and Clarke1995; Meierbachtol and others, Reference Meierbachtol, Harper, Humphrey and Wright2016; Lefeuvre and others, Reference Lefeuvre2018). The inference of BH19c cavity isolation is also supported by the observation that diurnal pressure variations in BH19c are manifested as small (~ 0.05° C peak-to-peak) temperature cycles recorded at the base of BH19c (Fig. 8). This demonstrates that the water temperature quickly equilibrates with the pressure-dependent ice temperature, which would occur within an isolated cavity but not in a connected conduit. We would expect that within a connected conduit a throughput of water from different regions of the bed at variable pressures and temperatures would mask the small pressure-driven diurnal variations in temperature.
Fig. 8. Time series of (a) horizontal ice velocity, (b) hydraulic head in BH19c and BH19e, (c) temperature at the base of BH19c, and (d) pressure-dependent melting temperature T m calculated from the water pressure recorded in BH19c. Note that although the y-axes for (c) and (d) are offset the y-axis range is identical for both. The offset between measured temperature and T m can be explained by uncertainties in the sensor installation depths and the Clausius–Clapeyron gradient.
Rearranging the equation of state for water assuming mass is conserved and that temperature is constant, allows the pressure change to be related to the change in cavity volume
(33)$${V\over V_0} = {1\over \exp[ \beta_{\rm w}( p_{\rm w} - p_{{\rm w}0}) ] },\; $$
where V 0 and p w0 are the reference volume and pressure and β w = 5.1 × 10−10 Pa−1 is the compressibility of water. We can constrain the initial cavity geometry in two situations. First, the observation of no prior hydraulic connection between BH19e and BH19g, which were separated at the surface by 4.1 m, indicates the BH19e cavity was smaller than this distance. Second, the volume of water drained during BH19c breakthrough and the hose raise of 15.6 m3 provides an approximate maximum constraint on the BH19c cavity volume. These constraints are consistent with measurements of dye dilution in boreholes drilled on Isunnguata Sermia, which indicated cavity volumes of 7.6 ± 6.7 m3 (Meierbachtol and others, Reference Meierbachtol, Harper, Humphrey and Wright2016). Assuming the initial BH19c cavity volume was within the reasonable range of 0.5 − 15 m3 the small 0.11 m decrease in hydraulic head measured in BH19c located ~70 m distant can be explained by the contraction of the BH19c cavity of 0.3 − 8.2 × 10−6 m3. This demonstrates that, due to the low compressibility of water, the 0.11 m head decrease can be explained by a small cavity contraction of $5.5 \times 10^{-5} \% $ . Hence, we hypothesise that hydraulic ice-sediment separation caused by the overpressure at the base of BH19g caused elastic uplift of the BH19c cavity roof. The 0.11 m drop in BH19c head in response to BH19g breakthrough therefore provides direct evidence for the hypothesis of Murray and Clarke (Reference Murray and Clarke1995) that pressure variations in hydraulically-isolated cavities occur due to elastic displacement of the ice roof driven by perturbations in hydraulically-connected regions of the bed. We discuss this further in Section 4.3.
4.2 Hydraulic conductivity of subglacial sediments
We interpret the decrease in hydraulic transmissivity with time since breakthrough (Table 4; Fig. 7) as evidence for the closure of a gap at the ice-sediment interface that was opened by the overpressure at borehole breakthrough. It is notable that hydraulic transmissivity estimates derived using the Cooper and Jacob (Reference Cooper and Jacob1946) recovery tests were relatively constant (that is within 8 × 10−7 m2 s−1), despite the tests occurring over a wide range in time since breakthrough (14.1 − 27.2 h; Table 4; Fig. 7). Hence, these tests may be representative of Darcian flow through the sediment layer after gap closure. This suggestion is supported by the observation that the drawdown decreased logarithmically through time (Figs 6d–f) as is expected under Darcian flow, which is unlikely to be the case if gap closure was incomplete. Darcian flow through subglacial sediments was also inferred at site S30 from the initially logarithmic recovery in subglacial water electrical conductivity (EC) observed over 12 h following the dilution effect caused by drilling with low EC surface waters (Doyle and others, Reference Doyle2018).
When there is no flow through a gap at the ice-sediment interface, hydraulic transmissivity (T) is the hydraulic conductivity (K) integrated over the sediment thickness b
(34)$$T = bK.$$
The sediment thickness at the borehole location has been estimated at $20_{-2}^{ + 17}$ m by fibre-optic distributed acoustic seismics in BH19c (Booth and others, Reference Booth2020). The full sediment thickness represents an upper limit for the calculation of hydraulic conductivity due to an increase in sediment compaction with depth, and the pressure-dependent depth limit to the diffusion of water from the ice-sediment interface (Tulaczyk and others, Reference Tulaczyk, Kamb and Engelhardt2000). For the range of hydraulic transmissivity from the Cooper and Jacob (Reference Cooper and Jacob1946) recovery tests of (2.2 − 3.0) × 10−6 m2 s−1 (Table 4), and a range of reasonable 'hydraulically-active' sediment thicknesses of 2 − 20 m, the hydraulic conductivity is (0.1 − 1.5) × 10−6 m s−1. This estimate is reasonable and within the range of hydraulic conductivities of glacial tills found in a range of settings by previous studies (Table 5). The Cooper and Jacob (Reference Cooper and Jacob1946) recovery test for BH19c was performed several hours earlier with respect to the time of breakthrough than those in BH19e and BH19g (Fig. 7) due to the earlier establishment of diurnal pressure variations in BH19c (Fig. 3b). If gap closure was still taking place, this earlier timing could explain the slightly higher transmissivity derived for BH19c. We also cannot exclude the possibility that water flow during breakthrough and pumping tests – or from previous natural subglacial water flow – winnowed fine particles from the upper layer of sediment, increasing the hydraulic conductivity of this layer (Iverson and others, Reference Iverson2007; Fischer and others, Reference Fischer, Iverson, Hanson, LeB Hooke and Jansson1998). As we cannot exclude winnowing, or be certain that the gap was fully closed, we interpret our estimates to represent an upper bound on the hydraulic conductivity of the sediment beneath this site.
Table 5. Selected hydraulic conductivities of glacial sediments from the literature in ascending order. Sediments at the lower end of the scale (K ≤ 10−4 m s−1) were typically interpreted as unconsolidated sands and gravels, often associated with water flow from subglacial channels winnowing fine particles (Fischer and others, 1998).
Our inferred sediment hydraulic conductivity is two orders of magnitude higher than that determined from laboratory analysis of sediment retrieved from beneath Whillans Ice Stream (Engelhardt and others, Reference Engelhardt, Humphrey, Kamb and Fahnestock1990) and Trapridge Glacier in Canada (Murray and Clarke, Reference Murray and Clarke1995), see Table 5. A hydraulic conductivity of 10−7 − 10−6 m s−1 is, however, broadly consistent with the type of glacigenic sediment within core samples taken from Uummannaq Fjord. These core samples comprise glacimarine sediments deposited during the last glacial maxima including matrix supported diamict with angular to sub-angular clasts of basalt and granitic gneiss dispersed throughout a sandy mud matrix (Ó'Cofaigh and others, Reference Ó'Cofaigh2013).
Laboratory measurements of the hydraulic conductivity of glacial sediments, which inherently measure only Darcian flow, are typically a few orders of magnitude lower than field measurements (Table 5; Hubbard and Maltman, Reference Hubbard and Maltman2000), a disparity that could, at least partly, be explained by residual gap opening at the ice-sediment interface during borehole response tests (e.g., Fountain, Reference Fountain1994; Stone and others, Reference Stone, Clarke and Ellis1997). While in situ measurement of hydraulic conductivity of subglacial sediments appears to overestimate hydraulic conductivity under strict Darcian flow conditions, laboratory measurements provide little insight into the complexity of subglacial hydrological processes such as ice-sediment separation. Furthermore, as glacial sediment is by nature poorly sorted, with grain sizes ranging from boulders to clays, analysing samples that are large enough to be representative in laboratory experiments conducted at the scale necessary is more difficult than conducting in situ measurements (Clarke, Reference Clarke1987; Hubbard and Maltman, Reference Hubbard and Maltman2000). True subglacial water flow at this site may neither occur as entirely Darcian (laminar) flow through subglacial sediment nor exclusively through a gap at the ice-sediment interface, but rather a combination of the two. In any case, our in situ measurements represent a constraint on the effective hydraulic transmissivity that is independent of the process of water flow.
4.3 Implications for subglacial hydrology and basal motion
Subglacial water flow at glaciers underlain by porous sediment will naturally occur as laminar Darcian flow through interconnected pore spaces, although only insofar as the hydraulic transmissivity of the sediment is sufficient to accommodate the input of meltwater. With sustained inputs of water to the bed of many glaciers, from surface melt, for example, it may also be natural for a portion of that input to be stored temporarily in gaps opened at the ice-sediment interface, when water is delivered faster than it can permeate into the sediment below. The evidence presented herein demonstrates that the overpressure of a water-filled borehole can open a gap at the ice-sediment interface and need not directly intersect an active subglacial drainage system in order to drain. The delayed arrival of the pressure pulse in BH19e rules out the existence of sheet flow (Weertman, Reference Weertman1970; Alley and others, Reference Alley, Blankenship, Rooney and Bentley1986; Creyts and Schoof, Reference Creyts and Schoof2009), efficient conduits such as R-channels or canals (e.g., Röthlisberger, Reference Röthlisberger1972; Walder and Fowler, Reference Walder and Fowler1994; Ng, Reference Ng2000), and linked cavities (e.g., Kamb, Reference Kamb1987) prior to BH19g breakthrough, but supports the gap-opening theory of Engelhardt and Kamb (Reference Engelhardt and Kamb1997). We infer that prior to the breakthrough of BH19g, subglacial drainage at this location consisted exclusively of Darcian flow through subglacial sediments with a hydraulic conductivity K ≤ 10−6 m s−1.
Borehole drainage at the ice-sediment interface may be physically similar, but of lower magnitude, to that which occurs during the subglacial drainage of proglacial (Sugiyama and others, Reference Sugiyama, Bauder, Huss, Riesen and Funk2008), subglacial (e.g., Jóhannesson, Reference Jóhannesson2002) and supraglacial lakes (Tsai and Rice, Reference Tsai and Rice2010, Reference Tsai and Rice2012; Doyle and others, Reference Doyle2013; Dow and others, Reference Dow2015; Stevens and others, Reference Stevens2015; Hewitt and others, Reference Hewitt, Chini and Neufeld2018) via a broad, turbulent and transient sheet. We note that gap opening at the ice-sediment or ice-bedrock interface is conceptually the same as horizontal hydraulic fracture at this interface as envisaged by previous studies (Tsai and Rice, Reference Tsai and Rice2010, Reference Tsai and Rice2012; Hewitt and others, Reference Hewitt, Chini and Neufeld2018). Rapid water flow into this narrow gap is likely to be turbulent (Section 3.1.1); however, flow must become laminar near the gap tip as the width of the gap decreases to zero, and flow velocity will also decrease with distance from the injection point (Hewitt and others, Reference Hewitt, Chini and Neufeld2018). Continued sheet flow through a uniform gap would be unstable as irregularities in flow would theoretically favour the formation of conduits through preferential sediment erosion and concentrated ice melt from frictional heat (Röthlisberger, Reference Röthlisberger1972; Walder and Fowler, Reference Walder and Fowler1994; Ng, Reference Ng2000). Conduit development beneath kilometre-thick ice is, however, anticipated to require continuous water supply at high pressure over prolonged periods, which may only occur if there is continued water input from the surface (e.g., Dow and others, Reference Dow, Kulessa, Rutt, Doyle and Hubbard2014, Reference Dow2015). Hence, our inference of complete, or at least partial, gap closure in response to declining pressure is consistent with existing theory as the water volumes provided by borehole drainage and subsequent pumping (~15 m3) are likely insufficient to establish an efficient conduit beneath kilometre-thick ice. The development of efficient conduits in response to borehole breakthrough can also be excluded by the low discharge rate of 8.4 × 10−5 m3 s−1 calculated from the 45 h required for hydraulic head to recover to the equilibrium level following the injection of 13.6 m3 of water at BH19g breakthrough and during the drill stem raise. Although we cannot rule out the persistence of stable sheet flow following borehole drainage facilitated by clasts partially supporting the ice overburden pressure (Creyts and Schoof, Reference Creyts and Schoof2009), our observations of a progressive decrease in hydraulic transmissivity can be entirely explained by gap closure and a reversion to Darcian flow through the sediment layer. For simplicity, this and previous studies (Tsai and Rice, Reference Tsai and Rice2010, Reference Tsai and Rice2012; Hewitt and others, Reference Hewitt, Chini and Neufeld2018) make the reasonable assumption that initial gap opening is elastic; however, where temperate ice is present, as it is at R30, viscous deformation cannot be neglected during the longer time scales of pumping tests or lake drainage events (Appendix C). The application of a viscoelastic model (e.g., Reeh and others, Reference Reeh, Christensen, Mayer and Olesen2003) to borehole response tests (and lake drainage events) would therefore represent an improvement over the analysis presented herein (and previously).
The instantaneous 0.11 m drop in BH19c head in response to BH19g breakthrough (Fig. 4a) provides direct evidence for the hypothesis of Murray and Clarke (Reference Murray and Clarke1995) that pressure variations can be transmitted to unconnected cavities through elastic displacement of the ice roof. Murray and Clarke (Reference Murray and Clarke1995) theorised that uplift caused by high water pressure relieves the pressure in adjacent hydraulically-isolated cavities. This hypothesis is one of three hypotheses of mechanical forcing of water pressure that have been proposed to explain the often observed diurnal variation of water pressure in hydrologically-isolated cavities that is out of phase with both ice velocity and water pressure in boreholes and moulins deemed to be connected to efficient subglacial conduits (Murray and Clarke, Reference Murray and Clarke1995; Engelhardt and Kamb, Reference Engelhardt and Kamb1997; Gordon and others, Reference Gordon1998; Dow and others, Reference Dow, Kavanaugh, Sanders, Cuffey and MacGregor2011; Andrews and others, Reference Andrews2014; Ryser and others, Reference Ryser2014; Lefeuvre and others, Reference Lefeuvre, Jackson, Lappegard and Hagen2015; Meierbachtol and others, Reference Meierbachtol, Harper, Humphrey and Wright2016; Rada and Schoof, Reference Rada and Schoof2018). While we cannot rule out the possibility that such anti-correlated diurnal pressure and velocity variations in BH19c (Fig. 8) can be attributed to the alternative hypotheses of cavity expansion and contraction caused by longitudinal strain (Ryser and others, Reference Ryser2014) or basal sliding (Iken and Truffer, Reference Iken and Truffer1997; Bartholomaus and others, Reference Bartholomaus, Anderson and Anderson2011; Hoffman and Price, Reference Hoffman and Price2014), displacement of the ice roof due to elastic uplift during gap-opening at BH19g breakthrough can entirely explain the 0.11 m instantaneous drop in BH19c head. It is therefore plausible that elastic displacement of the ice roof by diurnal pressure variations within a nearby conduit also explains the anti-correlated diurnal variations in BH19c pressure. This assertion is supported by three-dimensional full-Stokes modelling (Lefeuvre and others, Reference Lefeuvre2018) that reproduced anti-correlated pressure variations between connected and unconnected components of the subglacial drainage system without invoking cavity expansion caused by sliding.
Similar to borehole breakthrough events, we argue that water flow at the ice-sediment interface may also occur at times of naturally high subglacial water pressures. It is important to note that the gap widths we report are probably larger than would have occurred naturally for the same volume of cold glacial water because warm drilling water would have enlarged the gaps through ice melt. The greater variability in meltwater supply means that gap opening at the ice-sediment interface is more likely to occur naturally on the Greenland Ice Sheet, and on mountain glaciers, than on the West Antarctic ice streams where the process was originally inferred (Engelhardt and Kamb, Reference Engelhardt and Kamb1997). Hence, gap opening at the ice-sediment interface has important implications for our understanding of subglacial hydrological systems that extends beyond its ability to explain the drainage of boreholes. Subglacial hydrology in ice-sheet models may for instance include exchanges of water flowing partly at the interface and partly within subglacial sediment, which has proven efficient in reproducing day-to-day variations in ice flow as observed at the land-terminating southwest ice margin (Bougamont and others, Reference Bougamont2014). Darcian flow and gap-opening therefore provide a physical explanation for the partitioning of water flowing at the interface and within subglacial sediment.
Gap-opening may also play a role in the formation and growth of subglacial drainage systems. Within the framework of existing theory, gap opening provides the initial conduit that may later develop into an inefficient narrow orifice in a distributed (i.e. linked cavity) drainage system (Kamb, Reference Kamb1987), which may ultimately develop into an efficient channel or canal (Röthlisberger, Reference Röthlisberger1972; Walder and Fowler, Reference Walder and Fowler1994; Ng, Reference Ng2000). That the overpressure of a water-filled vertical conduit stretching from the surface to the bed (i.e. a borehole) can open a gap at the ice-sediment interface, despite the low volumes of water involved, has implications for the establishment of subglacial drainage of the much larger water volumes supplied via moulins, crevasses and supraglacial lakes. It illustrates the manner in which regions of the basal environment can become hydrologically connected during peaks in water pressure. Hence, gap opening can explain transient periods of borehole water pressure synchroneity that abruptly punctuate the often observed long-term pattern of anti-correlated variations in water pressure and velocity measured in hydraulically-isolated cavities during periods of high water pressure (e.g., Murray and Clarke, Reference Murray and Clarke1995; Engelhardt and Kamb, Reference Engelhardt and Kamb1997; Harper and others, Reference Harper, Humphrey, Pfeffer and Lazar2007; Andrews and others, Reference Andrews2014; Rada and Schoof, Reference Rada and Schoof2018). If areas of the bed that were previously hydraulically isolated experience net drainage as a result of gap opening at the ice-sediment interface, it may also explain the hydro-mechanical regulation of ice flow (e.g., Sole and others, Reference Sole2013; Tedstone and others, Reference Tedstone2015; Davison and others, Reference Davison2020), which observations suggest cannot be entirely explained by water pressures within efficient channels (Andrews and others, Reference Andrews2014). It follows that drainage at the ice-sediment interface and Darcian flow through sediments with a low hydraulic conductivity may be two of potentially multiple processes behind the hypothesised weakly-connected component of the subglacial drainage system (Hoffman and others, Reference Hoffman2016).
A drainage system consisting of cavities, which we assume are present at the base of our boreholes, linked via gaps opened at the ice-sediment interface would at first appear similar to the linked cavity theory of glacial drainage, which consists of cavities connected via narrow orifices (e.g., Kamb, Reference Kamb1987). There is, however, an important distinction in that the linked cavity model specifies that orifices are continuously open and water flow is inefficient and turbulent due to the length and narrowness of orifices (Kamb, Reference Kamb1987). Modification of the linked cavity theory to allow transient gap opening between cavities under high water pressure with turbulent flow would explain the same characteristics associated with linked cavity drainage systems: enhanced basal motion, sediment entrainment (as indicated by increased turbidity) and increased connectivity of the bed at times of high water pressure. It would also explain the existence of neighbouring yet behaviourally-independent subglacial drainage subsystems in close proximity (e.g., Murray and Clarke, Reference Murray and Clarke1995; Harper and others, Reference Harper, Humphrey, Pfeffer and Lazar2007; Rada and Schoof, Reference Rada and Schoof2018), which the majority of previous models of subglacial drainage cannot reproduce as they inherently allow water to diffuse across the entire glacier bed (e.g., Schoof, Reference Schoof2010; Hewitt, Reference Hewitt2013; Werder and others, Reference Werder, Hewitt, Schoof and Flowers2013). This implies a strong link between subglacial hydrology, stresses within the ice and basal motion that will be challenging to reproduce within numerical models due to the requirement to combine linear-elastic gap opening with a viscous ice rheology.
To date, every borehole drilled on Sermeq Kujalleq (Store Glacier) drained rapidly and immediately upon reaching the bed. This includes three boreholes at R30 in 2019, four boreholes at R29 in 2018 (unpublished), and seven boreholes at S30 in 2014 and 2016 (Doyle and others, Reference Doyle2018). A similar pattern of rapid borehole drainage, with a small number of exceptions, has been reported for Whillans Ice Stream in West Antarctica (Engelhardt and Kamb, Reference Engelhardt and Kamb1997) and Sermeq Kujalleq (Jakobshavn Isbræ) in West Greenland (Lüthi, Reference Lüthi1999). While the results presented here provide further evidence for gap opening as a mechanism for rapid borehole drainage, it also raises the question of why some boreholes on other ice masses do not drain rapidly upon reaching the bed. Some boreholes appear to never drain (e.g., Smart, Reference Smart1996), while others drain slowly (e.g., Andrews and others, Reference Andrews2014), and others drain after a delay (e.g., Gordon and others, Reference Gordon2001; Kamb and Engelhardt, Reference Kamb and Engelhardt1987; Engelhardt and Kamb, Reference Engelhardt and Kamb1997; Fischer and Clarke, Reference Fischer and Clarke2001). This heterogeneity, which often occurs within the same field site, could be explained by the stress regime, boreholes terminating blind in debris-rich basal ice before they are able to connect to the subglacial drainage system, or by the presence of impermeable barriers such as areas of ice-bedrock contact or cold ice, the latter of which can occur even within predominantly temperate glaciers (Robin, Reference Robin1976). A detailed discussion of the heterogeneity of borehole drainage is not warranted here (see instead Smart, Reference Smart1996 and Gordon and others, Reference Gordon2001), but we do seek an explanation for the homogeneity in borehole drainage observed to date on Sermeq Kujalleq (Store Glacier). Hot water drilling is ineffective at penetrating debris-rich basal ice, which is characteristic of many exposed margins of the Greenland Ice Sheet, for example, on Russell Glacier (Knight and others, Reference Knight, Waller, Patterson, Jones and Robinson2002) and at the base of icebergs discharging from Sermeq Kujalleq (Jakobshavn Isbræ; Lüthi and others, Reference Lüthi, Fahnestock and Truffer2009), yet none of the boreholes drilled to date on Sermeq Kujalleq (Store Glacier) terminated above the bed due to an obstruction by englacial clasts. We therefore speculate (while noting the small number of boreholes drilled at a limited number of sites) that debris content within basal ice on Sermeq Kujalleq (Store Glacier) may be low. If so, this could be explained by the removal of debris-rich basal ice formed upstream by basal melt. Furthermore, low (and potentially even negative) effective pressures (e.g. − 46 ± 102 kPa at R30; Table 1) are conducive to hydraulic ice-bed separation (e.g., Schoof and others, Reference Schoof, Hewitt and Werder2012) and these conditions are found at all the Sermeq Kujalleq (Store Glacier) sites drilled to date. Modelling of subglacial drainage through a poroelastic sediment and cavity suggests that elastic gap opening is enabled by the suction of water from an underlying porous sediment layer without the requirement for a pre-wetted water film (Hewitt and others, Reference Hewitt, Chini and Neufeld2018). We therefore conclude that rapid borehole drainage on Sermeq Kujalleq (Store Glacier) is facilitated by low effective pressures, subglacial sediment, and a potentially low debris content within basal ice.
Booth and others (Reference Booth2020) used the low basal reflectivity in vertical seismic profiles to infer that the subglacial sediment layer at site R30 has an acoustic impedance similar to that of basal ice, and from this, they suggested that the sediment is consolidated, and neither deforming nor lithified. The inference that the sediment layer is not deforming implies that the fast ice velocity at this site must be accommodated by either enhanced internal deformation of the ice, ice-sediment decoupling under high water pressure (e.g., Iverson and others, Reference Iverson, Hanson, Hooke and Jansson1995, Reference Iverson2007), or deformation of a sediment layer thinner than the 5 − 10 m vertical resolution of the seismic technique. With regard to the last assertion we note that sediment deformation often occurs within an upper layer that is typically only decimetres to a few metres thick (e.g., Clarke, Reference Clarke1987; Murray, Reference Murray1997; Humphrey and others, Reference Humphrey, Kamb, Fahnestock and Engelhardt1993; Engelhardt and Kamb, Reference Engelhardt and Kamb1998), and that the shape of the pressure pulse during BH19g breakthrough can only be reproduced using the model of Hewitt and others (Reference Hewitt, Chini and Neufeld2018) if the sediment layer is deformable. While the extent of sediment deformation beneath this site remains inconclusive the evidence presented herein supports the hypothesis of ice-sediment decoupling under periods of high water pressure. Indeed, we suggest that the theory of gap opening at the ice-sediment interface (Engelhardt and Kamb, Reference Engelhardt and Kamb1997) may involve the same physical process as ice-sediment decoupling envisaged by Iverson and others (Reference Iverson, Hanson, Hooke and Jansson1995). To explain the reverse tilt of inclinometers just below the ice-sediment interface, Iverson and others (Reference Iverson, Hanson, Hooke and Jansson1995) envisaged that sediment would be squeezed into the zone of uplift at times of high water pressure. The modulation of slip by pressurised water at the ice-sediment interface was confirmed by pump tests on a simulated prism of till on Engabreen (Iverson and others, Reference Iverson2007). Further evidence for gap opening and decoupling at the ice-sediment interface is provided by (as far as we are aware) unrepeated, direct observation of a cm-wide gap at the ice-sediment interface of Blue Glacier, USA (Engelhardt and others, Reference Engelhardt, Harrison and Kamb1978). Borehole photography revealed a ~0.1 m thick sediment layer overlying bedrock that was mechanically and visibly distinct from a 0.1 − 16.0 m thick debris-laden basal ice layer. Engelhardt and others (Reference Engelhardt, Harrison and Kamb1978) suggested that the gap was opened by the overpressure of the water-filled borehole and that basal sliding velocities were faster where gaps were present. They also inferred that interstitial pressure within the sediment must be close to or at the ice overburden pressure in order to prevent the basal ice merging with the sediment layer through regelation, an assertion supported by Rempel (Reference Rempel2008). Hence, further in situ observations are required to investigate whether ice-sediment decoupling occurs via a gap at the ice-sediment interface or through an increase in the thickness of the sediment layer as proposed by Iverson and others (Reference Iverson, Hanson, Hooke and Jansson1995), or a combination of both processes as modelled by Hewitt and others (Reference Hewitt, Chini and Neufeld2018).
Detailed measurements of pressure pulses during a borehole breakthrough event, and a decrease in hydraulic transmissivity with time since breakthrough, provide evidence for hydraulic gap opening and closure at the ice-sediment interface, with gaps opening and closing in response to water pressure. Analysis of the subsequent recovery of subglacial water pressure indicates that the hydraulic conductivity of the subglacial sediment layer is on the order of 10−7 − 10−6 m s−1, which suggests it is coarse-grained and more permeable than the fine-grained sediments beneath West Antarctic ice streams. As seismic surveys suggest that sediment at this site is not deforming, we infer that fast basal motion may be accommodated by ice-sediment decoupling and potentially shallow-depth sediment deformation in a layer thinner than the 5 − 10 m resolution of the seismic technique.
Observations of a pressure drop simultaneous with the breakthrough of a borehole 70 m away provides direct evidence for the hypothesis that anti-correlations between water pressure in connected and unconnected regions of the bed can be explained via elastic displacement of the ice roof.
We argue that water flow via gaps opened at the ice-sediment interface is likely to play a critical role in both basal motion and the development of subglacial hydrology on soft-bedded ice masses, and that Darcian flow through sediments may explain the drainage and recharge of areas of the bed that are otherwise hydrologically isolated.
The supplementary material for this article can be found at https://doi.org/10.1017/jog.2021.121.
The data sets presented in this paper are available for download from https://doi.org/10.6084/m9.figshare.16838020.
This research was funded by the European Research Council as part of the RESPONDER project under the European Union's Horizon 2020 research and innovation program (Grant 683043). T.R.C. and R.L. were supported by Natural Environment Research Council Doctoral Training Partnership Studentships (Grant NE/L002507/1). We thank Lee Greenler for providing the code for modelling borehole diameter; Katie Miles, Emma Docherty and Tom Chase for assistance in the construction of borehole sensor strings; and Sean Peters, Mickey MacKie, Mike Prior-Jones, Eliza Dawson and Tun Jan Young for assistance in the field. We are very grateful to Ann Andreasen and the Uummannaq Polar Institute for their kind hospitality. Comments from the Scientific Editor, Ali Graham, B. de Fleurian and two anonymous reviewers resulted in significant improvements to this paper.
The overall research project (RESPONDER) was led by P.C., with B.H. leading the hot water drilling and borehole instrumentation reported herein. Data collection was led by S.D., with contributions from B.H., P.C., R.L., C.S. and T.C. S.D. conducted the data analysis. R.L. adapted and ran the borehole drilling model. T.C. surveyed the borehole positions and led site mapping. D.H. and J.N. calculated the breakthrough volumetric flux and pressure response. The manuscript was written by S.D., with contributions from all co-authors.
Appendix A. List of symbols
Surface and bed slope (°)
β w
Water compressibility (5.1 × 10−10 Pa−1)
Clausius–Clapeyron constant (9.14 × 10−8 K Pa−1)
Gap width (m)
η i
Effective ice viscosity (Pa s−1)
η w
Water viscosity at 0°C (0.0018 Pa s)
ρ i
Ice density (910 ± 10 kg m−3)
ρ w
Water density at 0°C (999.8 kg m−3)
ρ d
Hose density (kg m−3)
τ e
Effective stress (Pa)
ϕ
Areal fraction of the bed covered by gap
Rate factor in Glen's flow law (Pa−3 s−1)
Sediment thickness (m)
Bending modulus of the ice (Pa m3)
Time constant (s)
Elastic modulus of ice (9.3 GPa)
Shape factor
Frictional drag coefficient
Force on the drill tower (N)
Gravitational acceleration (9.81 m s−2)
Hydraulic head (m)
Reference hydraulic head (m)
Ice thickness (m)
H w
Water height (m)
Hydraulic conductivity (m s−1)
Sediment stiffness (p-wave modulus) (Pa)
Exponent in Glen's flow law (3)
Effective pressure (Pa)
p i
Ice overburden pressure (Pa)
p w
Subglacial water pressure (Pa)
p tr
Triple point pressure of water (611.73 Pa)
Volumetric flux (m3 s−1)
Radial distance (m)
r d
External hose radius (0.015 m)
Borehole radius at base (m)
Borehole radius at near-surface (m)
Radius of influence (m)
Reynolds number
Recharge (s = h − h 0) (m)
Reference recharge (m)
Storage coefficient (m)
Time (s)
Maxwell time (s)
Hydraulic transmissivity (m2 s−1)
Melting temperature of ice (°C)
T tr
Triple point temperature of water (273.16 K)
U d
Drill velocity (m min−1)
U w
Water velocity (m s−1)
W(u)
Well function
Orthometric height (m)
Appendix B. Borehole radius
As the hose radius (r d) and speed (U d) are known, the differential rate of change in hydraulic head below and above the water line during the BH19g(e) pumping test allows the borehole radius at the water line (r s) to be determined as follows. The total volumetric flux of water stored within the borehole when the drill hose was below the water line during PT2 is Q b2 = Q s2 + Q d2, or alternatively
(B1)$$Q_{{\rm b}2} = \left(\pi r_{\rm s}^2 - \pi r_{\rm d}^2 \right){{\rm d}h_2\over {\rm d}t} + \pi r_{\rm d}^2 U_{\rm d},\; $$
where the numeric subscript indicates the period. Similarly the borehole storage flux with the drill stem above the water line during PT3 is
(B2)$$Q_{b3} = \pi r_{\rm s}^2 {{\rm d}h_3\over {\rm d}t}.$$
Assuming water input (Q i) and output (Q o) were constant at the transition from PT2 to PT3
(B3)$$Q_{{\rm b}2} = Q_{{\rm b}3}.$$
Therefore equating fluxes gives
(B4)$$\left(\pi r_{\rm s}^2 - \pi r_{\rm d}^2 \right){{\rm d}h_2\over {\rm d}t} + \pi r_{\rm d}^2 U_{\rm d} = \pi r_{\rm s}^2 {{\rm d}h_3\over {\rm d}t}.$$
Expanding on the left-hand side gives
(B5)$$\pi r_{\rm s}^2 {{\rm d}h_2\over {\rm d}t} - \pi r_{\rm d}^2 {{\rm d}h_2\over {\rm d}t} + \pi r_{\rm d}^2 U_{\rm d} = \pi r_{\rm s}^2 {{\rm d}h_3\over {\rm d}t}.$$
Rearranging gives
(B6)$$\pi r_{\rm s}^2 {{\rm d}h_3\over {\rm d}t} - \pi r_{\rm s}^2 {{\rm d}h_2\over {\rm d}t} = \pi r_{\rm d}^2 U_{\rm d} - \pi r_{\rm d}^2 {{\rm d}h_2\over {\rm d}t},\; $$
and factorising gives
(B7)$$\pi r_{\rm s}^2 \left({{\rm d}h_3\over {\rm d}t} - {{\rm d}h_2\over {\rm d}t}\right) = \pi r_{\rm d}^2 \left(U_{\rm d} - {{\rm d}h_2\over {\rm d}t} \right),\; $$
which we rearrange to find
(B8)$$r_{\rm s} = \left[{r_{\rm d}^2 \left(U_{\rm d} - \frac{{\rm d}h_2}{{\rm d}t} \right)\over \frac{{\rm d}h_3}{{\rm d}t} - \frac{{\rm d}h_2}{{\rm d}t}} \right]^{1/2}.$$
Using Eqn (B8), the known hose radius (r d= 0.015 m), the measured mean drill speed during PT2 ($\overline {U}_{\rm d} = 8.82$ min−1), and the rate of change in hydraulic head during PT2 (dh 2/dt = 1.36 m h−1) and PT3 (dh 3/dt = 7.40 m h−1), gives a borehole radius at the water-line r s = 0.14 m. This estimate is double that of the borehole model (r s = 0.07 m; Table B1), but consistent with the borehole radius measured at the surface.
Measurements were not made of BH19g but BH19e had a radius at the surface of 0.17 m. As the pumping test period was not recorded in BH19c and BH19e we assume that their near-surface radius was the same as BH19g: that is, we assume r s = 0.14 m for all response tests. Near-surface borehole radii larger than predicted by the Greenler and others (Reference Greenler2014) model could be explained by turbulent heat exchange from warm upwelling water. Laminar flow is specified in the model. The effect of turbulent heat exchange on borehole radius would decrease with depth so the model should perform better near the base. With no better estimate available, we therefore use the model output for the borehole radius at the base (r 0; Table B1).
Table B1. Borehole radii at the time of borehole breakthrough predicted using the model of Greenler et al. (2014) over ten depth intervals ranging from the ice surface to the ice-sediment interface at a depth below the ice surface corresponding to the ice thickness (H i)
Appendix C. Elastic response of ice to borehole breakthrough
Here we consider the relative importance of viscous and elastic deformation in the response of the ice sheet at site R30 to borehole breakthrough forcing by calculating the Maxwell relaxation time
(C1)$$t_{\rm M} = {\eta_{\rm i}\over E},\; $$
where E = 9.3 GPa is the elastic modulus for ice (Sinha, Reference Sinha1978), and η i is the effective ice viscosity. The effective viscosity can be given as
(C2)$$\eta_{\rm i} = {1\over 2A} \left(\tau_{\rm e}^2\right)^{{1-n\over 2}},\; $$
where A and n = 3 are the rate factor and exponent in Glen's flow law, and τe is the effective stress (Hutter, Reference Hutter1983). For simplicity, we estimate the effective stress as
(C3)$$\tau_{{\rm e}} = f \rho_{{\rm i}}gH_{{\rm i}}\sin\alpha,\; $$
where, for site R30, f ≈ 0.75 is the shape factor representing the proportion of driving stress supported by basal drag (Nye, Reference Nye1952). Using Eqn (C3), the effective stress at site R30 is 121 kPa. We assume that viscous deformation will be greatest within the basal temperate ice layer and therefore use upper and lower limits of A for temperate ice of 5.5 − 2.4 × 10−24 Pa−3 s−1 (Cuffey and Paterson, Reference Cuffey and Paterson2010). With these values the effective viscosity is 6.2 − 14.2 × 10−12 Pa s−1, and the Maxwell time is 11 − 25 min. Hence, assuming elastic ice rheology at site R30 is reasonable during the initial stages of gap opening. Over the time scales relevant to pumping and recovery tests viscous deformation should not be neglected and a viscoelastic model (e.g., Reeh and others, Reference Reeh, Christensen, Mayer and Olesen2003) would be more appropriate. Note, however, that the rheology of the ice actually drops out of the asymptotic solution of the Hewitt and others (Reference Hewitt, Chini and Neufeld2018) model in Eqn (22), and so incorporating viscous deformation may not have a large effect on the predictions of transmissivity from that model.
Appendix D. Transmissivity from time constant
The hydraulic transmissivity (T g) of a porous medium equivalent to a gap of uniform width δ is given by de Marsily (Reference de Marsily1986) as
(D1)$$T_{\rm g} = {\phi \delta^3 \rho_{\rm w} g\over 12 \eta_{\rm w}}.$$
The time constant D is given by
(D2)$$D = {6 \eta_{\rm w} r_{\rm s}^2\over \delta^3 \rho_{\rm w} g} \ln {R\over r_0},\; $$
which is Eqn (7a) of Weertman (Reference Weertman1970) and Eqn (9) of Engelhardt and Kamb (Reference Engelhardt and Kamb1997). Combining Eqns (D1) and (D2) as follows allows the hydraulic transmissivity to be approximated from the time constant D. Inserting ϕ and then multiplying both sides of Eqn (D2) by two gives
(D3)$$2D = {12 \eta_{\rm w} r_{\rm s}^2\over \phi\delta^3 \rho_{\rm w} g} \ln {R\over r_0}.$$
This permits simplification by inserting the inverse of Eqn (D1) into Eqn (D3)
(D4)$$2D = {1\over T_{\rm g}} r_{\rm s}^2 \ln {R\over r_0}.$$
Multiplying both sides by T g gives
(D5)$$2D T_{\rm g} = r_{\rm s}^2 \ln {R\over r_0}.$$
And further rearranging gives
(D6)$$T_{\rm g} = {r_{\rm s}^2\over 2D} \ln {R\over r_0},\; $$
which is Eqn (8.7) of Lüthi (Reference Lüthi1999) and Eqn (27) of this paper.
Present address: Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, Bremerhaven, Germany
Present address: Byrd Polar & Climate Research Center, The Ohio State University, 1090 Carmack Rd, Columbus, Ohio 43210
Alley, R, Blankenship, D, Bentley, C and Rooney, S (1986) Deformation of till beneath Ice Stream B, West Antarctica. Nature 322, 57–59. doi: 10.1038/322057a0.CrossRefGoogle Scholar
Alley, R, Blankenship, D, Rooney, S and Bentley, C (1989) Water-pressure coupling of sliding and bed deformation: III. Application to Ice Stream B, Antarctica. Journal of Glaciology 35(119), 130–139. doi: 10.3189/002214389793701572CrossRefGoogle Scholar
Alley, RB, Cuffey, KM and Zoet, LK (2019) Glacial erosion: status and outlook. Annals of Glaciology 60(80), 1–13. doi: 10.1017/aog.2019.38CrossRefGoogle Scholar
Andrews, LC and 7 others (2014) Direct observations of evolving subglacial drainage beneath the Greenland Ice Sheet. Nature 514(7520), 80–83. doi: 10.1038/nature13796CrossRefGoogle ScholarPubMed
Bartholomaus, T, Anderson, R and Anderson, S (2011) Growth and collapse of the distributed subglacial hydrologic system of Kennicott Glacier, Alaska, USA, and its effects on basal motion. Journal of Glaciology 57(206), 985–1002. doi: 10.3189/002214311798843269CrossRefGoogle Scholar
Blankenship, DD, Bentley, CR, Rooney, ST and Alley, RB (1986) Seismic measurements reveal a saturated porous layer beneath an active Antarctic ice stream. Nature 322(6074), 54–57. doi: 10.1038/322054a0CrossRefGoogle Scholar
Booth, AD and 8 others (2020) Distributed acoustic sensing of seismic properties in a borehole drilled on a fast-flowing Greenlandic outlet glacier. Geophysical Research Letters 47(13), e2020GL088148. doi: 10.1029/2020GL088148.CrossRefGoogle Scholar
Bougamont, M and 5 others (2014) Sensitive response of the Greenland Ice Sheet to surface melt drainage over a soft bed. Nature Communications 5, 5052. doi: 10.1038/ncomms6052CrossRefGoogle Scholar
Boulton, GS and Dent, D (1974) The nature and rates of post-depositional changes in recently deposited till from south-east Iceland. Geografiska Annaler: Series A, Physical Geography 56(3-4), 121–134. doi: 10.1080/04353676.1974.11879894CrossRefGoogle Scholar
Christianson, K and 7 others (2014) Dilatant till facilitates ice-stream flow in northeast Greenland. Earth and Planetary Science Letters 401(0), 57–69. doi: 10.1016/j.epsl.2014.05.060CrossRefGoogle Scholar
Chudley, TR and 7 others (2019b) Supraglacial lake drainage at a fast-flowing Greenlandic outlet glacier. Proceedings of the National Academy of Sciences 116(51), 25468–25477. doi: 10.1073/pnas.1913685116.CrossRefGoogle Scholar
Chudley, TR, Christoffersen, P, Doyle, SH, Abellan, A and Snooke, N (2019a) High-accuracy UAV photogrammetry of ice sheet dynamics with no ground control. The Cryosphere 13(3), 955–968. doi: 10.5194/tc-13-955-2019CrossRefGoogle Scholar
Clarke, GKC (1987) Subglacial till: a physical framework for its properties and processes. Journal of Geophysical Research: Solid Earth 92(B9), 9023–9036. doi: 10.1029/JB092iB09p09023CrossRefGoogle Scholar
Cooper, H and Jacob, C (1946) A generalized graphical method for evaluating formation constants and summarizing well field history. American Geophysical Union Transactions 27, 526–534. doi: 10.1029/TR027i004p00526CrossRefGoogle Scholar
Creyts, TT and Schoof, CG (2009) Drainage through subglacial water sheets. Journal of Geophysical Research: Earth Surface 114(F4), F04008. doi: 10.1029/2008JF001215.CrossRefGoogle Scholar
Cuffey, K and Paterson, W (2010) The Physics of Glaciers. 4th ed.: Elsevier.Google Scholar
Davison, BJ and 6 others (2020) Subglacial drainage evolution modulates seasonal ice flow variability of three tidewater glaciers in southwest Greenland. Journal of Geophysical Research: Earth Surface 125(9), e2019JF005492. doi: 10.1029/2019JF005492.Google Scholar
de Marsily, G (1986) Quantitative Hydrogeology. Orlando, Florida: Academic Press Inc..Google Scholar
Dow, CF and 10 others (2015) Modeling of subglacial hydrological development following rapid supraglacial lake drainage. Journal of Geophysical Research: Earth Surface 120, 1127–1147. doi: 10.1002/2014JF003333.CrossRefGoogle ScholarPubMed
Dow, C, Kavanaugh, J, Sanders, J, Cuffey, K and MacGregor, K (2011) Subsurface hydrology of an overdeepened cirque glacier. Journal of Glaciology 57(206), 1067–1078. doi: 10.3189/002214311798843412CrossRefGoogle Scholar
Dow, C, Kulessa, B, Rutt, I, Doyle, S.H and Hubbard, A (2014) Upper bounds on subglacial channel development for interior regions of the Greenland ice sheet. Journal of Glaciology 60(224), 1044–1052. doi: 10.3189/2014JoG14J093CrossRefGoogle Scholar
Doyle, S and 9 others (2013) Ice tectonic deformation during the rapid in situ drainage of a supraglacial lake on the Greenland Ice Sheet. The Cryosphere 7(1), 129–140. doi: 10.5194/tc-7-129-2013CrossRefGoogle Scholar
Doyle, SH and 7 others (2018) Physical conditions of fast glacier flow: 1. Measurements from boreholes drilled to the bed of Store Glacier, West Greenland. Journal of Geophysical Research: Earth Surface 123(2), 324–348. doi: 10.1002/2017JF004529CrossRefGoogle Scholar
Engelhardt, H (1978) Water in glaciers: observations and theory of the behaviour of water levels in boreholes. Zeitschrift für Gletscherkunde und Glazialgeologie. 14, 35–60.Google Scholar
Engelhardt, HF, Harrison, WD and Kamb, B (1978) Basal sliding and conditions at the glacier bed as revealed by borehole photography. Journal of Glaciology 20(84), 469–508. doi: 10.3198/1978JoG20-84-469-508CrossRefGoogle Scholar
Engelhardt, H, Humphrey, N, Kamb, B and Fahnestock, M (1990) Physical conditions at the base of a fast moving Antarctic ice stream. Science, 248(4951), 57–59. doi: 10.1126/science.248.4951.57CrossRefGoogle ScholarPubMed
Engelhardt, H and Kamb, B (1997) Basal hydraulic system of a West Antarctic ice stream: constraints from borehole observations. Journal of Glaciology 43(144), 207–229. doi: 10.3198/1997JoG43-144-207-230CrossRefGoogle Scholar
Engelhardt, H and Kamb, B (1998) Basal sliding of Ice Stream B, West Antarctica. Journal of Glaciology 44(147), 223–230. doi: 10.3189/S0022143000002562Google Scholar
Fischer, UH and Clarke, GK (2001) Review of subglacial hydro-mechanical coupling: Trapridge Glacier, Yukon Territory, Canada. Quaternary International 86(1), 29–43. doi: 10.1016/S1040-6182(01)00049-0CrossRefGoogle Scholar
Fischer, UH, Iverson, NR, Hanson, B, LeB Hooke, R and Jansson, P (1998) Estimation of hydraulic properties of subglacial till from ploughmeter measurements. Journal of Glaciology 44(148), 517–522. doi: 10.3189/S0022143000002033CrossRefGoogle Scholar
Flowers, G and Clarke, G (2002) A multicomponent coupled model of glacier hydrology 1. Theory and synthetic examples. Jounal of Geophysical Research 107(B11). doi: 10.1029/2001JB001122Google Scholar
Fountain, A (1994) Borehole water-level variations and implications for the subglacial hydraulics of South Cascade Glacier, Washington State, USA. Journal of Glaciology 40(135), 293–304. doi: 10.3189/S0022143000007383CrossRefGoogle Scholar
Freeze, RA and Cherry, JA (1979) Groundwater. Englewood Cliffs, N.J, Prentice-Hall Inc.Google Scholar
Gordon, S and 5 others (1998) Seasonal reorganization of subglacial drainage system of Haut Glacier d'Arolla, Valais, Switzerland, inferred from measurements in boreholes. Hydrological Processes 12, 105–133.3.0.CO;2-#>CrossRefGoogle Scholar
Gordon, S and 7 others (2001) Borehole drainage and its implications for the investigation of glacier hydrology: experiences from Haut Glacier d'Arolla, Switzerland. Hydrological Processes 15, 797–813. doi: 10.1002/hyp.184CrossRefGoogle Scholar
Greenler, L and 6 others (2014) Modeling hole size, lifetime and fuel consumption in hot-water ice drilling. Annals of Glaciology 55(68), 115–123. doi: 10.3189/2014AoG68A033CrossRefGoogle Scholar
Harper, J, Humphrey, N, Pfeffer, W and Lazar, B (2007) Two modes of accelerated glacier sliding related to water. Geophysical Research Letters 34, L12503. doi: 10.1029/2007GL030233CrossRefGoogle Scholar
Haseloff, M, Hewitt, IJ and Katz, RF (2019) Englacial pore water localizes shear in temperate ice stream margins. Journal of Geophysical Research: Earth Surface 124(11), 2521–2541. doi: 10.1029/2019JF005399CrossRefGoogle Scholar
Hewitt, I (2013) Seasonal changes in ice sheet motion due to melt water lubrication. Earth and Planetary Science Letters 371-372, 16–25. doi: 10.1016/j.epsl.2013.04.022CrossRefGoogle Scholar
Hewitt, DR, Chini, GP and Neufeld, JA (2018) The influence of a poroelastic till on rapid subglacial flooding and cavity formation. Journal of Fluid Mechanics 855, 1170–1207. doi: 10.1017/jfm.2018.624CrossRefGoogle Scholar
Hiscock, K and Bense, V (2014) Hydrogeology: Principles and Practice. 2nd ed. Chichester, West Sussex, UK: Wiley-Blackwell.Google Scholar
Hodge, SM (1979) Direct measurement of basal water pressures: progress and problems. Journal of Glaciology 23(89), 309–319. doi: 10.3189/S0022143000029920CrossRefGoogle Scholar
Hoffman, MJ and 9 others (2016) Greenland subglacial drainage evolution regulated by weakly connected regions of the bed. Nature Communications 7, 13903. doi: 10.1038/ncomms13903.CrossRefGoogle Scholar
Hoffman, M and Price, S (2014) Feedbacks between coupled subglacial hydrology and glacier dynamics. Journal of Geophysical Research: Earth Surface 119(3), 414–436. doi: 10.1002/2013JF002943CrossRefGoogle Scholar
Hofstede, C and 7 others (2018) Physical conditions of fast glacier flow: 2. Variable extent of anisotropic ice and soft basal sediment from seismic reflection data acquired on Store Glacier, West Greenland. Journal of Geophysical Research: Earth Surface 123(2), 349–362. doi: 10.1002/2017JF004297CrossRefGoogle Scholar
Hubbard, B and 6 others (2021) Borehole-based characterization of deep mixed-mode crevasses at a greenlandic outlet glacier. AGU Advances 2(2), e2020AV000291. doi: 10.1029/2020AV000291.CrossRefGoogle Scholar
Hubbard, B and Maltman, A (2000) Laboratory investigations of the strength, static hydraulic conductivity and dynamic hydraulic conductivity of glacial sediments, In: Maltman, A., Hubbard, B., and Hambrey, M.J. Deformation of Glacial Materials 176, 231–242.Google Scholar
Hubbard, B, Sharp, M, Willis, I, Nielsen, M and Smart, C (1995) Borehole water-level variations and the structure of the subglacial hydrological system of Haut Glacier d'Arolla, Valais, Switzerland. Journal of Glaciology 41(139), 572–583. doi: 10.3189/S0022143000034894CrossRefGoogle Scholar
Hubbert, MK and Willis, DG (1957) Mechanics of hydraulic fracturing. Transactions of the AIME 210(01), 153–168.CrossRefGoogle Scholar
Humphrey, N, Kamb, B, Fahnestock, M and Engelhardt, H (1993) Characteristics of the bed of the lower Columbia Glacier, Alaska. Journal of Geophysical Research: Solid Earth 98(B1), 837–846. doi: 10.3189/002214390793701354CrossRefGoogle Scholar
Hutter, K (1983) Theoretical glaciology: material science of ice and the mechanics of glaciers and ice sheets. Tokyo: D Reidel, Dordrecht/Terra Scientific.CrossRefGoogle Scholar
Iken, A and Bindschadler, R (1986) Combined measurements of subglacial water pressure and surface velocity of Findelengletscher, Switzerland: conclusions about drainage system and sliding mechanisms. Journal of Glaciology 32(110), 101–119. doi: 10.3189/S0022143000006936CrossRefGoogle Scholar
Iken, A, Fabri, K and Funk, M (1996) Water storage and subglacial drainage conditions inferred from borehole measurements on Gornergletscher, Valais, Switzerland. Journal of Glaciology 42(141), 233–248. doi: 10.3189/S0022143000004093CrossRefGoogle Scholar
Iken, A and Truffer, M (1997) The relationship between subglacial water pressure and velocity of Findelengletscher, Switzerland, during its advance and retreat. Journal of Glaciology 43(144), 328–338. doi: 10.3189/S0022143000003282CrossRefGoogle Scholar
Iverson, N and 7 others (2007) Soft-bed experiments beneath Engabreen, Norway: regelation infiltration, basal slip and bed deformation. Journal of Glaciology 53(182), 323–340. doi: 10.3189/002214307783258431CrossRefGoogle Scholar
Iverson, N, Hanson, B, Hooke, RL and Jansson, P (1995) Flow mechanism of glaciers on soft beds. Science, 267(5194), 80–81. doi: 10.1126/science.267.5194.80CrossRefGoogle ScholarPubMed
Iverson, NR, Jansson, P and Hooke, RL (1994) In-situ measurement of the strength of deforming subglacial till. Journal of Glaciology 40(136), 497–503. doi: 10.3189/S0022143000012375CrossRefGoogle Scholar
Jóhannesson, T (2002) Propagation of a subglacial flood wave during the initiation of a jökulhlaup. Hydrological Sciences Journal 47(3), 417–434. doi: 10.1080/02626660209492944CrossRefGoogle Scholar
Kamb, B (1987) Glacier surge mechanism based on linked-cavity configuration of the basal water conduit system. Journal of Geophysical Research 92(B9), 9083–9100. doi: 10.1029/JB092iB09p09083CrossRefGoogle Scholar
Kamb, B (2001) Basal zone of the West Antarctic ice streams and its role in lubrication of their rapid motion. In: The West Antarctic Ice Sheet: Behavior and Environment, 157–199. American Geophysical Union (doi: 10.1029/AR077p0157).CrossRefGoogle Scholar
Kamb, B and Engelhardt, H (1987) Waves of accelerated motion in a glacier approaching surge: the mini-surges of Variegated Glacier, Alaska, U.S.A. Journal of Glaciology 33(113), 27–46. doi: 10.3189/S0022143000005311CrossRefGoogle Scholar
Knight, PG, Waller, RI, Patterson, CJ, Jones, AP and Robinson, ZP (2002) Discharge of debris from ice at the margin of the Greenland ice sheet. Journal of Glaciology 48(161), 192–198. doi: 10.3189/172756502781831359CrossRefGoogle Scholar
Kulessa, B and Hubbard, B (1997) Interpretation of borehole impulse tests at Haut Glacier d'Arolla, Switzerland. Annals of Glaciology 24, 397–402. doi: 10.3189/S0260305500012507CrossRefGoogle Scholar
Kulessa, B, Hubbard, B, Williamson, M and Brown, GH (2005) Hydrogeological analysis of slug tests in glacier boreholes. Journal of Glaciology 51(173), 269–280. doi: 10.3189/172756505781829458CrossRefGoogle Scholar
Kulessa, B and Murray, T (2003) Slug-test derived differences in bed hydraulic properties between a surge-type and a non-surge-type Svalbard glacier. Annals of Glaciology 36, 103–109. doi: 10.3189/172756403781816257CrossRefGoogle Scholar
Law, R and 11 others (2021) Thermodynamics of a fast-moving greenlandic outlet glacier revealed by fiber-optic distributed temperature sensing. Science Advances 7(20). doi: 10.1126/sciadv.abe7136CrossRefGoogle ScholarPubMed
Lefeuvre, PM and 5 others (2018) Stress redistribution explains anti-correlated subglacial pressure variations. Frontiers in Earth Science 5, 110. doi: 10.3389/feart.2017.00110CrossRefGoogle Scholar
Lefeuvre, PM, Jackson, M, Lappegard, G and Hagen, JO (2015) Interannual variability of glacier basal pressure from a 20 year record. Annals of Glaciology 56(70), 33–44. doi: 10.3189/2015AoG70A019CrossRefGoogle Scholar
Lingle, CS and Brown, TJ (1987) A subglacial aquifer bed model and water pressure dependent basal sliding relationship for a West Antarctic ice stream. In CJ Van der Veen and J Oerlemans (eds.), Dynamics of the West Antarctic Ice Sheet, 249–285, Springer Netherlands, Dordrecht.CrossRefGoogle Scholar
Lüthi, M (1999) Experimental and numerical investigation of a firn covered cold glacier and a polythermal ice stream: case studies at Colle Gniffetti and Jakobshavn Isbræ. Ph.D. thesis, Swiss Federal Institute of Technology Zurich.Google Scholar
Lüthi, M, Fahnestock, M and Truffer, M (2009) Calving icebergs indicate a thick layer of temperate ice at the base of Jakobshavn Isbræ, Greenland. Journal of Glaciology 55(191), 563–566. doi: 10.3189/002214309788816650CrossRefGoogle Scholar
Makinson, K and Anker, P (2014) The BAS ice-shelf hot-water drill: design, methods and tools. Annals of Glaciology 55(68), 44–52. doi: 10.3189/2014AoG68A030CrossRefGoogle Scholar
Meierbachtol, T, Harper, J, Humphrey, N and Wright, P (2016) Mechanical forcing of water pressure in a hydraulically isolated reach beneath Western Greenland's ablation zone. Annals of Glaciology 52(72), 62–70. doi: 10.1017/aog.2016.5CrossRefGoogle Scholar
Murray, T (1997) Assessing the paradigm shift: deformable glacier beds. Quaternary Science Reviews 16(9), 995–1016. ISSN 0277-3791. doi:10.1016/S0277-3791(97)00030-9).CrossRefGoogle Scholar
Murray, T and Clarke, G (1995) Black-box modelling of the subglacial water system. Journal of Geophysical Research 100, 10231–10245. doi: 10.1029/95JB00671CrossRefGoogle Scholar
Ng, F (2000) Canals under sediment-based ice sheets. Annals of Glaciology 30(1), 146–152. doi: 10.3189/172756400781820633CrossRefGoogle Scholar
Nye, J (1952) The mechanics of glacier flow. Journal of Glaciology 2(11), 52–53.CrossRefGoogle Scholar
Ó'Cofaigh, C and 6 others (2013) Glacimarine lithofacies, provenance and depositional processes on a West Greenland trough-mouth fan. Journal of Quaternary Science 28(1), 13–26. doi: 10.1002/jqs.2569CrossRefGoogle Scholar
Porter, C and 27 others (2018) ArcticDEM, Harvard Dataverse, V1 (doi: 10.7910/DVN/OHHUKH).CrossRefGoogle Scholar
Rada, C and Schoof, C (2018) Channelized, distributed, and disconnected: subglacial drainage under a valley glacier in the Yukon. The Cryosphere 12, 2609–2636. doi: 10.5194/tc-12-2609-2018CrossRefGoogle Scholar
Reeh, N, Christensen, EL, Mayer, C and Olesen, OB (2003) Tidal bending of glaciers: a linear viscoelastic approach. Annals of Glaciology 37(7), 83–89. doi: doi:10.3189/172756403781815663CrossRefGoogle Scholar
Rempel, AW (2008) A theory for ice-till interactions and sediment entrainment beneath glaciers. Journal of Geophysical Research: Earth Surface 113(F1). doi: 10.1029/2007JF000870CrossRefGoogle Scholar
Rignot, E, Box, JE, Burgess, E and Hanna, E (2008) Mass balance of the Greenland ice sheet from 1958 to 2007. Geophysical Research Letters 35(20). doi: 10.1029/2008GL035417.CrossRefGoogle Scholar
Robin, G de Q (1976) Is the basal ice of a temperate glacier at the pressure melting point?. Journal of Glaciology 16(74), 183–196. doi: 10.3189/S002214300003152XCrossRefGoogle Scholar
Ronayne, MJ, Houghton, TB and Stednick, JD (2012) Field characterization of hydraulic conductivity in a heterogeneous alpine glacial till. Journal of Hydrology 458–459, 103–109. doi: 10.1016/j.jhydrol.2012.06.036CrossRefGoogle Scholar
Röthlisberger, H (1972) Water pressure in intra- and subglacial channels. Journal of Glaciology 11(62), 177–203. doi: 10.3189/S0022143000022188CrossRefGoogle Scholar
Ryser, C and 7 others (2014) Caterpillar-like ice motion in the ablation zone of the Greenland ice sheet. Journal of Geophysical Research: Earth Surface 119(10), 2258–2271.CrossRefGoogle Scholar
Schoof, C (2010) Ice-sheet acceleration driven by melt water supply variability. Nature 468, 803–806. doi: 10.1038/nature09618CrossRefGoogle ScholarPubMed
Schoof, C, Hewitt, IJ and Werder, MA (2012) Flotation and free surface flow in a model for subglacial drainage. Part 1. Distributed drainage. Journal of Fluid Mechanics 702, 126. doi: 10.1017/jfm.2012.165CrossRefGoogle Scholar
Sinha, NK (1978) Short-term rheology of polycrystalline ice. Journal of Glaciology 21(85), 457–474. doi: 10.3189/S002214300003361XCrossRefGoogle Scholar
Smart, CC (1996) Statistical evaluation of glacier boreholes as indicators of basal drainage systems. Hydrological Processes 10(4), 599–613. doi: 10.1002/(SICI)1099-1085(199604)10:4<599::AID-HYP394>3.0.CO;2-83.0.CO;2-8>CrossRefGoogle Scholar
Sole, A and 6 others (2013) Winter motion mediates dynamic response of the Greenland Ice Sheet to warmer summers. Geophysical Research Letters 40, 3940–3944. doi: 10.1002/grl.507764CrossRefGoogle Scholar
Stevens, LA and 7 others (2015) Greenland supraglacial lake drainages triggered by hydrologically induced basal slip. Nature 522(7554), 73–76. doi: 10.1038/nature14480CrossRefGoogle ScholarPubMed
Stone, D and Clarke, G (1993) Estimation of subglacial hydraulic properties from induced changes in basal water pressure: a theoretical framework for borehole-response tests. Journal of Glaciology 39(132), 327–340. doi: 10.3189/S0022143000015999CrossRefGoogle Scholar
Stone, DB, Clarke, GKC and Ellis, RG (1997) Inversion of borehole-response test data for estimation of subglacial hydraulic properties. Journal of Glaciology 43(143), 103–113. doi: 10.3189/S0022143000002860CrossRefGoogle Scholar
Sugiyama, S, Bauder, A, Huss, M, Riesen, P and Funk, M (2008) Triggering and drainage mechanisms of the 2004 glacier-dammed lake outburst in Gornergletscher, Switzerland. Journal of Geophysical Research 113, F04019. doi: 10.1029/2007JF000920CrossRefGoogle Scholar
Tedstone, AJ and 5 others (2015) Decadal slowdown of a land-terminating sector of the Greenland Ice Sheet despite warming. Nature 526(7575), 692–695. doi: 10.1038/nature15722CrossRefGoogle ScholarPubMed
Theis, CV (1935) The relation between the lowering of the piezometric surface and the rate and duration of discharge of a well using ground-water storage. Eos, Transactions American Geophysical Union 16(2), 519–524. doi: 10.1029/TR016i002p00519CrossRefGoogle Scholar
Thiem, G (1906) Hydrological methods, PhD Thesis. Royal Technical University of Stuttgart, Germany.Google Scholar
Tsai, V and Rice, J (2010) A model for turbulent hydraulic fracture and application to crack propagation at glacier beds. Journal of Geophysical Research 115, F03007. doi: 10.1029/2009JF001474CrossRefGoogle Scholar
Tsai, VC and Rice, JR (2012) Modeling turbulent hydraulic fracture near a free surface. Journal of Applied Mechanics 79(3), 031003. doi: 10.1115/1.4005879.CrossRefGoogle Scholar
Tulaczyk, S, Kamb, WB and Engelhardt, HF (2000) Basal mechanics of Ice Stream B, West Antarctica: 1. Till mechanics. Journal of Geophysical Research: Solid Earth 105(B1), 463–481. doi: 10.1029/1999JB900329CrossRefGoogle Scholar
Waddington, BS and Clarke, GK (1995) Hydraulic properties of subglacial sediment determined from the mechanical response of water-filled boreholes. Journal of Glaciology 41(137), 112–124. doi: 10.3189/S0022143000017810CrossRefGoogle Scholar
Walder, J and Fowler, A (1994) Channelized subglacial drainage over a deformable bed. Journal of Glaciology 40(134), 3–15. doi: 10.3189/S0022143000003750CrossRefGoogle Scholar
Walter, F, Chaput, J and Luthi, M (2014) Thick sediments beneath Greenland's ablation zone and their potential role in future ice sheet dynamics. Geology 42(6), 487–490. doi: 10.1130/G35492.1CrossRefGoogle Scholar
Weertman, J (1970) A method for setting a lower limit on the water layer thickness at the bottom of an ice sheet from the time required for upwelling of water into a borehole. IAHS Publ. 86, 69–73.Google Scholar
Weertman, J (1972) General theory of water flow at the base of a glacier or ice sheet. Reviews of Geophysics 10(1), 287–333. doi: 10.1029/RG010i001p00287CrossRefGoogle Scholar
Werder, MA, Hewitt, IJ, Schoof, CG and Flowers, GE (2013) Modeling channelized and distributed subglacial drainage in two dimensions. Journal of Geophysical Research: Earth Surface 118(4), 2140–2158. doi: 10.1002/jgrf.20146CrossRefGoogle Scholar
Young, TJ and 11 others (2019) Physical conditions of fast glacier flow: 3. Seasonally-evolving ice deformation on Store Glacier, West Greenland. Journal of Geophysical Research: Earth Surface 124(1), 245–267. doi: 10.1029/2018JF004821CrossRefGoogle ScholarPubMed
View in content
Fig. 1. Maps of the field site. (a) Location of the study site R30 on Sermeq Kujalleq (Store Glacier) with the location of the R29 and S30 drill sites also marked. The background is a Sentinel-2 image acquired on 1 June 2019 and the red square on the inset map shows the location in Greenland. (b) Close up of the R30 study site showing the location of boreholes, moulins and the GNSS receiver. Three boreholes intersected the ice-sediment interface (filled, colour-coded circles) and four terminated above the base (hollow circles). The background orthophoto was acquired by an uncrewed aerial vehicle survey following Chudley and others (2019) on 21 July 2019.
Table 1. Key data for the boreholes that reached the bed. Variables h0, pw, and N were calculated for the reference period 36–60 h after each respective breakthrough, which was deemed representative of subglacial water pressure. A list of symbols is presented in Appendix A.
Fig. 3. (a) Time series of hydraulic head (h). Borehole breakthrough times are marked with a vertical dashed line and arrow. (b) Time series of head above the reference head (s = h − h0) plotted against time since respective breakthrough for all breakthrough tests. The yellow shade marks the 24 h period selected to define h0 (36 − 60 h post-breakthrough).
Fig. 4. (a) Force on the drill tower with best fit plotted against time since BH19g breakthrough, together with measured and modelled hydraulic head. (b) Volumetric flux into the subglacial drainage system (Qo) with error bars, and hydraulic head in BH19g determined by inverting the force on the drill tower. Labels (a–c) are described in Section 4.1.
Table 2. Statistics for the BH19g(e) pumping test. Vo is the volume of water discharged from the borehole base during the period. All other symbols are defined in the text.
Fig. 6. Recovery tests including (a–c) exponential fits (black) applied to the early stage of recovery curves plotted as hydraulic head above background (s) on the logarithmic y-axis against time (t); and (d–e) Cooper and Jacob (1946) recovery test linear-log fitting (black) applied to the late stage of the recovery curves plotted as residual drawdown (s′) against the logarithm of the time ratio (t/t′).
Fig. 8. Time series of (a) horizontal ice velocity, (b) hydraulic head in BH19c and BH19e, (c) temperature at the base of BH19c, and (d) pressure-dependent melting temperature Tm calculated from the water pressure recorded in BH19c. Note that although the y-axes for (c) and (d) are offset the y-axis range is identical for both. The offset between measured temperature and Tm can be explained by uncertainties in the sensor installation depths and the Clausius–Clapeyron gradient.
Table B1. Borehole radii at the time of borehole breakthrough predicted using the model of Greenler et al. (2014) over ten depth intervals ranging from the ice surface to the ice-sediment interface at a depth below the ice surface corresponding to the ice thickness (Hi)
Doyle et al. supplementary material
Doyle et al. supplementary material 1
You have Access Open access
|
CommonCrawl
|
Journal of Applied Probability (5)
Advances in Applied Probability (1)
Annales de Limnologie - International Journal of Limnology (1)
Glasgow Mathematical Journal (1)
Probability in the Engineering and Informational Sciences (1)
Proceedings of the British Society of Animal Science (1)
Applied Probability Trust (6)
On component failure in coherent systems with applications to maintenance strategies
Operations research and management science
Survival analysis and censored data
M. Hashemi, M. Asadi
Journal: Advances in Applied Probability / Volume 52 / Issue 4 / December 2020
Providing optimal strategies for maintaining technical systems in good working condition is an important goal in reliability engineering. The main aim of this paper is to propose some optimal maintenance policies for coherent systems based on some partial information about the status of components in the system. For this purpose, in the first part of the paper, we propose two criteria under which we compute the probability of the number of failed components in a coherent system with independent and identically distributed components. The first proposed criterion utilizes partial information about the status of the components with a single inspection of the system, and the second one uses partial information about the status of component failure under double monitoring of the system. In the computation of both criteria, we use the notion of the signature vector associated with the system. Some stochastic comparisons between two coherent systems have been made based on the proposed concepts. Then, by imposing some cost functions, we introduce new approaches to the optimal corrective and preventive maintenance of coherent systems. To illustrate the results, some examples are examined numerically and graphically.
The Impact of Bone Marrow Transplantation on Sexual Functioning and it's Relation to Depression and Anxiety
H. Bajoghli, A. Nejatisafa, A. Ghavamzadeh, A. Shamshiri, A. Manoukian, M. Asadi, A. Mohammadi, M. Talei, M. Abdi
Journal: European Psychiatry / Volume 24 / Issue S1 / January 2009
Published online by Cambridge University Press: 16 April 2020, p. 1
The aim of this study was to investigate the prevalence of sexual dysfunctions and its relationship with depression and anxiety in a sample of patients underwent bone marrow transplantation (BMT).
A cross-sectional study was conducted in 135 married patients who underwent BMT at least 1 year before evaluation. Sexual dysfunctions assessed by a questionnaire that was derived from Sexual History Form and Sexual Problem Measure. Hospital Anxiety and Depression Scale (HADS) was used to assess depression and anxiety in patients.
Questionnaires were completed by 128 (82.5%) participant. Fifty three percents of participants was male. The mean age of participants was 39.57±8.74. Sexual dysfunctions in post BMT period were significantly more frequent than period prior to the beginning of oncologic malignancy (P< 0.05). Sexual activity was decreased significantly after BMT (P< 0.01). The three most prevalent sexual dysfunctions in male group were premature ejaculation(56%) and problem in orgasm(40%) and desire(32.7%), and in female group were problem in arousal(77%) and desire(77%) and painful intercourse(77%). Sexual dysfunction was more prevalent in female group.
According to HADS score, 42(32.8%) patients had clinical depression (HADS-D score>14) and 12 (9.8%) patients had clinical anxiety (HADS-A score>14). There was not any significant relationship between mean HADS-A and HADS-D scores and scores of sexual dysfunctions questionnaires.
This study showed that sexual function and activity may be adversely affected by BMT. Factors other than anxiety and depression may have correlation with sexual dysfunction in these patients, of course limitation of this study should be considered.
P-638 - Review the Effects of Information Technology Health e-newsletter to Psychiatric Nurses at Shiraz Psychiatry Hospital, Using Mobile (sms)
S.H. Kavari, M. Asadi
Journal: European Psychiatry / Volume 27 / Issue S1 / 2012
Introduction and objectives:
Information and computing technologies promise new virtual learning and communication opportunities within the real communities of health care professionals.
This research is interventional. in this study one questionnaire was distributed to nursing staff working in Psychiatry ward. the questionnaire was to assess the knowledge of the nurses with regards to essential information required for; nursing care for patients with Psychiatry problems, Anxiety Disorders, and Depression etc.
Their knowledge was then re-assessed following forward of-ten E- Newsletter via SMS mobile phone to the same nursing staff during a one month. the two results, before and after sending the information, was compared.
The findings of this study showed there was significant improvement in awareness and knowledge of the staff in thePsychiatry ward of Shiraz Psychiatry Hospital before and after sending E-newsletter containing the required information, via SMS. (p < .0/05)
According to the results of this research, development of mobile technology in all parts of our country can be used to forward the latest information to medical, paramedical professionals and to all employees and workers in these sectors, even in remote areas using this technology. the information can even be expanded based on request based on their needs.
EPA-0030 – The Effects of Bilateral Subthalamic Nucleus Stimulation on Cognitive and Neuropsychiatric Functions in Parkinson's Disease: A Case-control Study
R. Mahdavi, S. Malakouti, G. Shahidi, M. Parvaresh-Rizi, M.I.N.A. Asadi
Parkinson's disease is one of the most disabling neurological diseases; however much progress has been made in the treatment of drug resistant patients through electrode implantation and stimulation of the subthalamic nucleus (STN). This new neurosurgical method may lead to some neuropsychological side effects in patients. The main aim of this study was to evaluate the neuropsychiatric effects of this treatment.
This case-control study was designed to compare two groups of patients with Parkinson's disease. Thirty patients who underwent electrode implantation and deep brain stimulation were compared with 60 patients treated with antiparkinson drugs. These two groups were matched for age, sex, duration and severity of Parkinson's disease.
The UPDRS was used to assess the severity of Parkinson's disease. The Beck Depression Inventory questionnaire and the Hamilton Anxiety Rating Scale were used to evaluate depression and anxiety as a consequence of DBS. The Mini Mental Status Examination and Clock Drawing Test (CDT) were used to evaluate the cognitive and executive function.
Thirty months after STN stimulation, a lower level of anxiety and depression were seen in the DBS patients compared with drug treated subjects, but the differences were not significant. However, cognitive status deteriorated to a greater extent in study subjects compared to the control group. The MMSE results were not significantly different, but the differences in CDT scores were significant.
Patients who have undergone DBS surgery must be followed up for neuropsychiatric symptoms, particularly for subcortical cognitive deterioration in the long term.
Neuropsychiatric consequences of deep brain stimulation surgeries in the patients affected by chronic movement disorders: A brief report
S. Mahdavi, S.K. Malakouti, B. Naji, M. Asadi, S. Kahani
Published online by Cambridge University Press: 23 March 2020, pp. S94-S95
The main surgical procedure for PD and other chronic movement disorders is deep brain stimulation. DBS has been reported to have specific consequences such as decline in verbal fluency and episodes of depression.
We designed an interventional study in 12 patients affected by Parkinson, dystonia and tic who underwent DBS surgery. Patient assessed before surgery, one month and one year after surgery.
The results proved a significant improvement in SF36. The Hamilton's anxiety scale showed an overall but insignificant improvement. The mean of scores of the BDI had a great drop one month after surgery but a raise at the 12th month (insignificant pattern).
Pearson's correlation test showed a significant negative correlation between age and the SF36 scores. The BDI's scores were assessed in relation with age. Although there was no actual relation between them before surgery, we detected a positive correlation between them after one year.
The pattern of changes can be related with the differences between perioperative expectations and real long-term outcomes. Correlations between changes seen in BDI and SF36 scores with age can be considered as a confirmatory evidence for this idea.
All cases showed an insignificant gradual decline in digit span test, which may be independent of the surgical procedure. Although the COWA test could not prove a significant deterioration in verbal fluency but a slight decline after one year was obvious, in addition to one patient who turned aphasic during this period.
The outcomes showed that the benefits of DBS outweigh the slight risk of developing depression.
Disclosure of interest
The authors have not supplied their declaration of competing interest.
Transcranial direct current stimulation in treatment – resistance unipolar major depressive disorder
M. Asadi, S. Mahdavi
Published online by Cambridge University Press: 23 March 2020, p. S95
MDD is a common, chronic and recurrent illness .it is essential to reach full remission in acute treatment. tDCS is a non-invasive brain stimulation that uses direct electrical currents to stimulate specific parts of the brain.
Is to assess the effectiveness of tDCS in patients with treatment resistance MDD.
Eighty outpatients of a psychiatric clinic were selected. Subjects meet (DSM-IV) diagnostic criteria for MDD. All patients had failed to respond to at least two standard antidepressant medication, in the current episode. Patients with bipolar depressive disorder, MDD with psychotic and atypical features, other psychiatric disorders, sever medical condition, acute suicidality and pregnancy were excluded. All patients received stable drug regimens for at least two weeks before enrollment and drug dosages remained unchanged throughout the study. They revised 8 stimulation sessions, using a 2 mA current, for 20 minutes, in 8 consecutive days. The anodal electrode was placed over the left DLPFC. Cathode electrode over the right supraorbital region. Mood was evaluated with 21-item Hamilton Rating Depression Scale and the Beck Depression Inventory.
We designed a pretest–posttest study and evaluate depression at baseline (pre-intervention), immediately after 8 sessions (post-intervention) and two months after treatment onset (follow-up).
There is a significant difference between Pre- vs. post-intervention (FBDI = 246.58, P < 0.001; FHRSD = 214.56, P < 0.001) and pre vs. Follow-up intervention (FBDI = 323.10, P < 0.001; FHRSD = 150.96, P < 0.001).
It can be said that tDCS had effective and enduring variation (Ppostvs.follow-up > 0.05) in improving the clinical symptoms of MDD.
Reliability modeling of coherent systems with shared components based on sequential order statistics
Special processes
S. Ashrafi, S. Zarezadeh, M. Asadi
Journal: Journal of Applied Probability / Volume 55 / Issue 3 / September 2018
Published online by Cambridge University Press: 16 November 2018, pp. 845-861
In this paper we are concerned with the reliability properties of two coherent systems having shared components. We assume that the components of the systems are two overlapping subsets of a set of n components with lifetimes X1,...,Xn. Further, we assume that the components of the systems fail according to the model of sequential order statistics (which is equivalent, under some mild conditions, to the failure model corresponding to a nonhomogeneous pure-birth process). The joint reliability function of the system lifetimes is expressed as a mixture of the joint reliability functions of the sequential order statistics, where the mixing probabilities are the bivariate signature matrix associated to the structures of systems. We investigate some stochastic orderings and dependency properties of the system lifetimes. We also study conditions under which the joint reliability function of systems with shared components of order m can be equivalently written as the joint reliability function of systems of order n (n>m). In order to illustrate the results, we provide several examples.
SIGNATURE-BASED INFORMATION MEASURES OF MULTI-STATE NETWORKS
S. Zarezadeh, M. Asadi, S. Eftekhar
Journal: Probability in the Engineering and Informational Sciences / Volume 33 / Issue 3 / July 2019
Published online by Cambridge University Press: 14 June 2018, pp. 438-459
The signature matrix of an n-component three-state network (system), which depends only on the network structure, is a useful tool for comparing the reliability and stochastic properties of networks. In this paper, we consider a three-state network with states up, partial performance, and down. We assume that the network remains in state up, for a random time T1 and then moves to state partial performance until it fails at time T>T1. The signature-based expressions for the conditional entropy of T given T1, the joint entropy, Kullback-Leibler (K-L) information, and mutual information of the lifetimes T and T1 are presented. It is shown that the K-L information, and mutual information between T1 and T depend only on the network structure (i.e., depend only to the signature matrix of the network). Some signature-based stochastic comparisons are also made to compare the K-L of the state lifetimes in two different three-state networks. Upper and lower bounds for the K-L divergence and mutual information between T1 and T are investigated. Finally the results are extended to n-component multi-state networks. Several examples are examined graphically and numerically.
FRAME-LESS HILBERT C*-MODULES
Nontrigonometric harmonic analysis
Selfadjoint operator algebras
M. B. ASADI, M. FRANK, Z. HASSANPOUR-YAKHDANI
Journal: Glasgow Mathematical Journal / Volume 61 / Issue 1 / January 2019
Published online by Cambridge University Press: 07 February 2018, pp. 25-31
We show that if A is a compact C*-algebra without identity that has a faithful *-representation in the C*-algebra of all compact operators on a separable Hilbert space and its multiplier algebra admits a minimal central projection p such that pA is infinite-dimensional, then there exists a Hilbert A1-module admitting no frames, where A1 is the unitization of A. In particular, there exists a frame-less Hilbert C*-module over the C*-algebra $K(\ell^2) \dotplus \mathbb{C}I_{\ell^2}$ .
The failure probability of components in three-state networks with applications to age replacement policy
S. Ashrafi, M. Asadi
Journal: Journal of Applied Probability / Volume 54 / Issue 4 / December 2017
In this paper we investigate the stochastic properties of the number of failed components of a three-state network. We consider a network made up of n components which is designed for a specific purpose according to the performance of its components. The network starts operating at time t = 0 and it is assumed that, at any time t > 0, it can be in one of states up, partial performance, or down. We further suppose that the state of the network is inspected at two time instants t 1 and t 2 (t 1 < t 2). Using the notion of the two-dimensional signature, the probability of the number of failed components of the network is calculated, at t 1 and t 2, under several scenarios about the states of the network. Stochastic and ageing properties of the proposed failure probabilities are studied under different conditions. We present some optimal age replacement policies to show applications of the proposed criteria. Several illustrative examples are also provided.
On Ashrafi and Asadi (2014)
Journal: Journal of Applied Probability / Volume 52 / Issue 1 / March 2015
Published online by Cambridge University Press: 30 January 2018, p. 305
Dynamic Reliability Modeling of Three-State Networks
Published online by Cambridge University Press: 30 January 2018, pp. 999-1020
This paper is an investigation into the reliability and stochastic properties of three-state networks. We consider a single-step network consisting of n links and we assume that the links are subject to failure. We assume that the network can be in three states, up (K = 2), partial performance (K = 1), and down (K = 0). Using the concept of the two-dimensional signature, we study the residual lifetimes of the networks under different scenarios on the states and the number of failed links of the network. In the process of doing so, we define variants of the concept of the dynamic signature in a bivariate setting. Then, we obtain signature based mixture representations of the reliability of the residual lifetimes of the network states under the condition that the network is in state K = 2 (or K = 1) and exactly k links in the network have failed. We prove preservation theorems showing that stochastic orderings and dependence between the elements of the dynamic signatures (which relies on the network structure) are preserved by the residual lifetimes of the states of the network (which relies on the network ageing). Various illustrative examples are also provided.
On the Residual and Inactivity Times of the Components of Used Coherent Systems
Distribution theory - Probability
S. Goliforushani, M. Asadi, N. Balakrishnan
Journal: Journal of Applied Probability / Volume 49 / Issue 2 / June 2012
In the study of the reliability of technical systems in reliability engineering, coherent systems play a key role. In this paper we consider a coherent system consisting of n components with independent and identically distributed components and propose two time-dependent criteria. The first criterion is a measure of the residual lifetime of live components of a coherent system having some of the components alive when the system fails at time t. The second criterion is a time-dependent measure which enables us to investigate the inactivity times of the failed components of a coherent system still functioning though some of its components have failed. Several ageing and stochastic properties of the proposed measures are then established.
Water mites of the genus Torrenticola Piersig (Acari: Hydrachnidia, Torrenticolidae) from Iran
V. Pesic, A. Saboori, M. Asadi
Journal: Annales de Limnologie - International Journal of Limnology / Volume 40 / Issue 3 / September 2004
Five water mite species of the genus Torrenticola Piersig (Acari: Hydrachnidia, Torrenticolidae) are reported from Iran. T. disabatinola and T. persica are described as new to science; first description are given of the female of T. saboorii Pesic & Asadi, 2002; new records are given for T. nana Di Sabatino & Gerecke, 2003, and T. cf. jasminae Bader, 1988.
Effects of additives on fermentation quality and in vitro digestibility of millet silage
A. Asadi, M. Alikhani, G. R. Ghorbani
Journal: Proceedings of the British Society of Animal Science / Volume 2004 / 2004
Published online by Cambridge University Press: 20 November 2017, p. 240
In arid and semiarid areas, the possibility of growing high yielding forages such as corn is limited. Therefore interests have been focused on growing plants which are highly able to express their potential in hot and dry conditions. Millet has recently received considerable attention as a suitable candidate. Some advantages include: rapid growth rate, relatively high resistance to drought and salinity, moderate protein content, palatability and absence of prussic acid. The objective of this experiment was to evaluate the fermentation quality and digestibility of millet silage as affected by various additives.
|
CommonCrawl
|
Derivatives of L-series of weakly holomorphic cusp forms
Derivatives of L-series of weakly holomorphic cusp forms Diamantis, Nikolaos; Strömberg, Fredrik 2022-12-01 00:00:00 [email protected] University of Nottingham, Based on the theory of L-series associated with weakly holomorphic modular forms in Nottingham, UK Diamantis et al. (L-series of harmonic Maass forms and a summation formula for harmonic lifts. arXiv:2107.12366), we derive explicit formulas for central values of derivatives of L-series as integrals with limits inside the upper half-plane. This has computational advantages, already in the case of classical holomorphic cusp forms and, in the last section, we discuss computational aspects and explicit examples. 1 Introduction As evidenced by the prominence of conjectures such as those of Birch–Swinnerton-Dyer, Beilinson, etc., central values of derivatives of L-series are key invariants of modular forms. Explicit forms of their values are therefore desirable, since they can lead to either theoretical or numerical insight about their nature. On the other hand, an extension of classical modular forms that allowed for poles at the cusps, the weakly holomorphic modular forms, has, more recently, been the focus of intense research, with Borcherd's work [1] representing an important highlight followed by further applications to arithmetic, combinatorial and other aspects, e.g. in [4,7,13,21], etc. A comprehensive overview of the foundations of the theory as well as a variety of important applications is provided in [2]. Up until relatively recently, L-series of weakly holomorphic modular forms had not been studied systematically. In fact, to our knowledge, a first definition was given in [3] in 2014. In work by the first author and his collaborators [11], a systematic approach for all harmonic Maass forms was proposed which led to functional equations, converse theorems, etc. A first application to special values of the L-series defined in [11] was given in [10], where results of [6] on cycle integrals were streamlined and generalised. Part of the work in [6] was based on an explicit formula of what could be thought of as the (at the time of writing of [6], not yet defined) central L-value of a weight 0 weakly holomorphic form. That formula had been suggested, in the case of the Hauptmodul, by Zagier. In [10]we interpreted those cycle integrals as values of the L-series defined in [11] and this allowed us to generalise the formulas of [6]. © The Author(s) 2022. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. 0123456789().,–: volV 64 Page 2 of 14 Diamantis, Strömberg Res Math Sci (2022) 9:64 Here, we extend that study to values of derivatives of L-series of weakly holomorphic forms. To state the main theorem, we will briefly introduce the terms involved, but we will discuss them in more detail in the next section. Let k ∈ 2N. We consider the action | of SL (R) on smooth functions f : H → C on the k 2 complex upper half-plane H, given by ab −k (f | γ )(z):= j(γ,z) f (γ z), for γ = ∈ SL (R), k 2 cd where j(γ,z):= cz + d. We further recall the defining formula for the Laplace transform L of a piecewise smooth complex-valued function ϕ on R. It is given by −st (Lϕ)(s):= e ϕ(t)dt (1.1) for each s ∈ C for which the integral converges absolutely. We use the same notation Lϕ for its analytic continuation to a larger domain, if such a continuation exists. Finally, if N ∈ N, 0 −1/ N W := √ . N 0 Let now f be a weakly holomorphic cusp form of weight k for (N), i.e. a meromorphic modular form whose poles may only lie at the cusps and its Fourier expansion at each cusp has a vanishing constant term. Assume that its Fourier expansion at infinity is given by 2πinz f (z) = a (n)e . (1.2) n≥−n n =0 Then, the L-series of f is defined in [11] as the map given by (ϕ) = a (n)(Lϕ)(2πn) (1.3) f f n≥−n n =0 for each ϕ in a certain family of functions on R which will be defined in the next section. The main object of concern in this note will be the specialisation of this L-series to a specific family of test functions: For (s, w) ∈ C × H we denote w s/2 −wt s−1 ϕ (t):= 1 (t)N e t , for t > 0, (1.4) s [1/ N,∞) where 1 denotes the characteristic function of X ⊂ R. We then set (f, s):= (ϕ ) (1.5) With this notation, we have Theorem 1.1 Let k ∈ 2N and m ∈ N. For each weakly holomorphic cusp form of weight k for (N) such that f | W = f, we have 0 N √ +1 k k m i k (m) 2m− j (m−j) 2 4 (f, k/2) = i N log √ f (z)ζ 1 − ,z dz j 2 N √ j=0 (r) where ζ (s, z) stands for the classical Hurwitz zeta function and ζ (s, z) = ζ (s, z). ∂s Our approach yields new expressions for derivatives of L-series of classical cusp forms too. Specifically, classical L-series can be expressed in terms of the L-series associated Diamantis, Strömberg Res Math Sci (2022) 9:64 Page 3 of 14 64 with weakly holomorphic forms in [11] in the following way: For a classical cusp form f of weight k and level N with L-series L (s), we consider its completed L-function L (s):= (s)L (s). 2π Then, as verified in Sect. 4,wehave ∗ ix ix L (s) = lim L (ϕ − ϕ ) f k−s x→0 ix for ϕ as in (1.4). Because of this, we can apply the method that led to Theorem 1.1,to deduce Theorem 4.2, a special case of which is the following: Theorem 1.2 For each weight 2 cusp form f of level N, such that f | W = fwehave 2 N √ √ (L ) (1) = 2 Ni f (z) log( (z)) + (log( N) − πi/2)z dz. In particular, this formula interprets the central value of the first derivative as an integral with limits inside the upper half-plane. After providing the theoretical background in Sect. 2 and provide proofs of Theorems 1.1 and 1.2 in Sects. 3 and 4,wewillpresent some remarks regarding computational aspects, potential applications and numerical examples of Theorems 1.1 and 1.2 in the final section. 2 L-series evaluated at test functions In [11], a new type of L-series was associated with general harmonic Maass forms and some basic theorems about it were proved. In this section, we will provide relevant results in the special case which we need here, namely weight k weakly holomorphic cusp forms for (N). We require some additional definitions to describe the set-up. Let C(R, C) be the space of piecewise smooth complex-valued functions on R. For each function f given by an absolutely convergent series of the form 2πinz f (z) = a (n)e , (2.1) n≥−n n =0 we let G be the space of functions ϕ ∈ C(R, C) such that (i) the integral defining Lϕ converges absolutely if (s) ≥ 2πN for some N ∈ N, (ii) the function Lϕ has an analytic continuation to {s ∈ H, (s) > −2πn − } and can be continuously extended to {s = 0; s ≥−2πn } (iii) the following series converges: |a(n)|(L|ϕ|) (2πn) . (2.2) n≥N n =0 We are now able to define the L-series and recall some results from [11]. Definition 2.1 Let f be a function on H given by the Fourier expansion (2.1). The L-series of f is defined to be the map : G → C such that, for ϕ ∈ G , f f f (ϕ) = a (n)(Lϕ)(2πn). (2.3) f f n≥N n =0 64 Page 4 of 14 Diamantis, Strömberg Res Math Sci (2022) 9:64 Furthermore, for Re(z) > 0, we recall the generalised exponential integral by ∞ −zt p−1 E (z):= z (1 − p, z) = dt (2.4) The function E (z) has an analytic continuation to C\(−∞, 0] as a function of z to give the principal branch of E (z). Specifically, from now on we will always consider the prin- cipal branch of the logarithm, so that −π< arg(z) ≤ π. Then, we define the analytic continuation of E (z) as in (8.19.8) and (8.19.10) of [17]tobe: (−z) p−1 z (1 − p) − for p ∈ C − N, k!(1−p+k) 0≤k E (z) = (2.5) p−1 k (−z) (−z) ⎪ (ψ(p) − log(z)) − for p ∈ N. ⎩ (p−1)! k!(1−p+k) 0≤k =p−1 Since the two series on the right hand side of (2.5) give entire functions, we can continu- ously extend E (z)to R . By (8.11.2) of [17], we also have the bound p <0 −z E (z) = O(e ), as z →∞in the wedge arg(z) < 3π/2. (2.6) A lemma that will be crucial is the sequel is: Lemma 2.2 [11] If Im(w) > 0,thenwehave i+∞ a iwz a−1 i E (w) = e z dz. (2.7) 1−a for all a ∈ R.IfIm(w) = 0 and Re(w) > 0,then (2.7) holds for all a < 0. Let S (N) denote the space of weakly holomorphic cusp forms of weight k for (N). Suppose that f ∈ S (N) has Fourier expansion (2.1) with respect to the cusp at ∞.By[5, Lemma 3.4], there exists a constant C > 0 such that C n a (n) = O e , as n →∞. (2.8) The L-series of f is then defined to be the map : G → C given in Definition 2.1. f f To describe the L-values and derivatives which we are interested in, we consider the family of test functions given by (1.4) and then set 2πn (f, s):= (ϕ ) = a (n)E √ . (2.9) 1−s f s f n=−n n =0 Remark 2.3 Though more similar in appearance to the usual L-series than (2.3), we do not consider (f, s) as the "canonical" L-series of f , because, in contrast to (ϕ)(seeTh. 3.5 of [10]), it does not satisfy a functional equation with respect to s. We formulate our results in terms of (f, s) to incorporate it into the setting of [6] and Zagier's formula mentioned in the introduction. The choice of , rather than L in the notation hints at the analogy with the "completed" version of the classical L-series, rather than with the L-series itself. By the proof of Lemma 4.1 of [10], or directly, we see that, for Re(w) > − , ϕ ∈ G and s 2πn + w w −2πnt−wt s−1 (ϕ ) = N a (n) e t dt = a (n)E √ . f f f 1−s √ N n≥−n n≥−n 0 N 0 n =0 n =0 (2.10) Diamantis, Strömberg Res Math Sci (2022) 9:64 Page 5 of 14 64 Because of (2.6) and the trivial bound for a (n), the series a (n)E ((2πn+w)/ N) f f 1−s n>0 converges absolutely and uniformly in compact subsets of {w ∈ H;Re(w) > − }, for each fixed s ∈ C. Since, in addition, E (z) is continuous from above at each z ∈ R ,we 1−s <0 deduce, by comparing with (2.9), that ix lim (ϕ ) = (f, s). x→0 Let now s ∈ R and x > 0. By Lemma 2.2, followed by a change of variables and (2.1), the sum (2.10) becomes i+∞ (2πn+ix)iz −s s−1 i a (n) e z dz n≥−n n =0 √ +∞ −s s/2 −xz s−1 = i N e f (z)z dz. (2.11) With the periodicity of f , we see that the last integral equals i i √ √ +n+1 +1 N N ix −xz s−1 −xz e f (z)z dz = e f (z)ζ 1 − s, ,z dz, i i 2π √ √ +n n=0 N N where 2πima −s ζ (s, a, z):= e (z + m) m=0 is the Lerch zeta function, which is well defined since x > 0. Therefore, we have the following: Proposition 2.4 For each f ∈ S (N) and for each x > 0 and s ∈ R,wehave N ix ix −s −xz (ϕ ) = i N e f (z)ζ 1 − s, ,z dz. 2π 3 Derivatives of (f, s) (m) Let m be a positive integer. By (ϕ ), we denote the mth derivative with respect to s. Equation (2.10)implies that d 2πn + w (m) (ϕ )| k = a (n) E √ . (3.1) 1−s s f f s= k 2 s= ds n≥−n n =0 By the absolute and uniform, in w with Re(w) > − , convergence of the piece of this series with n > 0, we deduce that the limit as w → 0 (from above) exists and, with (2.9), we have d 2πn (m) ix (m) lim (ϕ )| k = a (n) E √ = (f, k/2). (3.2) f 1−s f s= k 2 s= x→0 ds n≥−n n =0 On the other hand, we have d ix −s (i/ N) ζ 1 − s, ,z m k ds 2π s= √ √ N m N k ix m j (m−j) = (−1) (−1) log ζ 1 − , ,z . i j i 2 2π j=0 64 Page 6 of 14 Diamantis, Strömberg Res Math Sci (2022) 9:64 Using (3.2)and Prop. 2.4, we deduce that 2 j N m i (m) m (f, k/2) = (−1) log √ i j j=0 k ix −xz (m−j) × lim e f (z)ζ 1 − , ,z dz. (3.3) 2 2π x→0 √ We now use (8) of Sect. 1.11 of [14] according to which, for z ∈ H, s ∈ / N and x > 0small enough, we have ix (−x) −xz s−1 e ζ s, ,z = (1 − s)x + ζ (s − r, z) , (3.4) 2π r! r=0 where ζ (s, w) is the Hurwitz zeta function. This gives, for every ∈ N, ix (−x) −xz ( ) j (j) s−1 j ( ) e ζ s, ,z = (−1) (1 − s)x log x + ζ (s − r, z) (3.5) 2π r! j=0 r=0 and thus, k ix k k (−x) −xz ( ) j −k/2 (j) ( ) e ζ 1 − , ,z = (−1) x ( )log x+ ζ (1 − − r, z) . 2 2π 2 2 r! j=0 r=0 This implies that, for each j ∈ N,wehave k ix −xz ( ) e f (z)ζ 1 − , ,z dz 2 2π ⎛ ⎞ √ +1 k k j (j) − j ⎝ ⎠ = (−1) x log x f (z)dz j=0 r +1 (−x) N k ( ) + f (z)ζ 1 − − r, z dz. r! 2 r=0 Since f has a zero constant term in its Fourier expansion, it follows that i/ N +1 f (z)dz = 0. (3.6) i/ N Therefore, i i √ √ +1 +1 N k ix N k −xz ( ) ( ) lim e f (z)ζ 1 − , ,z dz = f (z)ζ 1 − ,z dz. + i i x→0 2 2π 2 √ √ N N (3.7) This, combined with (4.5), proves Theorem 1.1. In the case of weight 2, it simplifies to Corollary 3.1 For each f ∈ S (N) such that f | W = f, we have 2 N √ √ (f, 1) = Ni f (z) log( (z)) + (log( N) − πi/2)z dz. Proof If k = 2and m = 1, the formula of the theorem becomes i i √ √ +1 +1 √ √ N N (f, 1) = Ni log(i/ N) f (z)ζ (0,z)dz + f (z)ζ (0,z)dz . (3.8) i i √ √ N N Diamantis, Strömberg Res Math Sci (2022) 9:64 Page 7 of 14 64 The well-known identity ζ (0,z) = 1/2 − z and (3.6) imply that the first integral equals − f (z)zdz. For the second integral, we combine (3.6) with the identity (see, e.g. (10) of 1.10 of [14]) ζ (0,z) = log( (z)) − log(2π). From those formulas for the two integrals, we deduce the corollary. Finally, we comment on the relation between Theorem 1.2 (applying to holomorphic cusp forms) and Corollary 3.1 (applying to weakly holomorphic ones). Since a holomorphic cusp form is, of course, weakly holomorphic, Corollary 3.1 applies to it too and one might expect the two formulas to agree completely. However, the subject of Theorem 1.2 is a different L-series from the (f, s) appearing in Corollary 3.1,namely L (s). They both originate in the more general (ϕ) but they are not quite the same, L (s)being simply a "symmetrised" version of (f, s). This explains why the formulas are identical except for the factor of 2 in the formula for the central derivative of L (s). 4 L-functions associated with cusp forms and their derivatives The case of classical cusp forms and their L-functions can be accounted for by the same approach. However, the setting must be slightly adjusted, ultimately because of the lack of a functional equation for (f, s) when f is weakly holomorphic, as discussed in Remark 2.3. Specifically, we let f be a holomorphic cusp form of weight k for (N)withaFourier expansion 2πinz f (z) = a (n)e , (4.1) n>0 and such that 0 −1/ N f | W = f, for W = . k N N N 0 We recall the classical integral expression for the completed L-function of f : L (s):= (s)L (s) 2π ∞ ∞ s k−s s−1 k k−1−s (4.2) 2 2 = N f (it)t dt + i N f (it)t dt √ √ 1/ N 1/ N √ √ = a (n)E (2πn/ N) + i a (n)E (2πn/ N) f 1−s f s−k+1 n>0 n>0 We observe that, thanks to (2.6), this converges for all s ∈ C. The completed L-function can be recast in terms of the L-series formalism of [10] and the family of test functions given in (1.4). Indeed, if Re(w) > − , we have, w k w −2πnt−wt s−1 L (ϕ + i ϕ ) = N a (n) e t dt f f s k−s n>0 k−s k −2πnt−wt k−1−s +i N a (n) e t dt n>0 (2πn+w)t s−1 = a (n) e t dt n>0 (2πn+w)t − √ k k−1−s +i a (n) e t dt (4.3) n>0 64 Page 8 of 14 Diamantis, Strömberg Res Math Sci (2022) 9:64 As in the previous section (but more easily, since we do not have any terms with n < 0), the series converges absolutely and uniformly in compact subsets of {w ∈ H;Re(w) > − }, for each fixed s ∈ C. Hence, comparing with (4.2), we see that ix k ix ∗ lim L (ϕ + i ϕ ) = L (s). k−s f x→0 Let now s ∈ R and w ∈ H with Re(w) > − . By Lemma 2.2, followed by a change of variables and (4.1), the sum (4.3) becomes i+∞ i+∞ (2πn+w)iz (2πn+w)t √ √ −s s−1 k s−k k−1−s N N i a (n) e z dz + i i a (n) e t dz f f i i n>0 n>0 √ √ i/ N +∞ i/ N +∞ −s s/2 iwz s−1 s (k−s)/2 iwz k−1−s = i N e f (z)z dz + i N e f (z)z dz. (4.4) √ √ i/ N i/ N This is a "symmetrised" analogue of (2.11), and therefore, working similarly to the last section, we can deduce the following analogue of Prop. 2.4: Proposition 4.1 Let f ∈ S (N) such that f | W = f . For each w ∈ H with Re(w) > − k k and each s ∈ R,wehave N s w w k w iwz −s L (ϕ + i ϕ ) = e f (z) i N ζ 1 − s, ,z f s 2−s 2π k−s w +i N ζ s − k + 1, ,z dz. 2π To pass to derivatives, we let m be a positive integer. Equation (4.3)implies that (2πn+w)t − √ (m) w k w 2m+k −1 m N 2 L (ϕ + i ϕ )| = (1 + i ) a (n) e t log tdt. s 2−s f s= n>0 which is the analogue of (3.1) and thus, we can work in an entirely analogous way to the last section to obtain 2 j k N m i ∗ (m) 2m k (L ) = (i + i ) log √ 2 i j j=0 √ +1 k ix −xz (m−j) × lim e f (z)ζ 1 − , ,z dz. (4.5) x→0 2 2π Applying (8)ofSect.1.11of[14] as in the last section implies that this equals 2 m √ +1 N m i k k 2m j (m−j) (i + i ) log √ f (z)ζ 1 − ,z dz. i j 2 N √ j=0 ∗ s Since L (s) = ( N /(2π)) (s)L (s), this gives: Theorem 4.2 Let m be a positive integer. For each f ∈ S (N) such that f | W = fand k k N (j) L (k/2) = 0 for j < m, we have k 2m k i + i m i (m) j L = (−2πi) log √ 2 j − 1 ! j=0 √ +1 N k (m−j) × f (z)ζ 1 − ,z dz. N Diamantis, Strömberg Res Math Sci (2022) 9:64 Page 9 of 14 64 Theorem 1.2 follows from this exactly as in Corollary 3.1 once we take into account that, if k = 2and f | W = f , we automatically have L (1) = 0 by the classical functional 2 N f equation for f ∈ S (N). 5 Computational and algorithmic aspects Consider first the special case of a holomorphic cusp form f of weight k = 2 and level N, which is invariant under the Fricke involution W . Suppose that f has a Fourier expansion of the form (4.1). It is clear from (4.2) and symmetry that the central value L (1) is zero and the rth central derivative is zero, if r is even, and 2πn ∗ (r) r (L ) (1) = 2r! a (n)E √ , n>0 if r is odd. Here r −zt r −s E (z) = e (log t) t dt r! is (−1) /r!times the rth derivative of E (z) with respect to s. It is initially defined for (z) > 0 and can be extended to H ∪ R via (5.4)and (5.2) below. Using integration by <0 r 1 r−1 parts, it can be shown that E (z) = E (z), which leads to the expression 0 z 1 N 1 2πn ∗ (r) r−1 (L ) (1) = r! a(n) E √ . (5.1) f 1 π n n>0 This expression was first obtained by Buhler, Gross and Zagier in [8], where the authors used the following expression to evaluate E (z) for any m ≥ 1and z > 0 n−m−1 (−1) m n E (z) = G = P (− log z) + z . (5.2) m+1 m+1 m+1 n n! n≥1 Here, P (x) is a polynomial of degree r and if we write (1 + z) = γ z then r n n≥0 P (t) = γ . r r−j j! j=0 Extending this method to weights k ≥ 4 and weakly holomorphic modular forms is immediate. If f ∈ S (N) has Fourier expansion at infinity of the form (2.1) then the analogue of (4.2)is(2.9). Upon differentiating (2.9) r times with respect to s and setting s = k/2leads to 2πn (r) r (f, k/2) = r! a (n)E √ , (5.3) 1−k/2 n≥−n n =0 ∗ (m) k+2m (m) where we note that for a holomorphic f we have (L ) (k/2) = (1 + i ) (f, k/2). It follows that we need to evaluate E where n = k/2 − 1. To compare the complexity of −n these computations with the weight 2 case, we note that Milgram [16, (2.22)] showed that n−m m (n + 1) z m −z m l−1 m−l E (z) = e ξ + ξ E (z) , (5.4) −n l,n 0,n 1 n+1 z l! l=0 l=1 where ξ are constants independent of z and can be precomputed. Using this together l,n with (5.2), it follows that the computation essentially reduces to that of a finite sum of polynomials and an infinite rapidly convergent sum. 64 Page 10 of 14 Diamantis, Strömberg Res Math Sci (2022) 9:64 It is also worth to mention here that the general algorithm to compute values and derivatives of Motivic L-functions introduced by Dokchitser in [12] and implemented in PARI/GP [19], essentially reduces to that described above in the case of holomorphic modular forms. Furthermore, in both [8]and [12] the authors make additional use of asymptotic expansions to speed up computations of E (z) for large z. −n 5.1 The new integral formula Let f ∈ S ( (N)) be a weakly holomorphic cusp form of even integral weight k and that satisfies f | W = f . Then, Theorem 1.1 implies that m i (m) 2m−k/2 k/4 j (f, k/2) = i N log √ j=0 i/ N +1 (m−j) × f (z)ζ 1 − ,z dz, i/ N where (f, s)isdefinedin(1.5). When computing these values, it is clear that the main CPU time is spent on computing integrals of the form √ √ (r) I (f ) = f (x + i/ N)ζ 1 − k/2,x + i/ N dx, 0 ≤ r ≤ m. The cusp form f is given in terms of the Fourier expansion (2.1) for some n ≥ 0. To −D evaluate f (x + i/ N) up to a precision of ε = 10 for all x ∈ [0, 1], we can truncate the Fourier series at some integer M > 0. The precise choice of M depends on the available coefficient bounds. In case f is holomorphic then Deligne's bound can be used to show that we can choose M such that √ √ √ M > c k N log M + N(c D + c log( N(k/2)!)) + c 1 2 3 4 for some explicit positive constants c ,c ,c and c , independent of N, D and k. However, 1 2 3 4 if f is not holomorphic then we only have the non-explicit bound (2.8)and M must satisfy √ √ √ M > c N M + c ND + c N log N, 1 2 3 where c ,c ,c and c are positive constants that depend on f and can be computed in 1 2 3 4 special cases using Poincaré series. In both cases we From both inequalities above it is clear that as the level or weight increases we need a larger number of coefficients, which increases the number of arithmetic operations needed. Note that the working precision might also need to be increased due to cancellation errors. To evaluate the Hurwitz zeta function and its derivatives, it is possible to use, for instance, the Euler–Maclaurin formula M−1 1−s 1 (z + M) ζ (s, z) = + (n + z) s − 1 n=0 1 1 B (s) 2l 2l−1 + + + Err(M, L), 2l−1 (z + M) 2 (2l!) (z + M) l=1 where M, L ≥ 1 and where the error term Err(M, L) can be explicitly bounded. For more details, including proof and analysis of rigorous error bounds and choice of parameters, (r) see [15], where the generalisation to derivatives ζ (s, z) is also included. In our case, s = 1 − k/2and z = x + i/ N with 0 ≤ x ≤ 1. It is easy to use Theorem 1 of [15] to show that if M > 1and L > k/4 then 2k 2M |(1 − k/2) | 2L Err(M, L) ≤ , 2L (2πM) L − k/4 Diamantis, Strömberg Res Math Sci (2022) 9:64 Page 11 of 14 64 where (s) = s(s + 1) ··· (s + m − 1) is the usual Pochhammer symbol. Furthermore, if the right-hand side above is denoted by B then it can be shown that the error in the Euler–Maclaurin formula for the rth derivative can be bounded by B · r! log(8(M + 1)) . In [15], it is observed that to obtain D digits of precision we should choose M ∼ L ∼ D, meaning that the number of terms in both sums is proportional to D. It is also clear that as k or r increases we will need larger values of M and L. Example 5.1 Consider f ∈ S (37) and standard double precision, i.e. 53 bits or 15 (deci- √ √ (r) mal) digits. Then, a single evaluation of f (x + i/ 37) takes 271μs while ζ (0,x + i/ 37) takes 2μs, 114μs, 124μs, 171μs for r = 1, 2, 3 and 20, respectively. 5.2 Comments on the implementation There are a few simple optimisations that can be applied immediately to decrease the number of necessary function evaluations. √ √ • Replace the sum of integrals by f (x + i/ N)Z (x + i/ N)dx, where m i k j (m−j) Z (z) = log ζ 1 − ,z . j 2 j=0 √ √ •If f (z) has real Fourier coefficients then f (1 − x + i/ N) = f (x + i/ N), which is very useful as we can choose the numerical integration method with nodes that are symmetric with respect to x = 1/2. (r) • If we need to compute (f, k/2) for a sequence of rs, then function values of f and (j) lower derivatives ζ can be cached in each step provided that the we use the same nodes for the numerical integration. As the main goal of this paper is to present a new formula and not to present an opti- mised efficient algorithm as such, we have implemented all algorithms in SageMath using the mpmath Python library for the Hurwitz zeta function evaluations as well as for the numerical integration using Gauss–Legendre quadrature. The implementation used to calculate the examples below can be found in a Jupyter notebook which is available from [20]. 5.3 Examples of holomorphic forms To demonstrate the veracity of the formulas in this paper, we first present a comparison of results and indicative timings between the new formula in this paper and Dokchitser's algorithm in PARI (interfaced through SageMath). Table 1 includes three holomorphic cusp forms 37.2.a.a, 127.4.a.a and 5077.2.a.a, labelled accordingtothe LMFDB[18]. These are all invariant under the Fricke involution and it is known that the analytic ranks are 1, 2 and 3, respectively. The last column gives the difference between the values computed by Dokchitser's algorithm and the integral formula. As the level increases, we find that f (x + i/ N) oscillates more and more and it is necessary to increase the degree of the Legendre polynomials used in the Gauss–Legendre quadrature. The comparison of timings in Table 1 indicates that our new formula is slower than Dokchitser's algorithm but it is important to keep in mind the latter is implemented in the PARI C library and is compiled while our formula is simply implemented directly in 64 Page 12 of 14 Diamantis, Strömberg Res Math Sci (2022) 9:64 ∗ (r) Table 1 Central derivatives (L ) (k/2) for f ∈ S ( (N)) k 0 Nk Label r Dokchitser/PARI Time (ms) Integral formula Time (ms) Error −17 37 2 37.2.a.a 1 0.296238908699801 18 0.2962389086998011 49 6 × 10 −10 127 4 127.4.a.a 2 7.83323138624802 42 7.8332313863855996+ 186 1 × 10 −11 5077 4 5077.2.a.a 3 117.837959237940 212 117.83795923792273+ 2000 2 × 10 SageMath using the mpmath Python library. All CPU times presented below are obtained on a 2GHz Intel Xeon Quad Core and we stress that the times should not be taken as absolute performance measures but simply to provide comparisons between different input and parameter values. 5.4 Examples of weakly holomorphic modular forms To construct weakly modular cusp forms, we use the Dedekind eta functions η(τ) = q 1 − q . ( ) n≥1 If we define + 8 2 3 4 5 (τ) = (η(τ)η(2τ)) = q − 8q + 12q + 64q + O(q ) and + 24 12 24 j (τ) = (η(τ)/η(2τ)) + 24 + 2 (η(2τ)/η(τ)) −1 2 3 4 = q + 4372q + 96256q + 1240002q + O(q ) + + then it can be shown that ∈ S ( (2)) and j ∈ S ( (2)) are both invariant under the 8 0 0 2 2 0 Fricke involution W . The following holomorphic and weakly holomorphic modular forms of weight 16 on (2) were introduced by Choi and Kim [9] to study weakly holomorphic Hecke eigenforms. + 2 2 3 4 f (τ) = (τ) = q − 16q + O(q ) 16,−2 + 2 + 3 4 f (τ) = (τ) (j (τ) + 16) = q + 4204q + O(q ) 16,−1 2 2 + 2 + 2 + 3 4 f (τ) = (τ) (j (τ) + 16j (τ) − 8576) = 1 + 261120q + O(q ) 16,0 2 2 2 + 2 + 3 + 2 + f (τ) = (τ) (j (τ) + 16j (τ) − 12948j (τ) − 427328) 16,1 2 2 2 2 −1 3 4 = q + 7525650q + O(q ) + 2 + 4 + 3 + 2 + f (τ) = (τ) (j (τ) + 16j (τ) − 17320j (τ) − 593536j (τ) − 27188524) 16,2 2 2 2 2 2 −2 3 4 = q + 140479808q + O(q ) and it is easy to see that all of these functions are also invariant under W . Furthermore, f ,f ∈ S ( (2)) and f ,f ∈ S ( (2)) while f is not cuspidal. 16,−2 16,−1 16 0 16,1 16,2 0 16,0 To check the accuracy of our formula in this setting, we first consider the holomorphic cusp forms. Observe that the unique newform of level 2 and weight 16 is 2 3 4 5 6 f (τ) = q − 128q + 6252q + 16384q + 90510q + O(q ) = f − 128f . 16,−1 16,−2 Using Dokchitser's algorithm, we find that L (8) = 0.0526855929956408, while using the integral formula with 53 bits precision, we obtain ∗ −20 L (8) = 0.00008045589767063483 + 6 · 10 i, 16,−2 ∗ −17 L (8) = 0.06298394789748197609 + 3 · 10 i, 16,−1 Diamantis, Strömberg Res Math Sci (2022) 9:64 Page 13 of 14 64 (r) Table 2 (f , 8) computed using the integral formula with 16,i 103 bits precision (r) ir (f , 8) T/ms Err. 16,i −31 −30 10 −0.2035186511755524285671725692737 + 1 × 10 204 6 × 10 −30 111.1597162067012225517004253561026 − 0.104294509255933530762675132394i 975 9 × 10 −30 12 −0.3329012203856171470128799683152 − 0.109371149169408369683239573058i 1790 7 × 10 −30 −27 20 −1.8934024663352144735029014555039 + 1 × 10 209 1 × 10 −28 2155.394013302380372465449909213930 − 0.000407400426780990354541699709i 996 2 × 10 −28 22 −0.1484917546377626240694524994979 + 0.000137545862921322355701592298i 1880 1 × 10 (r) Table 3 (f , 8) computed using the sum with 103 bits 16,i precision (r) ir (f , 8) T/ms Err. 16,i −17 10 −0.20351865117555238 10 4 × 10 3 −15 111.15971620670121522423 − 0.104294509255934i 11 × 10 8 × 10 3 −15 12 −0.33290122038562486306 − 0.109371149169408i 21 × 10 8 × 10 −14 20 −1.89340246633520092878 11 2 × 10 3 −14 2155.3940133023803440437 − 0.000407400426780990i 14 × 10 4 × 10 3 −14 22 −0.14849175463777442019 + 0.000137545862921322i 26 × 10 2 × 10 and ∗ ∗ −17 L (8) − 128L (8) = 0.05268559299564071785 + 2 · 10 i, f f 16,−1 16,−2 which agrees with the value of L (8) above. (r) Table 2 gives the values of (f , 8) for the weakly holomorphic modular forms 16,i f and f , computed using the integral formula with 103 bits working precision. The 16,1 16,2 table contains an indication of timings as well as a heuristic error estimate based on a comparison with the same value computed using 203 bits precision. To provide some independent verification of the algorithm in the case of weakly mod- ular forms, we also implemented the generalisation of the algorithm from [8]using (5.3) directly with E evaluated using (5.4)and (5.2). The main obstacle with the algorithm 1−k/2 modelled on [8] is that the infinite sum in (5.2) suffers from catastrophic cancellation for large z unless the working precision is temporarily increased within the sum. The (r) corresponding values of (f , 8) computed using the algorithm with 103 bits starting 16,i precision are given in Table 3 where we also give the corresponding timings as well as an error estimate based on comparison with values in Table 2. Acknowledgements We thank the referees for their insightful comments and helpful suggestions. We also thank D. Goldfeld for helpful and encouraging comments on the manuscript. Part of the work was done while the first author was visiting Max Planck Institute for Mathematics in Bonn, whose hospitality he acknowledges. Research on this work is partially supported by the authors' EPSRC Grants (ND: EP/S032460/1 FS: EP/V026321/1). Data Availability Statement All data generated and analysed during this study are included in this published article. Further data can be obtained by using the program available at [20] with different input parameters. Received: 21 September 2022 Accepted: 10 October 2022 Published online: 26 October 2022 References 1. Borcherds, R.: Automorphic forms with singularities on Grassmannians. Invent. Math. 132, 491–562 (1998) 2. Bringmann, K., Folsom, A., Ono, K., Rolen, L.: Harmonic Maass Forms and Mock Modular Forms: Theory and Applications, American Mathematical Society Colloquium Publications, vol. 64. American Mathematical Society, Providence (2017) 64 Page 14 of 14 Diamantis, Strömberg Res Math Sci (2022) 9:64 3. Bringmann, K., Fricke, K.H., Kent, Z.: Special L-values and periods of weakly holomorphic modular forms. Proc. Am. Math. Soc. 142(10), 3425–3439 (2014) 4. Bringmann, K., Ono, K.: The f (q) mock theta function conjecture and partition ranks. Invent. Math. 165(2), 243–266 (2006) 5. Bruinier, J., Funke, J.: On two geometric theta lifts. Duke Math. J. 125(1), 45–90 (2004) 6. Bruinier, J., Funke, J., Imamoglu, Ö.: Regularized theta liftings and periods of modular functions. J. Reine Angew. Math. 703, 43–93 (2015) 7. Bruinier, J., Ono, K.: Heegner divisors, L-functions and harmonic weak Maass forms. Ann. Math. (2) 172(3), 2135–2181 (2010) 8. Buhler, J.P., Gross, B.H., Zagier, D.B.: On the conjecture of Birch and Swinnerton-Dyer for an elliptic curve of rank 3. Math. Comput. 44(170), 473–481 (1985) 9. Choi, S.-Y., Kim, C.-H.: Weakly holomorphic Hecke eigenforms and Hecke eigenpolynomials. Adv. Math. 290, 144–162 (2016) 10. Diamantis, N., Rolen, L.: L-values of harmonic Maass forms (submitted). arXiv:2201.10193 11. Diamantis, N., Lee, M., Raji, W., Rolen, L.: L-series of harmonic Maass forms and a summation formula for harmonic lifts (submitted). arXiv:2107.12366 12. Dokchitser, T.: Computing special values of motivic L-functions. Exp. Math. 13(2), 137–149 (2004) 13. Duke, W., Imamoglu, Ö., Tóth, A.: Cycle integrals of the j-functions and mock modular forms. Ann. Math. (2) 173, 947–981 (2011) 14. Erdelyi, A., Magnus, W., Oberhettinger, F., Tricomi, F.G.: (The Bateman Manuscript Project), Higher Transcendental Functions, vol. I. McGraw-Hill, New York (1953) 15. Johansson, F.: Rigorous high-precision computation of the Hurwitz zeta function and its derivatives. Numer. Algo- rithms 69(2), 253–270 (2015) 16. Milgram, M.S.: The generalized integro-exponential function. Math. Comput. 44(170), 443–458 (1985) 17. Olver, F., Lozier, D., Boisvert, R., Clark, C. (eds.): NIST Handbook of Mathematical Functions. U.S. Department of Commerce, National Institute of Standards and Technology, Washington, DC (2004) 18. The LMFDB Collaboration. The L-functions and modular forms database. http://www.lmfdb.org (2022). Accessed 18 Sept 2022 19. The PARI Group, Univ. Bordeaux. PARI/GP version 2.13.4. http://pari.math.u-bordeaux.fr/ (2022) 20. Strömberg, F.: Algorithms and examples for derivatives of L-series available from https://github.com/fredstro/ derivatives_lseries 21. Zwegers, S. Mock.: θ-functions and real analytic modular forms in "q-Series with Applications to Combinatorics, Number Theory, and Physics" (Urbana: Contemporary Mathematics, 291, American Mathematical Society, Providence, 2001, 269–277 (2000) Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Research in the Mathematical Sciences Springer Journals http://www.deepdyve.com/lp/springer-journals/derivatives-of-l-series-of-weakly-holomorphic-cusp-forms-6jMlFa5090
Diamantis, Nikolaos; Strömberg, Fredrik
Research in the Mathematical Sciences
, Volume 9 (4) – Dec 1, 2022
/lp/springer-journals/derivatives-of-l-series-of-weakly-holomorphic-cusp-forms-6jMlFa5090
Research in the Mathematical Sciences /
Springer Journals
Copyright © The Author(s) 2022
[email protected] University of Nottingham, Based on the theory of L-series associated with weakly holomorphic modular forms in Nottingham, UK Diamantis et al. (L-series of harmonic Maass forms and a summation formula for harmonic lifts. arXiv:2107.12366), we derive explicit formulas for central values of derivatives of L-series as integrals with limits inside the upper half-plane. This has computational advantages, already in the case of classical holomorphic cusp forms and, in the last section, we discuss computational aspects and explicit examples. 1 Introduction As evidenced by the prominence of conjectures such as those of Birch–Swinnerton-Dyer, Beilinson, etc., central values of derivatives of L-series are key invariants of modular forms. Explicit forms of their values are therefore desirable, since they can lead to either theoretical or numerical insight about their nature. On the other hand, an extension of classical modular forms that allowed for poles at the cusps, the weakly holomorphic modular forms, has, more recently, been the focus of intense research, with Borcherd's work [1] representing an important highlight followed by further applications to arithmetic, combinatorial and other aspects, e.g. in [4,7,13,21], etc. A comprehensive overview of the foundations of the theory as well as a variety of important applications is provided in [2]. Up until relatively recently, L-series of weakly holomorphic modular forms had not been studied systematically. In fact, to our knowledge, a first definition was given in [3] in 2014. In work by the first author and his collaborators [11], a systematic approach for all harmonic Maass forms was proposed which led to functional equations, converse theorems, etc. A first application to special values of the L-series defined in [11] was given in [10], where results of [6] on cycle integrals were streamlined and generalised. Part of the work in [6] was based on an explicit formula of what could be thought of as the (at the time of writing of [6], not yet defined) central L-value of a weight 0 weakly holomorphic form. That formula had been suggested, in the case of the Hauptmodul, by Zagier. In [10]we interpreted those cycle integrals as values of the L-series defined in [11] and this allowed us to generalise the formulas of [6]. © The Author(s) 2022. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. 0123456789().,–: volV 64 Page 2 of 14 Diamantis, Strömberg Res Math Sci (2022) 9:64 Here, we extend that study to values of derivatives of L-series of weakly holomorphic forms. To state the main theorem, we will briefly introduce the terms involved, but we will discuss them in more detail in the next section. Let k ∈ 2N. We consider the action | of SL (R) on smooth functions f : H → C on the k 2 complex upper half-plane H, given by ab −k (f | γ )(z):= j(γ,z) f (γ z), for γ = ∈ SL (R), k 2 cd where j(γ,z):= cz + d. We further recall the defining formula for the Laplace transform L of a piecewise smooth complex-valued function ϕ on R. It is given by −st (Lϕ)(s):= e ϕ(t)dt (1.1) for each s ∈ C for which the integral converges absolutely. We use the same notation Lϕ for its analytic continuation to a larger domain, if such a continuation exists. Finally, if N ∈ N, 0 −1/ N W := √ . N 0 Let now f be a weakly holomorphic cusp form of weight k for (N), i.e. a meromorphic modular form whose poles may only lie at the cusps and its Fourier expansion at each cusp has a vanishing constant term. Assume that its Fourier expansion at infinity is given by 2πinz f (z) = a (n)e . (1.2) n≥−n n =0 Then, the L-series of f is defined in [11] as the map given by (ϕ) = a (n)(Lϕ)(2πn) (1.3) f f n≥−n n =0 for each ϕ in a certain family of functions on R which will be defined in the next section. The main object of concern in this note will be the specialisation of this L-series to a specific family of test functions: For (s, w) ∈ C × H we denote w s/2 −wt s−1 ϕ (t):= 1 (t)N e t , for t > 0, (1.4) s [1/ N,∞) where 1 denotes the characteristic function of X ⊂ R. We then set (f, s):= (ϕ ) (1.5) With this notation, we have Theorem 1.1 Let k ∈ 2N and m ∈ N. For each weakly holomorphic cusp form of weight k for (N) such that f | W = f, we have 0 N √ +1 k k m i k (m) 2m− j (m−j) 2 4 (f, k/2) = i N log √ f (z)ζ 1 − ,z dz j 2 N √ j=0 (r) where ζ (s, z) stands for the classical Hurwitz zeta function and ζ (s, z) = ζ (s, z). ∂s Our approach yields new expressions for derivatives of L-series of classical cusp forms too. Specifically, classical L-series can be expressed in terms of the L-series associated Diamantis, Strömberg Res Math Sci (2022) 9:64 Page 3 of 14 64 with weakly holomorphic forms in [11] in the following way: For a classical cusp form f of weight k and level N with L-series L (s), we consider its completed L-function L (s):= (s)L (s). 2π Then, as verified in Sect. 4,wehave ∗ ix ix L (s) = lim L (ϕ − ϕ ) f k−s x→0 ix for ϕ as in (1.4). Because of this, we can apply the method that led to Theorem 1.1,to deduce Theorem 4.2, a special case of which is the following: Theorem 1.2 For each weight 2 cusp form f of level N, such that f | W = fwehave 2 N √ √ (L ) (1) = 2 Ni f (z) log( (z)) + (log( N) − πi/2)z dz. In particular, this formula interprets the central value of the first derivative as an integral with limits inside the upper half-plane. After providing the theoretical background in Sect. 2 and provide proofs of Theorems 1.1 and 1.2 in Sects. 3 and 4,wewillpresent some remarks regarding computational aspects, potential applications and numerical examples of Theorems 1.1 and 1.2 in the final section. 2 L-series evaluated at test functions In [11], a new type of L-series was associated with general harmonic Maass forms and some basic theorems about it were proved. In this section, we will provide relevant results in the special case which we need here, namely weight k weakly holomorphic cusp forms for (N). We require some additional definitions to describe the set-up. Let C(R, C) be the space of piecewise smooth complex-valued functions on R. For each function f given by an absolutely convergent series of the form 2πinz f (z) = a (n)e , (2.1) n≥−n n =0 we let G be the space of functions ϕ ∈ C(R, C) such that (i) the integral defining Lϕ converges absolutely if (s) ≥ 2πN for some N ∈ N, (ii) the function Lϕ has an analytic continuation to {s ∈ H, (s) > −2πn − } and can be continuously extended to {s = 0; s ≥−2πn } (iii) the following series converges: |a(n)|(L|ϕ|) (2πn) . (2.2) n≥N n =0 We are now able to define the L-series and recall some results from [11]. Definition 2.1 Let f be a function on H given by the Fourier expansion (2.1). The L-series of f is defined to be the map : G → C such that, for ϕ ∈ G , f f f (ϕ) = a (n)(Lϕ)(2πn). (2.3) f f n≥N n =0 64 Page 4 of 14 Diamantis, Strömberg Res Math Sci (2022) 9:64 Furthermore, for Re(z) > 0, we recall the generalised exponential integral by ∞ −zt p−1 E (z):= z (1 − p, z) = dt (2.4) The function E (z) has an analytic continuation to C\(−∞, 0] as a function of z to give the principal branch of E (z). Specifically, from now on we will always consider the prin- cipal branch of the logarithm, so that −π< arg(z) ≤ π. Then, we define the analytic continuation of E (z) as in (8.19.8) and (8.19.10) of [17]tobe: (−z) p−1 z (1 − p) − for p ∈ C − N, k!(1−p+k) 0≤k E (z) = (2.5) p−1 k (−z) (−z) ⎪ (ψ(p) − log(z)) − for p ∈ N. ⎩ (p−1)! k!(1−p+k) 0≤k =p−1 Since the two series on the right hand side of (2.5) give entire functions, we can continu- ously extend E (z)to R . By (8.11.2) of [17], we also have the bound p <0 −z E (z) = O(e ), as z →∞in the wedge arg(z) < 3π/2. (2.6) A lemma that will be crucial is the sequel is: Lemma 2.2 [11] If Im(w) > 0,thenwehave i+∞ a iwz a−1 i E (w) = e z dz. (2.7) 1−a for all a ∈ R.IfIm(w) = 0 and Re(w) > 0,then (2.7) holds for all a < 0. Let S (N) denote the space of weakly holomorphic cusp forms of weight k for (N). Suppose that f ∈ S (N) has Fourier expansion (2.1) with respect to the cusp at ∞.By[5, Lemma 3.4], there exists a constant C > 0 such that C n a (n) = O e , as n →∞. (2.8) The L-series of f is then defined to be the map : G → C given in Definition 2.1. f f To describe the L-values and derivatives which we are interested in, we consider the family of test functions given by (1.4) and then set 2πn (f, s):= (ϕ ) = a (n)E √ . (2.9) 1−s f s f n=−n n =0 Remark 2.3 Though more similar in appearance to the usual L-series than (2.3), we do not consider (f, s) as the "canonical" L-series of f , because, in contrast to (ϕ)(seeTh. 3.5 of [10]), it does not satisfy a functional equation with respect to s. We formulate our results in terms of (f, s) to incorporate it into the setting of [6] and Zagier's formula mentioned in the introduction. The choice of , rather than L in the notation hints at the analogy with the "completed" version of the classical L-series, rather than with the L-series itself. By the proof of Lemma 4.1 of [10], or directly, we see that, for Re(w) > − , ϕ ∈ G and s 2πn + w w −2πnt−wt s−1 (ϕ ) = N a (n) e t dt = a (n)E √ . f f f 1−s √ N n≥−n n≥−n 0 N 0 n =0 n =0 (2.10) Diamantis, Strömberg Res Math Sci (2022) 9:64 Page 5 of 14 64 Because of (2.6) and the trivial bound for a (n), the series a (n)E ((2πn+w)/ N) f f 1−s n>0 converges absolutely and uniformly in compact subsets of {w ∈ H;Re(w) > − }, for each fixed s ∈ C. Since, in addition, E (z) is continuous from above at each z ∈ R ,we 1−s <0 deduce, by comparing with (2.9), that ix lim (ϕ ) = (f, s). x→0 Let now s ∈ R and x > 0. By Lemma 2.2, followed by a change of variables and (2.1), the sum (2.10) becomes i+∞ (2πn+ix)iz −s s−1 i a (n) e z dz n≥−n n =0 √ +∞ −s s/2 −xz s−1 = i N e f (z)z dz. (2.11) With the periodicity of f , we see that the last integral equals i i √ √ +n+1 +1 N N ix −xz s−1 −xz e f (z)z dz = e f (z)ζ 1 − s, ,z dz, i i 2π √ √ +n n=0 N N where 2πima −s ζ (s, a, z):= e (z + m) m=0 is the Lerch zeta function, which is well defined since x > 0. Therefore, we have the following: Proposition 2.4 For each f ∈ S (N) and for each x > 0 and s ∈ R,wehave N ix ix −s −xz (ϕ ) = i N e f (z)ζ 1 − s, ,z dz. 2π 3 Derivatives of (f, s) (m) Let m be a positive integer. By (ϕ ), we denote the mth derivative with respect to s. Equation (2.10)implies that d 2πn + w (m) (ϕ )| k = a (n) E √ . (3.1) 1−s s f f s= k 2 s= ds n≥−n n =0 By the absolute and uniform, in w with Re(w) > − , convergence of the piece of this series with n > 0, we deduce that the limit as w → 0 (from above) exists and, with (2.9), we have d 2πn (m) ix (m) lim (ϕ )| k = a (n) E √ = (f, k/2). (3.2) f 1−s f s= k 2 s= x→0 ds n≥−n n =0 On the other hand, we have d ix −s (i/ N) ζ 1 − s, ,z m k ds 2π s= √ √ N m N k ix m j (m−j) = (−1) (−1) log ζ 1 − , ,z . i j i 2 2π j=0 64 Page 6 of 14 Diamantis, Strömberg Res Math Sci (2022) 9:64 Using (3.2)and Prop. 2.4, we deduce that 2 j N m i (m) m (f, k/2) = (−1) log √ i j j=0 k ix −xz (m−j) × lim e f (z)ζ 1 − , ,z dz. (3.3) 2 2π x→0 √ We now use (8) of Sect. 1.11 of [14] according to which, for z ∈ H, s ∈ / N and x > 0small enough, we have ix (−x) −xz s−1 e ζ s, ,z = (1 − s)x + ζ (s − r, z) , (3.4) 2π r! r=0 where ζ (s, w) is the Hurwitz zeta function. This gives, for every ∈ N, ix (−x) −xz ( ) j (j) s−1 j ( ) e ζ s, ,z = (−1) (1 − s)x log x + ζ (s − r, z) (3.5) 2π r! j=0 r=0 and thus, k ix k k (−x) −xz ( ) j −k/2 (j) ( ) e ζ 1 − , ,z = (−1) x ( )log x+ ζ (1 − − r, z) . 2 2π 2 2 r! j=0 r=0 This implies that, for each j ∈ N,wehave k ix −xz ( ) e f (z)ζ 1 − , ,z dz 2 2π ⎛ ⎞ √ +1 k k j (j) − j ⎝ ⎠ = (−1) x log x f (z)dz j=0 r +1 (−x) N k ( ) + f (z)ζ 1 − − r, z dz. r! 2 r=0 Since f has a zero constant term in its Fourier expansion, it follows that i/ N +1 f (z)dz = 0. (3.6) i/ N Therefore, i i √ √ +1 +1 N k ix N k −xz ( ) ( ) lim e f (z)ζ 1 − , ,z dz = f (z)ζ 1 − ,z dz. + i i x→0 2 2π 2 √ √ N N (3.7) This, combined with (4.5), proves Theorem 1.1. In the case of weight 2, it simplifies to Corollary 3.1 For each f ∈ S (N) such that f | W = f, we have 2 N √ √ (f, 1) = Ni f (z) log( (z)) + (log( N) − πi/2)z dz. Proof If k = 2and m = 1, the formula of the theorem becomes i i √ √ +1 +1 √ √ N N (f, 1) = Ni log(i/ N) f (z)ζ (0,z)dz + f (z)ζ (0,z)dz . (3.8) i i √ √ N N Diamantis, Strömberg Res Math Sci (2022) 9:64 Page 7 of 14 64 The well-known identity ζ (0,z) = 1/2 − z and (3.6) imply that the first integral equals − f (z)zdz. For the second integral, we combine (3.6) with the identity (see, e.g. (10) of 1.10 of [14]) ζ (0,z) = log( (z)) − log(2π). From those formulas for the two integrals, we deduce the corollary. Finally, we comment on the relation between Theorem 1.2 (applying to holomorphic cusp forms) and Corollary 3.1 (applying to weakly holomorphic ones). Since a holomorphic cusp form is, of course, weakly holomorphic, Corollary 3.1 applies to it too and one might expect the two formulas to agree completely. However, the subject of Theorem 1.2 is a different L-series from the (f, s) appearing in Corollary 3.1,namely L (s). They both originate in the more general (ϕ) but they are not quite the same, L (s)being simply a "symmetrised" version of (f, s). This explains why the formulas are identical except for the factor of 2 in the formula for the central derivative of L (s). 4 L-functions associated with cusp forms and their derivatives The case of classical cusp forms and their L-functions can be accounted for by the same approach. However, the setting must be slightly adjusted, ultimately because of the lack of a functional equation for (f, s) when f is weakly holomorphic, as discussed in Remark 2.3. Specifically, we let f be a holomorphic cusp form of weight k for (N)withaFourier expansion 2πinz f (z) = a (n)e , (4.1) n>0 and such that 0 −1/ N f | W = f, for W = . k N N N 0 We recall the classical integral expression for the completed L-function of f : L (s):= (s)L (s) 2π ∞ ∞ s k−s s−1 k k−1−s (4.2) 2 2 = N f (it)t dt + i N f (it)t dt √ √ 1/ N 1/ N √ √ = a (n)E (2πn/ N) + i a (n)E (2πn/ N) f 1−s f s−k+1 n>0 n>0 We observe that, thanks to (2.6), this converges for all s ∈ C. The completed L-function can be recast in terms of the L-series formalism of [10] and the family of test functions given in (1.4). Indeed, if Re(w) > − , we have, w k w −2πnt−wt s−1 L (ϕ + i ϕ ) = N a (n) e t dt f f s k−s n>0 k−s k −2πnt−wt k−1−s +i N a (n) e t dt n>0 (2πn+w)t s−1 = a (n) e t dt n>0 (2πn+w)t − √ k k−1−s +i a (n) e t dt (4.3) n>0 64 Page 8 of 14 Diamantis, Strömberg Res Math Sci (2022) 9:64 As in the previous section (but more easily, since we do not have any terms with n < 0), the series converges absolutely and uniformly in compact subsets of {w ∈ H;Re(w) > − }, for each fixed s ∈ C. Hence, comparing with (4.2), we see that ix k ix ∗ lim L (ϕ + i ϕ ) = L (s). k−s f x→0 Let now s ∈ R and w ∈ H with Re(w) > − . By Lemma 2.2, followed by a change of variables and (4.1), the sum (4.3) becomes i+∞ i+∞ (2πn+w)iz (2πn+w)t √ √ −s s−1 k s−k k−1−s N N i a (n) e z dz + i i a (n) e t dz f f i i n>0 n>0 √ √ i/ N +∞ i/ N +∞ −s s/2 iwz s−1 s (k−s)/2 iwz k−1−s = i N e f (z)z dz + i N e f (z)z dz. (4.4) √ √ i/ N i/ N This is a "symmetrised" analogue of (2.11), and therefore, working similarly to the last section, we can deduce the following analogue of Prop. 2.4: Proposition 4.1 Let f ∈ S (N) such that f | W = f . For each w ∈ H with Re(w) > − k k and each s ∈ R,wehave N s w w k w iwz −s L (ϕ + i ϕ ) = e f (z) i N ζ 1 − s, ,z f s 2−s 2π k−s w +i N ζ s − k + 1, ,z dz. 2π To pass to derivatives, we let m be a positive integer. Equation (4.3)implies that (2πn+w)t − √ (m) w k w 2m+k −1 m N 2 L (ϕ + i ϕ )| = (1 + i ) a (n) e t log tdt. s 2−s f s= n>0 which is the analogue of (3.1) and thus, we can work in an entirely analogous way to the last section to obtain 2 j k N m i ∗ (m) 2m k (L ) = (i + i ) log √ 2 i j j=0 √ +1 k ix −xz (m−j) × lim e f (z)ζ 1 − , ,z dz. (4.5) x→0 2 2π Applying (8)ofSect.1.11of[14] as in the last section implies that this equals 2 m √ +1 N m i k k 2m j (m−j) (i + i ) log √ f (z)ζ 1 − ,z dz. i j 2 N √ j=0 ∗ s Since L (s) = ( N /(2π)) (s)L (s), this gives: Theorem 4.2 Let m be a positive integer. For each f ∈ S (N) such that f | W = fand k k N (j) L (k/2) = 0 for j < m, we have k 2m k i + i m i (m) j L = (−2πi) log √ 2 j − 1 ! j=0 √ +1 N k (m−j) × f (z)ζ 1 − ,z dz. N Diamantis, Strömberg Res Math Sci (2022) 9:64 Page 9 of 14 64 Theorem 1.2 follows from this exactly as in Corollary 3.1 once we take into account that, if k = 2and f | W = f , we automatically have L (1) = 0 by the classical functional 2 N f equation for f ∈ S (N). 5 Computational and algorithmic aspects Consider first the special case of a holomorphic cusp form f of weight k = 2 and level N, which is invariant under the Fricke involution W . Suppose that f has a Fourier expansion of the form (4.1). It is clear from (4.2) and symmetry that the central value L (1) is zero and the rth central derivative is zero, if r is even, and 2πn ∗ (r) r (L ) (1) = 2r! a (n)E √ , n>0 if r is odd. Here r −zt r −s E (z) = e (log t) t dt r! is (−1) /r!times the rth derivative of E (z) with respect to s. It is initially defined for (z) > 0 and can be extended to H ∪ R via (5.4)and (5.2) below. Using integration by <0 r 1 r−1 parts, it can be shown that E (z) = E (z), which leads to the expression 0 z 1 N 1 2πn ∗ (r) r−1 (L ) (1) = r! a(n) E √ . (5.1) f 1 π n n>0 This expression was first obtained by Buhler, Gross and Zagier in [8], where the authors used the following expression to evaluate E (z) for any m ≥ 1and z > 0 n−m−1 (−1) m n E (z) = G = P (− log z) + z . (5.2) m+1 m+1 m+1 n n! n≥1 Here, P (x) is a polynomial of degree r and if we write (1 + z) = γ z then r n n≥0 P (t) = γ . r r−j j! j=0 Extending this method to weights k ≥ 4 and weakly holomorphic modular forms is immediate. If f ∈ S (N) has Fourier expansion at infinity of the form (2.1) then the analogue of (4.2)is(2.9). Upon differentiating (2.9) r times with respect to s and setting s = k/2leads to 2πn (r) r (f, k/2) = r! a (n)E √ , (5.3) 1−k/2 n≥−n n =0 ∗ (m) k+2m (m) where we note that for a holomorphic f we have (L ) (k/2) = (1 + i ) (f, k/2). It follows that we need to evaluate E where n = k/2 − 1. To compare the complexity of −n these computations with the weight 2 case, we note that Milgram [16, (2.22)] showed that n−m m (n + 1) z m −z m l−1 m−l E (z) = e ξ + ξ E (z) , (5.4) −n l,n 0,n 1 n+1 z l! l=0 l=1 where ξ are constants independent of z and can be precomputed. Using this together l,n with (5.2), it follows that the computation essentially reduces to that of a finite sum of polynomials and an infinite rapidly convergent sum. 64 Page 10 of 14 Diamantis, Strömberg Res Math Sci (2022) 9:64 It is also worth to mention here that the general algorithm to compute values and derivatives of Motivic L-functions introduced by Dokchitser in [12] and implemented in PARI/GP [19], essentially reduces to that described above in the case of holomorphic modular forms. Furthermore, in both [8]and [12] the authors make additional use of asymptotic expansions to speed up computations of E (z) for large z. −n 5.1 The new integral formula Let f ∈ S ( (N)) be a weakly holomorphic cusp form of even integral weight k and that satisfies f | W = f . Then, Theorem 1.1 implies that m i (m) 2m−k/2 k/4 j (f, k/2) = i N log √ j=0 i/ N +1 (m−j) × f (z)ζ 1 − ,z dz, i/ N where (f, s)isdefinedin(1.5). When computing these values, it is clear that the main CPU time is spent on computing integrals of the form √ √ (r) I (f ) = f (x + i/ N)ζ 1 − k/2,x + i/ N dx, 0 ≤ r ≤ m. The cusp form f is given in terms of the Fourier expansion (2.1) for some n ≥ 0. To −D evaluate f (x + i/ N) up to a precision of ε = 10 for all x ∈ [0, 1], we can truncate the Fourier series at some integer M > 0. The precise choice of M depends on the available coefficient bounds. In case f is holomorphic then Deligne's bound can be used to show that we can choose M such that √ √ √ M > c k N log M + N(c D + c log( N(k/2)!)) + c 1 2 3 4 for some explicit positive constants c ,c ,c and c , independent of N, D and k. However, 1 2 3 4 if f is not holomorphic then we only have the non-explicit bound (2.8)and M must satisfy √ √ √ M > c N M + c ND + c N log N, 1 2 3 where c ,c ,c and c are positive constants that depend on f and can be computed in 1 2 3 4 special cases using Poincaré series. In both cases we From both inequalities above it is clear that as the level or weight increases we need a larger number of coefficients, which increases the number of arithmetic operations needed. Note that the working precision might also need to be increased due to cancellation errors. To evaluate the Hurwitz zeta function and its derivatives, it is possible to use, for instance, the Euler–Maclaurin formula M−1 1−s 1 (z + M) ζ (s, z) = + (n + z) s − 1 n=0 1 1 B (s) 2l 2l−1 + + + Err(M, L), 2l−1 (z + M) 2 (2l!) (z + M) l=1 where M, L ≥ 1 and where the error term Err(M, L) can be explicitly bounded. For more details, including proof and analysis of rigorous error bounds and choice of parameters, (r) see [15], where the generalisation to derivatives ζ (s, z) is also included. In our case, s = 1 − k/2and z = x + i/ N with 0 ≤ x ≤ 1. It is easy to use Theorem 1 of [15] to show that if M > 1and L > k/4 then 2k 2M |(1 − k/2) | 2L Err(M, L) ≤ , 2L (2πM) L − k/4 Diamantis, Strömberg Res Math Sci (2022) 9:64 Page 11 of 14 64 where (s) = s(s + 1) ··· (s + m − 1) is the usual Pochhammer symbol. Furthermore, if the right-hand side above is denoted by B then it can be shown that the error in the Euler–Maclaurin formula for the rth derivative can be bounded by B · r! log(8(M + 1)) . In [15], it is observed that to obtain D digits of precision we should choose M ∼ L ∼ D, meaning that the number of terms in both sums is proportional to D. It is also clear that as k or r increases we will need larger values of M and L. Example 5.1 Consider f ∈ S (37) and standard double precision, i.e. 53 bits or 15 (deci- √ √ (r) mal) digits. Then, a single evaluation of f (x + i/ 37) takes 271μs while ζ (0,x + i/ 37) takes 2μs, 114μs, 124μs, 171μs for r = 1, 2, 3 and 20, respectively. 5.2 Comments on the implementation There are a few simple optimisations that can be applied immediately to decrease the number of necessary function evaluations. √ √ • Replace the sum of integrals by f (x + i/ N)Z (x + i/ N)dx, where m i k j (m−j) Z (z) = log ζ 1 − ,z . j 2 j=0 √ √ •If f (z) has real Fourier coefficients then f (1 − x + i/ N) = f (x + i/ N), which is very useful as we can choose the numerical integration method with nodes that are symmetric with respect to x = 1/2. (r) • If we need to compute (f, k/2) for a sequence of rs, then function values of f and (j) lower derivatives ζ can be cached in each step provided that the we use the same nodes for the numerical integration. As the main goal of this paper is to present a new formula and not to present an opti- mised efficient algorithm as such, we have implemented all algorithms in SageMath using the mpmath Python library for the Hurwitz zeta function evaluations as well as for the numerical integration using Gauss–Legendre quadrature. The implementation used to calculate the examples below can be found in a Jupyter notebook which is available from [20]. 5.3 Examples of holomorphic forms To demonstrate the veracity of the formulas in this paper, we first present a comparison of results and indicative timings between the new formula in this paper and Dokchitser's algorithm in PARI (interfaced through SageMath). Table 1 includes three holomorphic cusp forms 37.2.a.a, 127.4.a.a and 5077.2.a.a, labelled accordingtothe LMFDB[18]. These are all invariant under the Fricke involution and it is known that the analytic ranks are 1, 2 and 3, respectively. The last column gives the difference between the values computed by Dokchitser's algorithm and the integral formula. As the level increases, we find that f (x + i/ N) oscillates more and more and it is necessary to increase the degree of the Legendre polynomials used in the Gauss–Legendre quadrature. The comparison of timings in Table 1 indicates that our new formula is slower than Dokchitser's algorithm but it is important to keep in mind the latter is implemented in the PARI C library and is compiled while our formula is simply implemented directly in 64 Page 12 of 14 Diamantis, Strömberg Res Math Sci (2022) 9:64 ∗ (r) Table 1 Central derivatives (L ) (k/2) for f ∈ S ( (N)) k 0 Nk Label r Dokchitser/PARI Time (ms) Integral formula Time (ms) Error −17 37 2 37.2.a.a 1 0.296238908699801 18 0.2962389086998011 49 6 × 10 −10 127 4 127.4.a.a 2 7.83323138624802 42 7.8332313863855996+ 186 1 × 10 −11 5077 4 5077.2.a.a 3 117.837959237940 212 117.83795923792273+ 2000 2 × 10 SageMath using the mpmath Python library. All CPU times presented below are obtained on a 2GHz Intel Xeon Quad Core and we stress that the times should not be taken as absolute performance measures but simply to provide comparisons between different input and parameter values. 5.4 Examples of weakly holomorphic modular forms To construct weakly modular cusp forms, we use the Dedekind eta functions η(τ) = q 1 − q . ( ) n≥1 If we define + 8 2 3 4 5 (τ) = (η(τ)η(2τ)) = q − 8q + 12q + 64q + O(q ) and + 24 12 24 j (τ) = (η(τ)/η(2τ)) + 24 + 2 (η(2τ)/η(τ)) −1 2 3 4 = q + 4372q + 96256q + 1240002q + O(q ) + + then it can be shown that ∈ S ( (2)) and j ∈ S ( (2)) are both invariant under the 8 0 0 2 2 0 Fricke involution W . The following holomorphic and weakly holomorphic modular forms of weight 16 on (2) were introduced by Choi and Kim [9] to study weakly holomorphic Hecke eigenforms. + 2 2 3 4 f (τ) = (τ) = q − 16q + O(q ) 16,−2 + 2 + 3 4 f (τ) = (τ) (j (τ) + 16) = q + 4204q + O(q ) 16,−1 2 2 + 2 + 2 + 3 4 f (τ) = (τ) (j (τ) + 16j (τ) − 8576) = 1 + 261120q + O(q ) 16,0 2 2 2 + 2 + 3 + 2 + f (τ) = (τ) (j (τ) + 16j (τ) − 12948j (τ) − 427328) 16,1 2 2 2 2 −1 3 4 = q + 7525650q + O(q ) + 2 + 4 + 3 + 2 + f (τ) = (τ) (j (τ) + 16j (τ) − 17320j (τ) − 593536j (τ) − 27188524) 16,2 2 2 2 2 2 −2 3 4 = q + 140479808q + O(q ) and it is easy to see that all of these functions are also invariant under W . Furthermore, f ,f ∈ S ( (2)) and f ,f ∈ S ( (2)) while f is not cuspidal. 16,−2 16,−1 16 0 16,1 16,2 0 16,0 To check the accuracy of our formula in this setting, we first consider the holomorphic cusp forms. Observe that the unique newform of level 2 and weight 16 is 2 3 4 5 6 f (τ) = q − 128q + 6252q + 16384q + 90510q + O(q ) = f − 128f . 16,−1 16,−2 Using Dokchitser's algorithm, we find that L (8) = 0.0526855929956408, while using the integral formula with 53 bits precision, we obtain ∗ −20 L (8) = 0.00008045589767063483 + 6 · 10 i, 16,−2 ∗ −17 L (8) = 0.06298394789748197609 + 3 · 10 i, 16,−1 Diamantis, Strömberg Res Math Sci (2022) 9:64 Page 13 of 14 64 (r) Table 2 (f , 8) computed using the integral formula with 16,i 103 bits precision (r) ir (f , 8) T/ms Err. 16,i −31 −30 10 −0.2035186511755524285671725692737 + 1 × 10 204 6 × 10 −30 111.1597162067012225517004253561026 − 0.104294509255933530762675132394i 975 9 × 10 −30 12 −0.3329012203856171470128799683152 − 0.109371149169408369683239573058i 1790 7 × 10 −30 −27 20 −1.8934024663352144735029014555039 + 1 × 10 209 1 × 10 −28 2155.394013302380372465449909213930 − 0.000407400426780990354541699709i 996 2 × 10 −28 22 −0.1484917546377626240694524994979 + 0.000137545862921322355701592298i 1880 1 × 10 (r) Table 3 (f , 8) computed using the sum with 103 bits 16,i precision (r) ir (f , 8) T/ms Err. 16,i −17 10 −0.20351865117555238 10 4 × 10 3 −15 111.15971620670121522423 − 0.104294509255934i 11 × 10 8 × 10 3 −15 12 −0.33290122038562486306 − 0.109371149169408i 21 × 10 8 × 10 −14 20 −1.89340246633520092878 11 2 × 10 3 −14 2155.3940133023803440437 − 0.000407400426780990i 14 × 10 4 × 10 3 −14 22 −0.14849175463777442019 + 0.000137545862921322i 26 × 10 2 × 10 and ∗ ∗ −17 L (8) − 128L (8) = 0.05268559299564071785 + 2 · 10 i, f f 16,−1 16,−2 which agrees with the value of L (8) above. (r) Table 2 gives the values of (f , 8) for the weakly holomorphic modular forms 16,i f and f , computed using the integral formula with 103 bits working precision. The 16,1 16,2 table contains an indication of timings as well as a heuristic error estimate based on a comparison with the same value computed using 203 bits precision. To provide some independent verification of the algorithm in the case of weakly mod- ular forms, we also implemented the generalisation of the algorithm from [8]using (5.3) directly with E evaluated using (5.4)and (5.2). The main obstacle with the algorithm 1−k/2 modelled on [8] is that the infinite sum in (5.2) suffers from catastrophic cancellation for large z unless the working precision is temporarily increased within the sum. The (r) corresponding values of (f , 8) computed using the algorithm with 103 bits starting 16,i precision are given in Table 3 where we also give the corresponding timings as well as an error estimate based on comparison with values in Table 2. Acknowledgements We thank the referees for their insightful comments and helpful suggestions. We also thank D. Goldfeld for helpful and encouraging comments on the manuscript. Part of the work was done while the first author was visiting Max Planck Institute for Mathematics in Bonn, whose hospitality he acknowledges. Research on this work is partially supported by the authors' EPSRC Grants (ND: EP/S032460/1 FS: EP/V026321/1). Data Availability Statement All data generated and analysed during this study are included in this published article. Further data can be obtained by using the program available at [20] with different input parameters. Received: 21 September 2022 Accepted: 10 October 2022 Published online: 26 October 2022 References 1. Borcherds, R.: Automorphic forms with singularities on Grassmannians. Invent. Math. 132, 491–562 (1998) 2. Bringmann, K., Folsom, A., Ono, K., Rolen, L.: Harmonic Maass Forms and Mock Modular Forms: Theory and Applications, American Mathematical Society Colloquium Publications, vol. 64. American Mathematical Society, Providence (2017) 64 Page 14 of 14 Diamantis, Strömberg Res Math Sci (2022) 9:64 3. Bringmann, K., Fricke, K.H., Kent, Z.: Special L-values and periods of weakly holomorphic modular forms. Proc. Am. Math. Soc. 142(10), 3425–3439 (2014) 4. Bringmann, K., Ono, K.: The f (q) mock theta function conjecture and partition ranks. Invent. Math. 165(2), 243–266 (2006) 5. Bruinier, J., Funke, J.: On two geometric theta lifts. Duke Math. J. 125(1), 45–90 (2004) 6. Bruinier, J., Funke, J., Imamoglu, Ö.: Regularized theta liftings and periods of modular functions. J. Reine Angew. Math. 703, 43–93 (2015) 7. Bruinier, J., Ono, K.: Heegner divisors, L-functions and harmonic weak Maass forms. Ann. Math. (2) 172(3), 2135–2181 (2010) 8. Buhler, J.P., Gross, B.H., Zagier, D.B.: On the conjecture of Birch and Swinnerton-Dyer for an elliptic curve of rank 3. Math. Comput. 44(170), 473–481 (1985) 9. Choi, S.-Y., Kim, C.-H.: Weakly holomorphic Hecke eigenforms and Hecke eigenpolynomials. Adv. Math. 290, 144–162 (2016) 10. Diamantis, N., Rolen, L.: L-values of harmonic Maass forms (submitted). arXiv:2201.10193 11. Diamantis, N., Lee, M., Raji, W., Rolen, L.: L-series of harmonic Maass forms and a summation formula for harmonic lifts (submitted). arXiv:2107.12366 12. Dokchitser, T.: Computing special values of motivic L-functions. Exp. Math. 13(2), 137–149 (2004) 13. Duke, W., Imamoglu, Ö., Tóth, A.: Cycle integrals of the j-functions and mock modular forms. Ann. Math. (2) 173, 947–981 (2011) 14. Erdelyi, A., Magnus, W., Oberhettinger, F., Tricomi, F.G.: (The Bateman Manuscript Project), Higher Transcendental Functions, vol. I. McGraw-Hill, New York (1953) 15. Johansson, F.: Rigorous high-precision computation of the Hurwitz zeta function and its derivatives. Numer. Algo- rithms 69(2), 253–270 (2015) 16. Milgram, M.S.: The generalized integro-exponential function. Math. Comput. 44(170), 443–458 (1985) 17. Olver, F., Lozier, D., Boisvert, R., Clark, C. (eds.): NIST Handbook of Mathematical Functions. U.S. Department of Commerce, National Institute of Standards and Technology, Washington, DC (2004) 18. The LMFDB Collaboration. The L-functions and modular forms database. http://www.lmfdb.org (2022). Accessed 18 Sept 2022 19. The PARI Group, Univ. Bordeaux. PARI/GP version 2.13.4. http://pari.math.u-bordeaux.fr/ (2022) 20. Strömberg, F.: Algorithms and examples for derivatives of L-series available from https://github.com/fredstro/ derivatives_lseries 21. Zwegers, S. Mock.: θ-functions and real analytic modular forms in "q-Series with Applications to Combinatorics, Number Theory, and Physics" (Urbana: Contemporary Mathematics, 291, American Mathematical Society, Providence, 2001, 269–277 (2000) Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Research in the Mathematical Sciences – Springer Journals
Published: Dec 1, 2022
Automorphic forms with singularities on Grassmannians
Borcherds, R
Special L-values and periods of weakly holomorphic modular forms
Bringmann, K; Fricke, KH; Kent, Z
The f(q)\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$f(q)$$\end{document} mock theta function conjecture and partition ranks
Bringmann, K; Ono, K
On two geometric theta lifts
Bruinier, J; Funke, J
Regularized theta liftings and periods of modular functions
Bruinier, J; Funke, J; Imamoglu, Ö
Heegner divisors, L-functions and harmonic weak Maass forms
Bruinier, J; Ono, K
On the conjecture of Birch and Swinnerton-Dyer for an elliptic curve of rank 3
Buhler, JP; Gross, BH; Zagier, DB
Weakly holomorphic Hecke eigenforms and Hecke eigenpolynomials
Choi, S-Y; Kim, C-H
Computing special values of motivic L\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$L$$\end{document}-functions
Dokchitser, T
Cycle integrals of the j\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$j$$\end{document}-functions and mock modular forms
Duke, W; Imamoglu, Ö; Tóth, A
Rigorous high-precision computation of the Hurwitz zeta function and its derivatives
Johansson, F
The generalized integro-exponential function
Milgram, MS
https://www.deepdyve.com/lp/springer-journals/derivatives-of-l-series-of-weakly-holomorphic-cusp-forms-6jMlFa5090?utm_source=freeShare&utm_medium=link&utm_campaign=freeShare
Diamantis, N., & Strömberg, F. (2022). Derivatives of L-series of weakly holomorphic cusp forms. Research in the Mathematical Sciences, 9(4),
Diamantis, Nikolaos, and Fredrik Strömberg. "Derivatives of L-series of weakly holomorphic cusp forms." Research in the Mathematical Sciences 9.4 (2022).
|
CommonCrawl
|
Home News USENIX ATC 2019: A retargetable system-level DBT hypervisor, an I/O scheduler for...
USENIX ATC 2019: A retargetable system-level DBT hypervisor, an I/O scheduler for LSM KVs, and more
August 1, 2019 - 2:36 am
Last month, at USENIX ATC 2019, many systems researchers presented their work on topics including real-world deployed systems, runtimes, big-data programming models, security, virtualization, and much more. This year it happened from 10-12 July at Renton, WA, USA.
The USENIX Annual Technical Conference (ATC) is considered to be one of the most prestigious systems research conferences. It covers all practical facets of systems software and aims to improve and further the knowledge of computing systems of all scales. Along with providing a platform to showcase cutting-edge systems research, it also allows researchers to gain insight into fields like virtualization, system management and troubleshooting, cloud and edge computing, security, and more.
Here are some of the remarkable papers presented at this event:
Captive – a retargetable system-level DBT hypervisor
By: Tom Spink, Harry Wagstaff, and Björn Franke from the University of Edinburgh
Why Captive is needed
To boot an operating system and execute programs compiled for an Instruction Set Architecture (ISA) other than the host machine, system-level Dynamic Binary Translation (DBT) is used. DBT is a process of translating code for one ISA to another on the fly. Due to their performance-critical nature, DBT frameworks are generally hardcoded and heavily optimized for both their guest and host ISAs. Though this ensures performance gains, it poses high engineering costs for supporting a new architecture or extending an existing one.
How Captive works
The researchers have devised a novel, retargetable system-level DBT hypervisor called Captive. It includes guest specific modules generated from high-level guest machine specifications, which simplifies retargeting of the DBT and relieves users from low-level implementation effort.
Captive enforces aggressive optimizations by combining the offline optimizations of the architecture model with online optimizations performed within the generated Just-In-Time compiler. It reduces the compilation overheads while providing high code quality. Additionally, it operates in a virtual bare-metal environment provided by a VM hypervisor. This allows you to fully exploit the underlying host architecture, especially the system-related and privileged features not accessible to other DBT systems operating as user processes.
Here's a diagram depicting how it works:
Source: Usenix ATC
The researchers evaluated the DBT based on both targeted micro-benchmarks and standard application benchmarks. They also compared it with the de-facto standard Qemu DBT system. The evaluation revealed that Captive delivers an average speedup of 2.21x over Qemu across SPEC CPU2006 integer benchmarks. In the case of floating-point applications, it shows further speedup reaching a 6.49x average. It also significantly reduces the effort required to support a new ISA, while delivering outstanding performance.
To know more about Captive, check out this USENIX ATC '19 lightning talk by the authors:
SILK – a new open-source key-value store derived from RocksDB, designed to prevent latency spikes
By: Oana Balmau, Florin Dinu, and Willy Zwaenepoel, University of Sydney; Karan Gupta and Ravishankar Chandhiramoorthi, Nutanix, Inc.; Diego Didona, IBM Research–Zurich
Why SILK is needed
Latency-critical applications demand data platforms that can provide low latency and predictable throughput. Log-structured merge key-value stores (LSM KVs) were designed to handle such write-heavy workloads and large scale data where working set does not fit in the main memory. Some of the common LSM KVs are RocksDB, LevelD, and Cassandra that are widely adopted in production environments and claim to optimize the heavy workload. Despite these claims, the researchers show that tail latencies in state-of-the-art LSM KVs can be quite poor, particularly in the case of heavy and variable client write loads.
How SILK works
To address the aforementioned limitations, the researchers have come up with the notion of an I/O scheduler for LSM KVs, which aims to achieve the following three goals:
Opportunistically allocating I/O bandwidth to internal operations
Prioritizing internal operations at the lower levels of the tree.
Preempting compactions.
This notion of I/O scheduler is implemented in SILK, a new open-source KV store derived from RocksDB. It is designed to prevent client request latency spikes. It uses this I/O scheduler to manage external client load and internal LSM maintenance work. It was tested on a production workload and synthetic benchmarks and was able to achieve up to two orders of magnitude lower 99th percentile latencies than RocksDB and TRIAD.
To know more about SILK, check out this USENIX ATC '19 lightning talk by the authors:
Transactuations and its implementation, Relacs for building reliable IoT applications
By: Aritra Sengupta, Tanakorn Leesatapornwongsa, and Masoud Saeida Ardekani, Samsung Research; Cesar A. Stuardo, University of Chicago
Why transactuations and the Relacs runtime are needed
IoT applications are responsible for reading sensors, executing application logic, and taking action with actuators accordingly. One of the challenges developers face while building an IoT application is ensuring its correctness and reliability. Though current solutions do offer simple abstractions for reading and actuating, they lack high-level abstractions for writing reliable and fault-tolerant applications. Not properly handling failures can lead to inconsistencies between the physical and application state.
How transactuations and Relacs work
In this paper, the researchers introduced "transactuations", which are similar to database transactions. These abstract the complexity of handling various failures and make it easy to maintain soft states so that they are consistent with respect to reads and writes to hard states. You need to specify dependencies among operations on soft and hard states along with sensing or actuating policy that specifies the conditions under which soft states can commit despite failures. Transactuation will then be responsible for preserving this dependence even in cases of hardware and communication failures and ensure isolation among transactuations that are executing concurrently.
The researchers have also introduced Relacs, a runtime system that implements the abstraction for a smart home platform. It first transforms an application into a serverless function and executes the application in the cloud while enforcing transactuation specific semantics.
The researchers further showed that transactuations are an effective solution for building reliable IoT applications. Using them also significantly reduces lines of code compared to manually handling failures. The Relacs runtime also guarantees reliable execution of transactuations while imposing reasonable overheads over a baseline that does not provide consistency between operations on hard states and soft states.
To know more about transactuations, check out this USENIX ATC '19 lightning talk by the authors:
Browsix-Wasm – Run unmodified WebAssembly-compiled Unix applications inside the browser in a performant way
By: Abhinav Jangda, Bobby Powers, Emery D. Berger, and Arjun Guha, University of Massachusetts Amherst
Why Browsix-Wasm is needed
Major browsers today including Mozilla, Chrome, Safari, and Edge support WebAssembly, a small binary format that promises to bring near-native performance to the web. It serves as a portable compilation target for high-level languages like C, C++, and Rust.
One of the key goals of WebAssembly is performance parity with native code. The paper that introduced WebAssembly showed that its performance is competitive with native code. However, the evaluation was limited to a suite of scientific kernels rather than full applications with each consisting of about 100 lines of code.
The researchers conducted a comprehensive performance analysis using the established SPEC CPU benchmark suite of large programs. However, using such suites also pose a challenge, that is, currently it is not possible to simply compile a sophisticated native program to WebAssembly. These need operating system services such as a filesystem, synchronous I/O, and processes, which WebAssembly and the browser do not provide.
How Browsix-Wasm works
As a solution to this challenge, the researchers have built Browsix-Wasm, an extension to Browsix that allows running unmodified WebAssembly-compiled Unix applications directly inside the browser. They used Browsix-Wasm to conduct the very first large-scale evaluation of the performance of WebAssembly vs. native.
The evaluation results show a substantial performance gap across the SPEC CPU suite of benchmarks. The applications compiled to WebAssembly were slower by an average of 45% (Firefox) to 55% (Chrome), with peak slowdowns of 2.08$\times$ (Firefox) and 2.5$\times$ (Chrome). Some of the reasons behind this performance degradation were missing optimizations and code generation issues.
Here's a chart showing the comparison between the performance analysis done on the basis of PolyBenchC (previous work) and SPEC CPU benchmarks.
To know more about Browsix-Wasm, check out this USENIX ATC '19 lightning talk by the authors:
Zanzibar – Google's global authorization system
By: Ruoming Pang, Ramon Caceres, Mike Burrows, Zhifeng Chen, Pratik Dave, Nathan Germer, Alexander Golynski, Kevin Graney, and Nina Kang, Jeffrey L. Korn, Christopher D. Richards and Mengzhi Wang, Google; Lea Kissner, Humu, Inc.; Abhishek Parmar, Carbon, Inc.
Why Zanzibar is needed
Every day online interactions involve the exchange of a lot of personal information. These interactions are authorized to ensure that a user has permission to perform an operation on a digital object. For instance, we have several web-based photo storage services that allow users to share a few photos with friends while keeping other photos private. These services must have checks in place to ensure that photos are shared before another user can view them. There are already many ways of authorization and developers constantly work on making them more robust to guarantee online privacy.
How Zanzibar works
The researchers have come up with Zanzibar, a system that allows you to store permissions and perform authorization checks based on the stored permissions. Many Google services use it including Calendar, Cloud, Drive, Maps, Photos, and YouTube.
Zanzibar takes up two roles:
A storage system for Access Control Lists (ACLs) and groups.
An authorization engine that interprets permissions.
It provides a uniform data model and language to define a wide range of access control policies. While making authorization decisions it takes into account the causal ordering of user actions to provide external consistency amid changes to access control lists and object contents.
Here's a diagram depicting its architecture:
Zanzibar is scalable to trillions of access control lists and millions of authorization requests per second to support services. In more than 3 years of production use at Google, it has maintained 95th-percentile latency of less than 10 milliseconds and availability of greater than 99.999%.
To know more about Zanzibar, check out this USENIX ATC '19 lightning talk by the authors:
These were some of the papers presented at USENIX ATC 2019. You can find other research papers on its official website.
International cybercriminals exploited Citrix internal systems for six months using password spraying technique
Google Cloud and Nvidia Tesla set new AI training records with MLPerf benchmark results
"Don't break your users and create a community culture", says Linus Torvalds, Creator of Linux, at KubeCon + CloudNativeCon + Open Source Summit China 2019
|
CommonCrawl
|
A conceptual design for deflection device in VTDP system
Yongwei Gao ORCID: orcid.org/0000-0001-7026-92731,
Jianming Zhang1,
Long Wang1,
Bingzhen Chen1 &
Binbin Wei1
This article has been updated
The effectiveness of the Vectored Thrust Ducted Propeller (VTDP) system is not high currently, especially the lateral force is not large enough. Thus, a conceptual design for a deflection device of a VTDP system was proposed to achieve effective hovering control. The magnitude of the lateral force that was applied to maintain balance while hovering was examined. A comparison between the experimental and numerical results for the 16H-1 was made to verify the numerical simulation approach. The deflection devices of the X-49 and the proposed design were analyzed using numerical simulations. The results indicated that a larger lateral force and lower power consumption were presented in the proposed design. The results of this article provide a new idea for the design of the VTDP system.
The hovering experiment and simulations of 16H-1 have been conducted.
A comparison between the experiments and calculated results was performed to verify the numerical simulation approach.
A conceptual design for a deflection device of a Vectored Thrust Ducted Propeller (VTDP) system was proposed.
The calculated results indicate that the proposed design provided a larger lateral force and lower power consumption.
A traditional helicopter with a tail rotor is limited in its forward flight speed due to the compressibility effect of the advancing blades and flow separation of the retreating blades. The level-flight cruise speed for conventional helicopters is approximately 150 kts. To further enhance the cruise speed, vectored thrust ducted propeller (VTDP) technology was proposed [1,2,3,4]. The VTDP system is installed on the aft end of the fuselage and replaces the conventional tail rotor. The VTDP system usually includes a ducted propeller [5, 6], and flow deflection devices. The flow deflection devices can be vertical vanes or other mechanisms. In addition to providing anti-torque and yaw control, the VTDP can provide extra forward thrust and trim control [7, 8]. A compound helicopter with VTDP technology has a higher forward flight speed, better controllability and agility and larger payload capabilities.
The Piasecki 16H and Piasecki X-49 use VTDP systems [2]. The Piasecki 16H was a series of compound helicopters produced in the 1960s. The first version of the Pathfinder, the 16H-1 version, first flew in 1962. A similar but larger Pathfinder II, the 16H-1A, was completed in 1965. The Piasecki X-49 "SpeedHawk" is an American four-bladed, twin-engine, experimental, high-speed compound helicopter under development by Piasecki Aircraft.
Similar deflection devices for the 16H-1 and X-49A are displayed in Figs. 1 and 2, respectively. The 16H-1 uses vertical vanes as deflection devices, and the X-49A uses a swerving sector (similar to a semi-spherical shell) as a deflection device. During flight, the vanes do not undergo deflection or sector withdrawal. While hovering or yawing, the vanes undergo deflection or sector extension to provide the proper torque.
VTDP system for 16H-1
VTDP system for X-49A
Hovering for VTDP systems poses more restrictions than forward flight in that hovering state should provide a larger lateral force and a smaller axial force. Different forms of the flow deflection devices are intended to have different effects.
Considering that the effectiveness of the Vectored Thrust Ducted Propeller (VTDP) system is not high currently, especially the lateral force is not large enough, an alternative form of the flow deflection device was proposed in this article. To verify the numerical simulations, the wind tunnel experiments on a ducted propeller and the VTDP system of the 16H-1 were carried out. After validating the simulation results, simulations were conducted on the VTDP of the X-49 and the proposed conceptual VTDP. The simulated results showed that the proposed flow deflection device was superior to the VTDP systems of the 16H-1 and X-49.
2 Computational model validation
Two examples, which were a ducted propeller and the VTDP system of the 16H-1, were used to verify the numerical simulation approach used in this article.
2.1 Experimental equipment and methods
The wind tunnel tests of the two models of the ducted propeller and the VTDP system of the 16H-1 were carried out in the NF-3 wind tunnel at Northwestern Polytechnical University, Xi'an, China.
The ducted propeller model was firstly used to verify the numerical simulation approach. An optical sensor was used to measure the rotor speed, and the voltage and current were also measured. The experimental model is shown in Figs. 3 and 4. The tip clearance ratio is defined as follows:
$$ \delta =\frac{\Delta}{R} $$
where Δ is the blade tip clearance, R is the inner radius of the duct, and δ is the tip clearance ratio. The inner diameter of the duct was denoted as D, and tip clearance ratio δ was 0.91%.
Propeller model
Model of ducted fan experiment
Another experimental model which was similar to the VTDP of the 16H-1 was also used to verify the numerical simulation approach. The experiment was performed on the ground. The slide and rear views are displayed in Figs. 5 and 6, respectively. The detailed model dimensions are listed in Table 1.
Table 1 Main parameters
The model was set on an experiment table, which contained a six-component balance system connected to the model. An electric motor was connected to the propeller by a transmission shaft. The propeller was driven by the electric motor with a power rating of 100 kW. Because the hovering state was of interest, there was no free stream. While testing, the rotation speeds of the propeller and deflection angles of the vanes were under control. The axial and lateral forces were measured by a balance system. The power of the VTDP was calculated using the electrical current and voltage measurements, and the calibration was carried out before the experiment. The purpose of the experiments was to provide data to compare with the numerical simulations to verify their reliability. After the simulations were complete, a comparison with the experiments was performed.
2.2 Calculation method and mesh generation
The numerical simulation was performed using ANSYS CFX commercial software. The steady Reynolds-averaged Navier–Stokes (RANS) equations were used to carry out the numerical study. The two-equation SST model was used to simulate the full turbulence flow around the model, and the dimensionless wall spacing y+ < 1 for all the walls. In the interface of rotation domain and static domain, the general connection interface model of Frozen-Rotor was used.
Figure 7 shows the computational model of ducted propeller. An unstructured grid was used for the calculations. Multiple coordinate systems were established to simulate the relative motion between the propeller and duct. Accordingly, the computational domain was divided into two sub-domains: a rotating domain and a stationary domain [9]. The propeller was in the rotating domain, and the duct was in the stationary domain. The two domains were used to generate computational meshes, as shown in Fig. 8.
Computational model
Mesh in the rotating domain
Considering the periodicity of the blade rotation and the symmetry of the duct, the stationary surfaces and rotating region were divided into four segments, and only one of them was used in the numerical calculations. The mesh was properly refined along the leading edge of the propeller, the lip of the duct, and the clearance of the tip, as shown in Figs. 8 and 9. In this example, 15 prismatic grids were arranged on the surface of the propeller, and the grid number in the rotation domain was about 7 million, and that in the static domain was about 8 million.
Mesh in the static domain
And the unstructured computational grid of 16H-1 model is shown in Fig. 10. In this example, the grid number of the rotating domain was 13 million, and that of the static domain was about 27 million.
Horizontal central section meshes
2.3 Comparison of experimental and calculated results
For the ducted propeller, the experimental results were in good agreement with the calculated results as shown in Figs. 11 and 12.
Thrust comparison
Power comparison
For the model of 16H-1, the deflection angle Φ of the vertical vanes was fixed at 20°, and the deflection angle of the horizontal vanes was kept at 0° in both the experiments and numerical simulations. The rotation speed of the propeller was adjusted. The comparisons of the axial force, lateral force, and power of the 16H-1 from the experiments and the numerical simulation are shown below.
Figure 13 displays the comparison of the axial force for different rotation speeds. The results from the first and second experiments, which are, respectively, labeled as "Experiment 1" and "Experiment 2" in the legend, were similar. The calculated axial force was underestimated by less than 10%. However, the trend was similar to that of the experiment. Figure 14 shows the comparison of the lateral forces for different rotation speeds. The lateral force was slightly overestimated. Nevertheless, the overall trend was the same.
Comparison of axial forces
Comparison of lateral force
Figure 15 shows a comparison of the power for different rotation speeds. The power curves for the two experiments and numerical calculations coincided.
Comparison of power between experiments and calculations
These comparisons demonstrated that the numerical simulation has the potential to be used to estimate the VTDP, at least for the axial force, lateral force, and input power. Based on the comparison, the proposed simulation method can be used to further explore the different VTDP configurations.
3 Calculation results
A new conceptual design for the VTDP system was proposed in this section. Its aerodynamic performance was calculated using the numerical simulation approach verified in previous section. The calculated results were compared with those of 16H-1 and X49.
3.1 Results of 16H-1 and X49
In the simulations of 16H-1 discussed in the previous section, the deflection angle of the vertical vanes was fixed. In the simulations discussed in this section, the rotation speed was fixed at n = 6500 rpm, and the deflection angles were varied.
Figure 16 displays the axial force or thrust at different deflection angles. The black solid line represents the total axial force for the VTDP of the 16H-1, and the red, blue, and dark cyan colors represent the axial force components from the propeller, duct, and vertical vanes, respectively. With the increase in the deflection angle, the thrust of the propeller (red color) increased, the absolute value of the drag of vertical vanes (dark cyan color) increased and its direction was opposite to that of force produced by the propeller. Meanwhile, the thrust of duct (blue color) reduced. As a result, the whole thrust of the VTDP system decreased monotonically.
Thrust at different deflection angles
Figure 17 shows the lateral force at different deflection angles. The total lateral forces of the system and the vanes increased first and then decreased. The maximum forces occurred near Φ = 40°. The lateral force of the duct decreased with the increase in Φ.
Lateral force at different deflection angles
Figure 18 shows the power variations at different deflection angles. The maximum consumed power occurred at the same deflection angle as that of the maximum lateral force in Fig. 17.
Power at different deflection angles of vanes
The numerical simulation results for the X-49 were similar to those for the 16H-1. As shown in Fig. 19, the outer deflector completely opened, and the deflection angles of the vertical vane were 50° and 60°. The numerical results are shown in Table 2.
Mesh in the horizontal central section for X-49
Table 2 Numerical results for X-49
3.2 Conceptual Design for the Deflection System of VTDP
The conceptual design for deflection system is displayed in Fig. 20. In the conceptual design, the duct was prolonged, eliminating the horizontal and vertical vanes of the 16H-1. Two rotatable slices that were parts of the prolonged duct replaced the extra outer deflector in the X-49. As displayed in Fig. 21, the first slice rotated in an anticlockwise direction, and the second slice rotated in a clockwise direction. While operating, the two slices constituted a nozzle that was similar to a vectored thrust nozzle.
Conceptual deflection system of VTDP. a Front view b Right side view c Top view d 3D view
Conceptual deflection system of VTDP at operating conditions
Figure 22 shows the grid at the horizontal central section. Figures 23, 24 and 25 show the variations of the axial force, lateral force, and power with the deflection angle Ψ of the rotating slice. In these figures, "VTDP system" represents conceptual deflection system of VTDP as shown in Fig. 20, the "Propeller" represents the propeller in the VTDP system as shown in Fig. 20(a), the "Duct" represents the duct in the VTDP system as shown in Fig. 20, and the "First rotating slice" and the "Second rotating slice" represent the rotating slices in the VTDP system as shown in Fig. 21.
Axial force versus Ψ for conceptual design
Lateral force versus Ψ for conceptual design
Power versus Ψ for conceptual design
3.3 Comparison of the results of the conceptual design, 16H-1, and X-49
3.3.1 Force and power
The lateral forces between the three deflection systems are compared in Table 3 where only the maximum lateral forces for the 16H-1 and X-49 occurred at Φ = 40°and Φ = 50°, and the lateral forces for the proposed design at different deflection angles are presented. The maximum lateral force for the 16H-1 was smaller than those of the other two deflection systems. The lateral forces of the proposed design for deflection angles from 90° to 120° were comparable with the maximum force of the X-49, although the maximum lateral force of the proposed design at Ψ = 110° was a bit larger than that of the X-49.
Table 3 Comparison of calculation results of three configurations
For hovering, a larger lateral force and smaller axial force are preferable. The axial forces for the proposed design at deflection angles less than Ψ = 120° were smaller than that of the 16H-1 at Φ = 40° and X-49 at Φ = 50°, where these angles corresponded to the maximum lateral forces. This indicated that in a range of deflection angles for the proposed design, the axial force was always smaller than those of the other two systems.
The consumed power for the proposed design was the smallest. These comparisons demonstrated that the proposed design provided a high lateral force with smaller values of the axial force and consumed power.
3.3.2 Streamlines and pressure contours
The streamlines and pressure contours are displayed in Figs. 26, 27 and 28, providing further information for the proposed design. According to the principle of momentum conservation, the forces exerted on the VTDP, including the axial and lateral forces, are determined by the pressure on the VTDP, the airflow deflection, and the mass flux, which can be explained by Fig. 29.
Streamlines and pressure contours for 16H-1 at Φ = 40°. a Streamlines b Pressure contour
Streamlines and pressure contours for X-49 at Φ = 50°. a Streamlines b Pressure contour
Streamlines and pressure contours for present design at Ψ = 110°. a Streamlines b Pressure contour
Simplified flow in the VTDP
3.4 Further analysis
The axial velocity and pressure of the free stream were V0 and P0. The velocity increased and the pressure decreased as the airflow approached the propeller. The pressure before the propeller was P′. After the flow passed through the propeller, the pressure increased to P′ + ΔP, and the axial velocity increased to V1. The area of the propeller disk was A1, and the area of the outlet was A2. At the outlet, the velocity further increased, and the pressure decreased to P0. The flow from the outlet of the duct was assumed to be deflected completely. The average deflection angle of the flow was Φ, and the velocity was V2. According to the principle of momentum conservation, the overall axial force of the VTDP system was
$$ T=\dot{m}\left({V}_2\cos \varPhi -{V}_0\right)=\rho {A}_1{V}_1\left({V}_2\cos \varPhi -{V}_0\right) $$
and the overall lateral force was
$$ Z=\dot{m}{V}_2\sin \varPhi ={A}_1{V}_1{V}_2\sin \varPhi $$
Based on Eqs. 2 and 3, the lateral force of the VTDP was related closely to the mass flux through the duct and the deflection angle of the flow. To increase the lateral force of the VTDP, the mass flux through the duct and/or the deflection angle of the flow should be increased. However, the two variables are coupled, and increasing the deflection angle of the flow would increase the blockage effect, which would lead to a reduction in the mass flux. To obtain the maximum lateral force of the VTDP system, the designer should balance the two variables.
For the VTDP, it is assumed that the deflection angle of the vanes and the deflection angle of the airflow were consistent. From the continuity equation, we obtain the following:
$$ {A}_1{V}_1={A}_2{V}_2\cos \varPhi $$
For hovering, the incoming airflow velocity is zero. The thrust of the VTDP is
$$ T=\dot{m}{V}_2\cos \varPhi =\rho {A}_2{\left({V}_2\cos \varPhi \right)}^2 $$
and the lateral force is
$$ Z=\dot{m}{V}_2\sin \varPhi =0.5\rho {A}_2{V_2}^2\sin 2\varPhi $$
As indicated by the streamlines at the horizontal central section in Figs. 26, 27, 28 (a), the 16H-1 caused less airflow deflection than the X-49 and the proposed design. Thus, the 16H-1 yielded a greater axial force and a smaller lateral force, according to the principle of momentum conservation. The deflection devices installed in the X-49 and the proposed design were more effective than the vertical vanes design in 16H-1.
The pressure contours can provide an explanation for the superiority of the designs of the X-49 and the proposed configuration compared to that of the 16H-1. In the momentum analysis, it was assumed that the exit pressure recovered. However, this assumption was not supported by the pressure contours as shown in Figs. 26, 27, 28 (b). For the 16H-1, a high pressure was present on the left side, balancing the effect of the vertical vanes, which can be seen in Fig. 26(b). In contrast, high pressures were observed on the right side of the X-49 and the proposed design, providing a higher positive lateral force. It is evident that a positive lateral force was beneficial for hovering in the present case.
Another factor for effective hovering is the mass flow. To calculate the flow through a certain surface, two rectangular sections were used to calculate the axial and lateral flow respectively as shown in Fig. 30. In the calculation process, the reference areas of the three different configurations (16H-1, X-49, and the proposed design) were consistent. Table 4 shows the mass flows of the models. The proposed design had the largest mass flow.
Reference areas for mass flow calculation
Table 4 Mass flows for different deflection systems of VTDP
The flow analyses, with the aid of the momentum theorem, revealed that the proposed design caused larger flow deflection, a favorable high pressure, and a larger mass flow, which resulted in a larger total lateral force.
A deflection device for a VTDP system was designed conceptually in this study. Part of the inspiration for the present design was drawn from careful examination of the flow in the VTDP using the principle of momentum conservation, and another was from a comparison of two existing deflection devices: the 16H-1 and X-49. The proposed design aimed to achieve effective hovering control, and therefore, the lateral force was most important. Moreover, the power for the VTDP was taken into consideration. To verify the effectiveness of the proposed design, comparisons between the experiments and numerical simulations were made for the 16H-1. Similar numerical simulations for the X-49 and the proposed design were carried out. The numerical results indicated that the present design provided a larger lateral force with a lower power consumption.
5 Nomenclature
R Inner radius of the duct, mm
Δ Tip clearance ratio, -
Cp Pressure coefficient, -
Cx Force coefficient in the x direction, -
Cy Force coefficient in the y direction, -
n Rotate speed, rpm
Ψ Deflection angle for conceptual design, °
Φ Deflection angle for 16H-1 and X-49, °
Z Lateral force, N
T Axial force, N
W Power, kW
The files supporting the result of this article are available upon request.
Publication date has been updated to 21st January 2021.
VTDP:
Vectored thrust ducted propeller
SST:
Shear-stress transport
Wang H, Gao Z (2005) Research on the scheme of a high-speed helicopter. Flight Dynamics 23(1):38–42.
Edi P, Yusoff N, Yazid AA, Catur SK, Nurkas W, Suyono WA (2008) New design approach of compound helicopter. WSEAS Trans Appl Theoret Mechan 3(9):799–808
Liu K, Ye F (2015) Review and analysis of recent developments for VTOL vehicles. Adv Aeronaut Sci Eng 6(2):127–138,159. https://doi.org/10.16615/j.cnki.1674-8190.2015.02.004
Qiu Y (2008) X-49A "speed hawk" verification prototype. Weapon Equip 1:46–47
Pereira JL (2008) Hover and wind-tunnel testing of shrouded rotors for improved micro air vehicle design. Dissertation, University of Maryland
Xu J, Fan N (2008) Research status and structural design of ducted UAV. Aerodynamic Missile J 1:10–14,19. https://doi.org/10.16338/j.issn.1009-1319.2008.01.002
Yetter JA (1995) Why do airlines want and use thrust reversers? A compilation of airline industry responses to a survey regarding the use of thrust reversers on commercial transport airplanes. NASA Technical Memorandum 109158
Rao Q, Sheng M, Han T, Hu Z, Chen Y (2014) Research on engine thrust reverser. Sci Mosaic 2:91–94. https://doi.org/10.13838/j.cnki.kjgc.2014.02.015
Zhu Z (2008) Investigation on rotor/stator interface processing method and analysis on configuration and aerodynamic of turbine. Dissertation, Nanjing University of Aeronautics and Astronautics
We would like to thank to all the experimental staff of the NF-3 Wind Tunnel for their hard work.
No funding.
School of Aeronautics, Northwestern Polytechnical University, 127 West Youyi Road, Beilin District, Xi'an, Shaanxi, 710072, People's Republic of China
Yongwei Gao, Jianming Zhang, Long Wang, Bingzhen Chen & Binbin Wei
Yongwei Gao
Jianming Zhang
Long Wang
Bingzhen Chen
Binbin Wei
All authors have participated equally during the manuscript preparation. The authors have read and approved the final manuscript.
Correspondence to Yongwei Gao.
The authors have no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Gao, Y., Zhang, J., Wang, L. et al. A conceptual design for deflection device in VTDP system. Adv. Aerodyn. 3, 2 (2021). https://doi.org/10.1186/s42774-020-00050-x
DOI: https://doi.org/10.1186/s42774-020-00050-x
Wind tunnel experiment
Numerical simulation
|
CommonCrawl
|
Predicting the natural history of metabolic syndrome with a Markov-system dynamic model: a novel approach
Abbas Rezaianzadeh ORCID: orcid.org/0000-0002-0067-06591,
Esmaeil Khedmati Morasae ORCID: orcid.org/0000-0003-2172-96532,
Davood Khalili ORCID: orcid.org/0000-0003-4956-10393,
Mozhgan Seif ORCID: orcid.org/0000-0003-2301-56034,
Ehsan Bahramali ORCID: orcid.org/0000-0001-5454-04065,
Fereidoun Azizi ORCID: orcid.org/0000-0002-6470-25176 &
Pezhman Bagheri ORCID: orcid.org/0000-0003-0920-57344,7
Markov system dynamic (MSD) model has rarely been used in medical studies. The aim of this study was to evaluate the performance of MSD model in prediction of metabolic syndrome (MetS) natural history.
Data gathered by Tehran Lipid & Glucose Study (TLGS) over a 16-year period from a cohort of 12,882 people was used to conduct the analyses. First, transition probabilities (TPs) between 12 components of MetS by Markov as well as control and failure rates of relevant interventions were calculated. Then, the risk of developing each component by 2036 was predicted once by a Markov model and then by a MSD model. Finally, the two models were validated and compared to assess their performance and advantages by using mean differences, mean SE of matrices, fit of the graphs, and Kolmogorov-Smirnov two-sample test as well as R2 index as model fitting index.
Both Markov and MSD models were shown to be adequate for prediction of MetS trends. But the MSD model predictions were closer to the real trends when comparing the output graphs. The MSD model was also, comparatively speaking, more successful in the assessment of mean differences (less overestimation) and SE of the general matrix. Moreover, the Kolmogorov-Smirnov two-sample showed that the MSD model produced equal distributions of real and predicted samples (p = 0.808 for MSD model and p = 0.023 for Markov model). Finally, R2 for the MSD model was higher than Markov model (73% for the Markov model and 85% for the MSD model).
The MSD model showed a more realistic natural history than the Markov model which highlights the importance of paying attention to this method in therapeutic and preventive procedures.
The study of natural history of chronic diseases is doubly complex due to their complex nature and multifactorial causality [1,2,3]. Because of this complexity, there are few detailed descriptions about chronic diseases natural history [4]. The aim of a study on natural history is to clarify the factors that affect the overall risk of transition from one stage to another as a diseases progresses (or regresses) [5]. Among the existing studies, some have looked at the natural history of diseases and their pathophysiology from a systems biology and complex and dynamic systems perspective [6]. Other studies, on the other hand, have illustrated the natural histories with complex statistical methods [7,8,9,10]. The most common methods for investigation of dynamic and complex situations and their progression are simulation-based statistical methods. Among these, Markov models, which pay special attention to random changes in processes (stochastic processes), are more important. Markov and system dynamics models clearly belong to two different scientific fields, but similar to the system dynamics models, the Markov models provide a powerful framework for analyzing dynamic systems [11,12,13]. However, Markov model requires a lot of computational capacity when a system becomes complex due to increase in the number of states and transitions. The MSD model is a hybrid model that combines Markov and system dynamics approaches to overcome the limitations of Markov models in modeling complex systems. Indeed, despite the difference between the Markov and the system dynamic models in terms of the stochastic and deterministic nature of the states, due to the important similarity of Markov model with system dynamic model in terms of "state" and "transition", these two models can be combined with each other or even in some cases converted to each other [14]. This hybrid model have been mainly used in non-medical fields and repairable systems for reliable analysis in a more realistic way [12, 15,16,17]. In fact, in a MSD model the failure and repair or control rate of a system, which are time-varying indexes, are considered for transient availability modeling or analysis of system reliability in calculations [15, 18]. Due to information feedback theory, easiness of tweaking parameters to test different hypotheses and possibility of providing solutions to various problems by adjusting flow rates [16], system dynamics models are a suitable solution to eliminate computational limitations of Markov models.
Metabolic Syndrome (MetS) is a global public health challenge with a plethora of increasing research around its epidemiology and physiological mechanisms [19,20,21,22,23,24]. However, there are few studies on its natural history which have provided contradictory findings [7,8,9,10, 25,26,27]. MetS is a very complicated disorder and one can have a different combination of the syndrome components at any given time, depending on their lifestyle. To be precise, one can be in one of the states of "no component", "isolated hypertension", "isolated overweight/obesity", "isolated hyperglycemia", "isolated dyslipidemia", "obesity + hypertension", "obesity + dyslipidemia", "obesity + hyperglycemia", "hypertension + dyslipidemia", "hypertension + hyperglycemia", "dyslipidemia + hyperglycemia", and a set of combinations of three or four components [28, 29]. The MetS is quite dynamic and one can transit from one state to another. This dynamicity of the disorder development in individuals makes it a proper candidate for an MSD approach. Therefore, this study was conducted to evaluate the performance of a MSD model in a context of investigating MetS natural history in a large population-based study.
Study type and participants
This retrospective study was undertaken on 4 waves of Tehran Lipid and Glucose Study (TLGS), ranging from year 1999 to year 2016 [30,31,32]. Data collection and measurement procedures, sampling processes, eligibility criteria for participants, and definitions of MetS criteria in TLGS are published elsewhere [33, 34].
The aim of the study was to evaluate the performance of a MSD model in investigation of MetS natural history (transition between 4 components of MetS, i.e. abdominal obesity, hypertension, hyperglycemia, high triglycerides with low HDL (dyslipidemia) and their combinations (12 states)). To be precise, the investigation encompassed calculating and predicting transition probabilities (TPs) between the mentioned 12 states over a period of 21 years (2015–2036) through a compartmental MSD model. The findings then were compared with those of a Markov model to see which model worked better.
Markov model
At the beginning of this section, a 12-state Markov model was designed and used to describe the natural history of MetS (Fig. 1). A Markov process is a random model for describing a sequence of probable events in which the probability of each event depends only on the present time, not preceding event. In other words, if the status of a process is known at times x1, x2…xn, then it can be said that only the latest information (that is the state of the process at the xn time), is sufficient to predict the future progression of the process (Xn + 1). Accordingly, the Markov dependency that is introduces as Markov properties (Memoryless) is assumed as follow:
$$P\left({X}_{n+1}={x}_{n+1}|{X}_1={x}_1,{X}_2={x}_2,\dots, {X}_n={x}_n\right)=P\left({X}_{n+1}={x}_{n+1}|{X}_n={x}_n\right)$$
A 12-State dynamic transition diagram for MetS natural history
Of course, it should be noted that the Markov dependency to the current state, can also be of a different order than the first, e.g., of the second-order [35]. So, in many practical situations the first-order dependency is sufficient but not always justifiable. However, this is a fundamental assumption in the Markov model which is mainly considered. Markov process can be fully described by its TP function or pij (t) which is the probability that a system is in (j) state at time (t), provided that the process starts from time (t = 0) and state (i). Hence when (i) = x (n-1) [36], then one can write a Markov process as follow:
$${P}_{ij}\ \left({X}_{n+1}= sj|{X}_n= si\right)=P\left({X}_n= sj|{X}_{n-1}= si\right)$$
That, (s) is the total number of states that a system can occupy at any given time. In our model, the time criteria for calculating the TPs in each phase across all 12 states was triennial and the final matrix of TPs was equal to the average of total values of TPs in all periods [9]. Also, the final state (i.e. MetS), was considered as an absorbing state. An absorbing state is a state in which no transition to any subsequent state will take place [37].
Based on the number of states in our model, a 12 × 12 transition matrix was used to calculate the TPs.
$$\mathrm{P}=\left[\begin{array}{ccc}{P}_{11}& \cdots & {P}_{112}\\ {}\vdots & \ddots & \vdots \\ {}{P}_{121}& \cdots & {P}_{1212}\end{array}\right]$$
MSD model design
In order to design a MSD model, control rate (CR) and failure rate (FR) indices were first calculated (section A in Additional file 1). FR and CR indices are used to evaluate the reliability of system models. They are also used to evaluate the effects of various interventions in a system model (SD). The interventions in our study were medicinal (i.e. self-reported consumption of different medications to control blood pressure, blood lipids, and blood sugar levels) and lifestyle-based interventions (i.e. TLGS phase II to reduce risk factors for non-communicable diseases in some participants [38]). A SD model performs the risk prediction process by using these two indices in a pre-fabricated model that is the product of actions-reactions between the states in a Markov model. FR (CR) indicates any progress (regress) from less (more) components towards more (less) components across the natural history of MetS.
In the CR calculation, both for lifestyle interventions and medicinal therapies, patients who were on the MetS state were not included in the calculation. Also, since no medicinal intervention was needed for healthy individuals, people with no-components state were not included in the CR calculation for medicinal interventions. Mean values of CR and FR were considered as the final values.
After calculating the CR and FR indices, based on our Markov diagram (Fig.1), causal loop and stock and flow diagrams were drawn for formulation of the SD model separately for no component, 1-component, 2-component and MetS (Figs. 2, 3 and 4). In fact, at this stage, in order to perform simulations, qualitative models (causal loop diagram) were transformed into quantitative models (stock and flow diagram).
Causal loop and stock-flow diagrams for the no-component state
Causal loop and stock-flow diagrams for 1 and 2-component states
Causal loop and stock-flow diagrams for MetS state
In these SD diagrams, the dynamic processes of transitions from each component to the preceding component (i.e. recovery or transition probability backward (TPO-B)), to the next component (i.e. disease progression or transition probability forward (TPO-F)), and lack of transition (stoppage) are shown as in-transition probabilities (TPIs) under the influences of CR and FR. B sign in the diagrams is indicative of a balancing cycle and R sign refers to a reinforcement cycle. To all of these transitions, which are in fact longitudinal (vertical) transitions, the types of lateral (horizontal) transitions must be added. Lateral transitions (Width TP = TPW) are the conversion (replacing) of each of the 1-components to other 1-components and also each of the 2-components to the other 2-components, which include a total of 21 transitions (6 transitions in 1-component and 15 in 2-component). Therefore, lateral transitions were defined in the form of TPW index, which is not affected by CR and FR. The final MSD model, which has become a quite complicated model of MetS and its components, is shown in Additional file 1 (refer to Fig. 1 in Section C of the Additional file 1).
Predictions with Markov model
In a Markov model, Pij(n) is the probability of transition from state (i) to state (j) in nth step. To calculate the matrix pn (the transition matrix of step (n)), one must multiply the matrix (P) n times by itself, which the (Pij) element in the matrix pn will be the same of Pij(n) [39]. For Risk prediction, the time horizon, based on the minimum average time of transition of individuals directly from no component to the MetS (which was approximately 2 years) in the follow-up periods, for 7 periods (3-year) (2015–2036) calculated and presented (refer to Section B of the Additional file 1). Given that transition probabilities were non-homogeneous or time dependent in our model, instead of transition probability, transition rate was used in which transitions are calculated as per unit time or instantaneous, which is equivalent to rate. Hence, the predictive rate was used in the Y axis as proportion of individuals who developed the MetS from various states over time period per total person-time.
Markov model validation
To validate Markov model performance, values of parameters in the fourth wave of the TLGS were predicted using the data from three preceding waves. The predicted values then were compared with the actual data using measures of mean differences and mean standard error of matrices. As a visual evaluation, a graph of the proportionality of the predicted data values with the empirical data in terms of trends was drawn in general.
Verification and prediction with the final MSD model
In order to validate the MSD model, the same validation steps in the case of the Markov model were repeated. Finally, for risk prediction by continuing the existing conditions such as Markov model, Time horizon, for 7 periods (3-year) (2015–2036) was calculated and presented. Since the SD model is actually a differential equations system whose its order depends on the number of variables, in the risk prediction section, to calculate the value (N) of each state, the following differential equation was designed. As an example, the equation is designed for the no component state and applies to other states as well.
$$\mathrm{No}\ \mathrm{component}\ \left(\mathrm{nc}\right)\ \mathrm{equation}=\frac{d\ (nc)}{d\ (t)}=\left({\upalpha}_{\mathrm{inflow}\ \left(\mathrm{t}0\right)}\times {\mathrm{nc}}_{\left(\mathrm{t}0\right)}\right)-\left({\upalpha}_{\mathrm{outflow}\ \left(\mathrm{t}\mathrm{n}\right)}\times {\mathrm{nc}}_{\left(\mathrm{t}\mathrm{n}\right)}\right)$$
On the other hand, based on the above differential equation, the integral equation based on the 3-year interval was written as follow:
$$nc={\int}_{\mathrm{t}0}^{\mathrm{t}3}\left[\left(\mathrm{CR}\times \mathrm{TP}\times {\sum}_{\mathrm{N}}\mathrm{n}\mathrm{c}\left(\mathrm{t}0\right)\right)+\left(\sum \mathrm{CR}\times \mathrm{TP}\times {\sum}_{\mathrm{N}}\mathrm{otherstonc}\left(\mathrm{t}\mathrm{n}\right)\right)-\right(\mathrm{FR}\times \mathrm{TP}\times {\sum}_{\mathrm{N}}\mathrm{n}\mathrm{c}\mathrm{toothers}\left(\mathrm{t}\mathrm{n}\right)\Big]$$
Finally, the integral equation for calculating the values of TPO-F and TPO-B, which are longitudinal transitions (from no component to MetS), and TPI, which are considered as insider transitions and are described in the previous sections, are as follows:
$$TPO-F=\int \left[{N}_{\left( origin\ state\right)}\times {TP}_{\left( origin\ \mathrm{state}\ \mathrm{to}\ \mathrm{next}\ \mathrm{state}\right)}\times {FR}_{\left( origin\ \mathrm{state}\right)}\right]$$
$$TPO-B\ and\ TPI=\int \left[{N}_{\left( origin\ state\right)}\times {TP}_{\left( origin\ \mathrm{state}\ \mathrm{to}\ \mathrm{next}\ \mathrm{state}\right)}\times {CR}_{\left( origin\ \mathrm{state}\right)}\right]$$
The integral equation of lateral transitions was also written as follows:
$$TPW=\int \left[{N}_{\left( origin\ state\right)}\times {TP}_{\left( origin\ \mathrm{state}\ \mathrm{to}\ \mathrm{next}\ \mathrm{state}\right)}\right]$$
Evaluation of models' performance
A triple approach was used to compare the performance of Markov and MSD models. At first, mean standard error of matrices, mean of the differences, and fit of the graphs were used to compare the two models' outputs. Then, using Kolmogorov-Smirnov two-sample test, closeness of predicted and empirical (goodness of fit) samples distributions in both Markov and MSD models was compared. Finally, in the third approach, the value of the R2 index was calculated based on a simple linear regression model between the actual and predicted values. As a result, the model with a smaller mean SE, a smaller mean difference, a more appropriate graph, and also a higher goodness of fit as well as higher R2 was selected as the desired one. Also, for quantification of uncertainty in predicted model's performance assessment in both models, standard errors for estimated transitions (predictive rates) as a measure of the accuracy of the resulting estimates that provide ability to objectively assess the quality of the reported estimates, was calculated. To estimation of the standard error associated with each transition, our approach was to use a bootstrap method [40] with 1000 iterations and combine results.
Additional analysis
Mean and percentage were used for descriptive analysis of baseline and follow-up waves of the TLGS. Also, Cochran's Q test was used to examine the significance of revealed trends in data. 0.05 was set as the significant value. IBM SPSS Statistics software for Windows version 24 (IBM Corp, Armonk, NY), excel 2016, and R-4.0.3 ("msm [41]"and "markovchain [42]"packages) were used for data analyses. The maximum likelihood method was used for parameter estimation in methods that have been implemented within "markovchain" and "msm" packages.
Ethical considerations
As this study was conducted on the TLGS data, it is ethically subject to the ethical considerations observed in the TLGS project. The study was also ethically approved by National Committee of Ethics in Iranian Biomedical Research (code# IR.SUMS.REC.1398.835).
Demographic variables description
56.16% (7235) of participants in TLGS sample (12,882) were female. At baseline, the mean of participants' age was 31.34 ± 17.3 years (median age = 29 years).
States description
Table 1 shows the status and trend of changes in 12 states of MetS during study periods. In general, the highest prevalence in baseline belonged to isolated dyslipidemia. In terms of difference between baseline and final stage values, states of "no component", "isolated dyslipidemia", "obesity + dyslipidemia", "hypertension + dyslipidemia", and "dyslipidemia + hyperglycemia" all had decreasing trends and the highest decrease was related to "hypertension + dyslipidemia". Other combinatorial states had increasing trends and the highest increase belonged to "obesity + hyperglycemia" state.
Table 1 Longitudinal change of MetS states among participants over the study period
TPs values
The overall TPs matrix is given in Table 2. Over the study period (4 follow-up periods), probability of direct and non-stop transition from "no component" to MetS was 8.6%. The highest transition probability from "no component" to other states belonged to "isolated abdominal obesity". Among isolated components, the highest TP towards MetS was related to hyperglycemia and hypertension, respectively. Among the composite components, the highest TP towards MetS belonged to obesity & hypertension which was the highest value of TP toward MetS among all components and their combinations (41.1%), which was generally introduced as the main initiator. The diagonal row in the matrix (Table 2) indicates the probability of remaining in same state, no transition, over time. In overall, people with MetS had the highest probability (60.2%) of no transition over time.
Table 2 Matrix of transition probabilities (%)
Markov predictions show that as the time continues over the years, the probability of transition towards MetS for all isolated states would first experience an upward trend until the sixth year, and then all the states would have a same probability of transition. Among the isolated states, people in "no component" state will experience the highest increase in the upward trend towards MetS and those with hyperglycemia will experience the least increase, before reaching to a constant probability (steady state) (Fig. 5). In other words, the highest rate of MetS seems to occur among people with no component state. At the same time, the highest rate of progression toward the MetS was related to hyperglycemia and the lowest was associated with no component.
Risk of progression towards MetS for isolated components in Markov model
In the case of composite states, the predictions showed that except for "obesity + hypertension" and "obesity & hyperglycemia" which would have a decreasing or a constant trend of transition towards MetS, other states would all first have an increasing trend for 6 years and then would flat-out. Before reaching to the steady probability, the highest progression toward the MetS was associated with obesity & hyperglycemia, and the highest progression was associated with dyslipidemia & hyperglycemia (Fig. 6). In other words, the highest rate of MetS among all the composite states, until reaching to the steady level, seems to occur in people with "dyslipidemia + hyperglycemia".
Risk of progression towards MetS for composite components in Markov model
Validation of the Markov model
In general, the mean of differences was 0.0562 and the mean SE of the predicted matrix from the actual matrix for the fourth period of TLGS was 0.003684. Also, the trend analysis showed that the fit between values in empirical and predicted data was favorable. Moreover, in terms of closeness of values, with an overestimation of about 5.62%, the estimated values were relatively desirable (Figs. 2 and 3 in section C Additional file 1). Generally, the evaluation was suggestive of relative adequacy of the Markov model in risk prediction.
MSD model
The overall CR and FR indices are presented in Table 3 (detailed tables can be found in Additional file 1 section D). The SD model was built to examine the progression of each component towards the MetS (separately for isolated components and composite components). For this purpose, the CR and FR values along with transition probabilities were entered into the final MSD model (Fig. 1 in Additional file 1 section C) and a risk prediction process was simulated.
Table 3 Control and failure rates in metabolic syndrome interventions
According to the MSD modeling outputs, among the isolated components, the highest progression rate towards MetS was related to hyperglycemia and obesity, respectively. The trends of other components had also a small upward slope and the lowest rate of progression belonged to dyslipidemia (Fig. 7). In the case of composite components, the rate of "obesity + hyperglycemia" progression towards MetS was higher than others composites (Fig. 8). But, in overall, progression slope of composites was greater than that of isolated components (except for obesity and hyperglycemia). The lowest progression slope was related to "hypertension + dyslipidemia".
Risk of progression towards MetS for isolated components in MSD model
Risk of progression towards MetS for composite components in MSD model
Validation of the MSD model for risk prediction
The mean difference between the predicted values and the empirical values was 0.04911 and the mean SE of the predicted matrix from the real matrix in the fourth period of TLGS was 0.002056. Also, the trend analysis showed that the fit between values in empirical and predicted data was desirable. Moreover, in terms of proximity of values, with an overestimation of about 4.9%, the estimated values were desirable (Figs. 4 and 5 in section C of Additional file 1). Overall, the evaluation indicated that the MSD model performance, in terms of risk prediction, was satisfactory.
According to the evaluation outputs, both the Markov and MSD models were shown to be desirable models for risk prediction. But, according to greater proximity of predictions made by the MSD model to the real (empirical) conditions (i.e. fitter graphs of values proportionality, lower mean difference (less overestimation), lower SE of the general matrix, and also significance of the MSD model test (p = 0.808 for MSD model and p = 0.023 for Markov model) which is indicative of equal distribution of real and predicted samples in the MSD model) and Finally, a higher R2 for the MSD model (73% for the Markov model and 85% for the MSD model), the MSD model was shown to be a more desirable model for predictions. Also, uncertainty quantifications given in section E in Additional file 1.
In this study, a MSD model was designed to model the natural history of MetS, i.e. progression from its components. The model then was compared with a Markov model to evaluate their performance. The findings showed that both the Markov and MSD models were adequate enough to predict the secular trends of the MetS. But based on the greater proximity of the predictions made by the MSD model to the real data gathered in TLGS, the MSD model was introduced as the desirable model.
The MSD model has a systemic approach and adopts a comprehensive and integrated view to the processes that lead up to MetS. For instance, a MSD model enriches one's understanding of the natural history of MetS by integration of the effectiveness of MetS-driven therapeutic and life-style interventions (i.e. control and failure rates) into the model. It also enriches the understanding by being open and inclusive to dynamicity of MetS components and the fact that one can shuffle back and forth between simple and complex components of the MetS over time. Therefore, the authors thought that a MSD model is a good match to sheer complexity of the MetS and might outperform other models, e.g. Markov model, in terms of risk predictions.
The risk prediction by Markov model in our study showed that all states/components first showed an upward trend towards MetS until the ninth year. Then, all the trends levels off at a same risk value. This pattern of trends (only the trends and not the time until leveling-off) was seen in other studies [7,8,9,10]. However, in the risk prediction process with the MSD model, which, there was no similar evidence, assuming that the existing conditions continued, the progression of all states toward the MetS (with differences between various states in different conditions) was upward which was completely different from the process observed in Markov modeling, both in our study and in other studies [7,8,9,10]. To be specific, the predictions made by the MSD model is a component-specific prediction that is not comparable with the general trends reported in other studies. In fact, in other studies, the general trend of MetS is drawn and described, while in our study, the trend of each components is described as part of the natural history of the disease. In other word, the trends shown by Markov model mainly refers to progression of the disease as a whole and lifetime, but the MSD model reveals the progression and dynamicity of each component in the natural history towards MetS. To clarify it more, it seems that since the Markov model does not systemically consider the natural history of MetS, interactions between components over time, non-linear knock-on effects of changes in each component on other components, and influence of external factors (e.g. interventions) on the natural history of the MetS are not considered in the modelling. As a result, rather than seeing the natural history of progress of components and states as a whole, the natural history of each component or state is examined and predicted separately. In this case, the real contribution of each component or state in the occurrence of MetS and its trend is probably not seen in full, and the rate of progress and change in progress are not accurately calculated. Importantly, in this study, our aim was to model the development trend of MetS components as different compounds as the natural history of MetS, rather than the development of MetS over time as a whole, that usually can be seen in other studies. Clinically and mechanically, as seen in this study, the highest rate of progression has been from no component to isolated components and from isolated components to composite components and finally to MetS, which illustrate a cumulative and ascending process over time.
In addition to Markov model, there are a few other prospective risk prediction models that have been used to predict the process of MetS development. For example, one study used a biomarker-base model [43] and the risk of MetS based on age and gender was predicted for 5 to 10 years into future. In another study, Framingham Risk Score (FRS) model was used [44] and an upward trend and an irregular trend was predicted for high and low risk people, respectively. Retrospective studies have also been another way to investigate the developmental process of MetS. For instance, it was shown that overall prevalence of MetS over a 15-year period had an upward trend, but incidence of MetS from different components had an irregular and relatively upward trend among children & adolescents [33] and adults [45]. The differences between our findings and those of others in this section are probably related to different study methodologies.
Accordingly, lack of similar studies in terms of methodology was a challenge for our study. In fact, although we showed that MSD outperformed Markov model (and probably other models) in revealing the developmental process of MetS, but unless it is widely used in various medical fields, a clear-cut judgment on functional advantages and strength of MSD model would be avoided. This is where we cordially invite researchers to work on in future.
The natural history of many chronic diseases, e.g. MetS, develops through multistate and dynamic paths. This chronicity, state-multiplicity, and dynamicity therefore calls for systemic approaches in order to understand and control these health problems. In this study, a MSD model showed to outperform a commonly used Markov model in revealing the developmental process of MetS over time. Our findings, therefore, invite researchers to adopt MSD models in investigation of chronic and complex health problems and test its practicalities.
Vaillant GE. Twelve-year follow-up of New York narcotic addicts. N Engl J Med. 1966;275(23):1282–8.
Moore JE. The natural history of chronic illness. J Chronic Dis. 1955;1(3):335–7.
Bynum B. A history of chronic diseases. Lancet. 2015;385(9963):105–6.
The Natural History of Disease. N Engl J Med. 1949;240(11):442–3.
Jewell NP. Natural history of diseases: statistical designs and issues. Clin Pharmacol Ther. 2016;100(4):353–61.
Rozendaal YJW, Wang Y, Hilbers PAJ, van Riel NAW. Computational modelling of energy balance in individuals with metabolic syndrome. BMC Syst Biol. 2019;13(1):24.
Chen X, Chen Q, Chen L, Zhang P, Xiao J, Wang S. Description and prediction of the development of metabolic syndrome in Dongying City: a longitudinal analysis using the Markov model. BMC Public Health. 2014;14:1033.
Hwang L-C, Bai C-H, You S-L, Sun C-A, Chen C-J. Description and prediction of the development of metabolic syndrome: a longitudinal analysis using a markov model approach. PLoS One. 2013;8(6):e67436–e.
PubMed PubMed Central CAS Google Scholar
Tang X, Liu Q. Prediction of the development of metabolic syndrome by the Markov model based on a longitudinal study in Dalian City. BMC Public Health. 2018;18(1):707.
Jia X, Chen Q, Wu P, Liu M, Chen X, Xiao J, et al. Dynamic development of metabolic syndrome and its risk prediction in Chinese population: a longitudinal study using Markov model. Diabetol Metab Syndr. 2018;10(1):24.
Howard RA. Dynamic Probabilistic Systems. Enabled: Dover Publications; Illustrated edition (May 4, 2012); 2012.
Rao MS, Naikan V. A hybrid Markov system dynamics approach for availability analysis of degraded systems; 2011.
Kirkwood JR. Markov processes 1, editor: CRC press; 2015.
Pratap KJ, Mohapatra RKR. System dynamics model for markov processes england system dynamics'91; 1991. p. 10–29.
Rao M, Naikan V. A novel Markov system dynamics framework for reliability analysis of systems. Econ Qual Contr. 2009;24:101–16.
Naikan MSRVNA. A system dynamics model for transient availability modeling of repairable redundant systems. Int J Performability Eng. 2015;11(3):203–11.
Schütte C, Sarich M. A critical appraisal of Markov state models. Eur Phys J Spec Top. 2015;224(12):2445–62.
Rao MS, Naikan VNA. A managerial tool for reliability analysis using a novel Markov system dynamics (MSD) approach. Int J Manage Sci Engin Manage. 2009;4(3):230–40.
Saklayen MG. The global epidemic of the metabolic syndrome. Curr Hypertens Rep. 2018;20(2):12.
Schiffer TA, Lundberg JO, Weitzberg E, Carlström M. Modulation of mitochondria and NADPH oxidase function by the nitrate-nitrite-NO pathway in metabolic disease with focus on type 2 diabetes. Biochim Biophys Acta Mol basis Dis. 1866;2020(8):165811.
Jepsen S, Suvan J, Deschner J. The association of periodontal diseases with metabolic syndrome and obesity. Periodontol. 2020;83(1):125–53.
Shaikh S, Dahani A, Arain SR, Khan F. Metabolic syndrome in young rheumatoid arthritis patients. J Ayub Med Coll Abbottabad. 2020;32(3):318–22.
Carnethon MR, Loria CM, Hill JO, Sidney S, Savage PJ, Liu K. Risk factors for the metabolic syndrome: the coronary artery risk development in young adults (CARDIA) study, 1985-2001. Diabetes Care. 2004;27(11):2707–15.
Choi SH, Yun KE, Choi HJ. Relationships between serum total bilirubin levels and metabolic syndrome in Korean adults. Nutr Metab Cardiovasc Dis. 2013;23(1):31–7.
Scuteri A, Morrell CH, Najjar SS, Muller D, Andres R, Ferrucci L, et al. Longitudinal paths to the metabolic syndrome: can the incidence of the metabolic syndrome be predicted? The Baltimore longitudinal study of aging. J Gerontol A Biol Sci Med Sci. 2009;64(5):590–8.
Cheung BM, Wat NM, Tam S, Thomas GN, Leung GM, Cheng CH, et al. Components of the metabolic syndrome predictive of its development: a 6-year longitudinal study in Hong Kong Chinese. Clin Endocrinol. 2008;68(5):730–7.
Tao LX, Wang W, Zhu HP, Huo D, Zhou T, Pan L, et al. Risk profiles for metabolic syndrome and its transition patterns for the elderly in Beijing, 1992-2009. Endocrine. 2014;47(1):161–8.
Vanlancker T, Schaubroeck E, Vyncke K, Cadenas-Sanchez C, Breidenassel C, González-Gross M, et al. Comparison of definitions for the metabolic syndrome in adolescents. The HELENA study. Eur J Pediatr. 2017;176(2):241–52.
Beilby J. Definition of metabolic syndrome: report of the National Heart, Lung, and Blood Institute/American Heart Association conference on scientific issues related to definition. Clin Biochem Rev. 2004;25(3):195–8.
Azizi F. Tehran Lipid and Glucose Study: A National Legacy. Int J Endocrinol Metab. 2018;16(4 Suppl):e84774–e.
Azizi F, Zadeh-Vakili A, Takyar M. Review of Rationale, Design, and Initial Findings: Tehran Lipid and Glucose Study. Int J Endocrinol Metab. 2018;16(4 Suppl):e84777–e.
Azizi F. Tehran lipid and glucose study: a legacy for prospective community-based research. Arch Iran Med. 2014;17(6):392–3.
Bagheri P, Khalil D, Seif M, Khedmati Morasae E, Bahramali E, Azizi F, et al. The dynamics of metabolic syndrome development from its isolated components among iranian children and adolescents: findings from 17 years of the Tehran lipid and glucose study (TLGS). Diabetes Metab Syndr. 2020;15(1):99–108.
Bagheri P, Khalili D, Seif M, Rezaianzadeh A. Dynamic behavior of metabolic syndrome progression: a comprehensive systematic review on recent discoveries. BMC Endocr Disord. 2021;21(1):54.
Kedem B. Sufficient statistics associated with a two-state second-order Markov chain. Biometrika. 1976;63(1):127–32.
Rao MS, Naikan VNA. A Markov System Dynamics Approach for Repairable Systems Reliability Modeling. Int J Reliab Qual Saf Eng. 2016;23(01):1650004.
Carswell CI. Essentials of Pharmacoeconomics. PharmacoEconomics. 2008;26(12):1065.
Azizi F, Ghanbarian A, Momenan AA, Hadaegh F, Mirmiran P, Hedayati M, et al. Prevention of non-communicable disease in a population in nutrition transition: Tehran lipid and glucose study phase II. Trials. 2009;10:5.
Banimahd SA, Khalili D. Drought Class Transition Analysis by Markov Chains and Log-Linear Models: Approach for Early Drought Warning. Iran J Watershed Manage Sci Engin. 2014;8(24):0.
Zoubir A, Iskander D. Bootstrap methods and applications. Signal Process Magazine IEEE. 2007;24:10–9.
Jackson C. Multi-state modelling with R: the msm package. U.K: MRC Biostatistics Unit Cambridge; 2019.
Spedicato G, Signorelli M, editors. The markovchain Package: A Package for Easily Handling Discrete Markov Chains in R. Computer Science; 2013.
Zhang W, Chen Q, Yuan Z, Liu J, Du Z, Tang F, et al. A routine biomarker-based risk prediction model for metabolic syndrome in urban Han Chinese population. BMC Public Health. 2015;15(1):64.
Yousefzadeh G, Shokoohi M, Najafipour H, Shadkamfarokhi M. Applying the Framingham risk score for prediction of metabolic syndrome: the Kerman coronary artery disease risk study, Iran. ARYA Atheroscler. 2015;11(3):179–85.
Khalili D, Bagheri P, Seif M, Rezaianzadeh A, Khedmati Morasae E, Bahramali E, et al. The dynamics of metabolic syndrome development from its isolated components among Iranian adults: findings from 17 years of the Tehran lipid and glucose study (TLGS). J Diabetes Metab Disord. 2021;20(1):95–105.
This article is the result of Pezhman Bagheri's PhD. Thesis in Epidemiology with registration code: SUMS.98/19936. We have to express our sincere thanks to all the personnel of The Shahid Beheshti University of Medical Sciences (SBMU) Research Institute for Endocrine Sciences for respectable cooperation in data collection phase that lead to the outcome of this project.
This study was financially supported by the Vice-Chancellor for Research and Technology of Shiraz University of Medical Sciences (SUMS), which is worthy of thanks and appreciation.
Colorectal Research Center, Shiraz University of Medical Sciences, Shiraz, Iran
Abbas Rezaianzadeh
Center for Circular Economy, Business School, University of Exeter, Exeter, UK
Esmaeil Khedmati Morasae
Prevention of Metabolic Disorders Research Center, Research Institute for Endocrine Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
Davood Khalili
Department of Epidemiology, School of Health, Shiraz University of Medical Sciences, Shiraz, Iran
Mozhgan Seif & Pezhman Bagheri
Noncommunicable Diseases Research Center, Fasa University of Medical Sciences, Fasa, Iran
Ehsan Bahramali
Endocrine Research Center, Research Institute for Endocrine Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
Fereidoun Azizi
Shiraz University of Medical Sciences, Shiraz, Iran
Pezhman Bagheri
Mozhgan Seif
P.B. developed the theory and performed the analysis, performed the literature search, assessed the literature, extracted data, wrote the manuscript with support from D.K., A.R. developed the theoretical framework, encouraged and supervised the findings of this work, design and implementation of the research, verified the analytical methods., M.S. developed the theoretical framework, verified the analytical methods., E.KH. developed the theoretical framework, design and implementation of the research, verified the analytical methods. E.B. was a contributor in developed the theory and design, F.A. was a major contributor in licensing access and use of TLGS data. All authors discussed the results and contributed to the final manuscript. All authors read and approved the final manuscript.
Correspondence to Pezhman Bagheri.
As this study was conducted on the TLGS data, it is ethically subject to the ethical considerations observed in the TLGS project. Then, informed consent for study participation was obtained from all subjects in TLGS previously. The study was also ethically approved by National Committee of Ethics in Iranian Biomedical Research (code# IR.SUMS.REC.1398.835). Also, all methods were performed in accordance with the relevant guidelines and regulations.
Rezaianzadeh, A., Morasae, E.K., Khalili, D. et al. Predicting the natural history of metabolic syndrome with a Markov-system dynamic model: a novel approach. BMC Med Res Methodol 21, 260 (2021). https://doi.org/10.1186/s12874-021-01456-x
DOI: https://doi.org/10.1186/s12874-021-01456-x
Markov-system dynamics
|
CommonCrawl
|
Numerical experiments
Three kinds of new hybrid projection methods for a finite family of quasi-asymptotically pseudocontractive mappings in Hilbert spaces
Yuanxing Liu1,
Liguo Zheng2,
Peiyuan Wang3, 4 and
Haiyun Zhou5Email author
© Liu et al. 2015
Received: 25 February 2015
Accepted: 29 June 2015
In the present paper, we propose three kinds of new algorithms for a finite family of quasi-asymptotically pseudocontractive mappings in real Hilbert spaces. By using some new analysis techniques, we prove the strong convergence of the proposed algorithms. Some numerical examples are also included to illustrate the effectiveness of the proposed algorithms. The results presented in this paper are interesting extensions of those well-known results.
a finite family of quasi-asymptotically pseudocontractive mapping
uniformly L-Lipschitz mapping
iterative algorithm
strong convergence
Throughout this paper, we assume that H is a real Hilbert space with inner product \(\langle\cdot,\cdot\rangle\) and the induced norm \(\| \cdot\|\), respectively. Let C be a nonempty, closed, and convex subset of H and \(T:C\rightarrow C\) a self-mapping of C into itself. We use \(\operatorname{Fix}(T)\) to denote the fixed point set of T, i.e., \(\operatorname{Fix}(T)=\{x\in C:x=Tx\}\).
Over the past century or so, fixed point theory of Lipschitzian and non-Lipschitzian mappings has been developed into a really important and active field of study in both pure and applied mathematics. Especially, the research on the existence and convergence of fixed points for nonexpansive mappings and pseudocontractive mappings in the framework of Hilbert and Banach spaces has made great advancements since 1965; see, for instance, [1–3] and the references therein.
As generalizations of nonexpansive mappings and pseudocontractive mappings, the classes of asymptotically nonexpansive mappings and asymptotically pseudocontractive mappings were introduced by some authors, respectively; see, for instance, [4–6].
Let E be a Banach space and C a nonempty subset of E.
Recall that a mapping \(T:C\to C\) is said to be asymptotically nonexpansive [4] if there exists a sequence \(\{k_{n}\}\) of positive real numbers with \(k_{n}\to1\) such that
$$ \bigl\Vert T^{n}x-T^{n}y\bigr\Vert \le k_{n} \|x-y\|, $$
for all \(x,y\in C\) and all \(n\ge1\).
The class of asymptotically nonexpansive mappings was introduced by Goebel and Kirk [4] in 1972. From (1.1), we know that if T is nonexpansive, then it is asymptotically nonexpansive with a constant sequence \(\{1\}\), but the converse may be not true in general, which can be seen from the example in [4] that is asymptotically nonexpansive but it is not nonexpansive, thus, the class of asymptotically nonexpansive mappings includes properly the class of nonexpansive mappings as a subclass. An early fundamental result, due to Goebel and Kirk [4], states that if C is a nonempty, bounded, closed, and convex subset of a uniformly convex Banach space E, then every asymptotically nonexpansive self-mapping T of C has a fixed point. Further, the set \(\operatorname{Fix}(T)\) of fixed points of T is closed and convex. Since 1972, many authors have studied the weak and strong convergence problems of the iterative algorithms for such a class of mappings; see, for instance, [7–9] and the references therein.
The class of asymptotically pseudocontractive mappings was introduced by Schu [5] in 1991.
Recall that a mapping \(T:C\to H\) is called asymptotically pseudocontractive if there exists a sequence \(\{k_{n}\}\subset[1,\infty )\) with \(k_{n}\to 1\) for which the following inequality holds:
$$ \bigl\langle T^{n}x-T^{n}y, x-y\bigr\rangle \le k_{n}\|x-y\|^{2}, $$
T is said to be quasi-asymptotically pseudocontractive if \(\operatorname{Fix}(T)\ne\emptyset\) and there exists a sequence \(\{k_{n}\}\subset [1,\infty)\) with \(k_{n}\to1\) for which the following inequality holds:
$$ \bigl\langle T^{n}x-p, x-p\bigr\rangle \le k_{n}\|x-p \|^{2}, $$
for all \(x\in C\), \(p\in\operatorname{Fix}(T)\) and all \(n\ge1\).
Without loss of generality, we can assume that \(1\le k_{n}<2\), for all \(n\ge1\).
In 1996, Liu [6] introduced the class of κ-strictly asymptotically pseudocontractive mappings in Hilbert spaces. A mapping \(T:C\to C\) is called κ-strictly asymptotically pseudocontractive if there exist some \(\kappa\in[0,1)\) and some real sequence \(\{k_{n}\}\subset[1,\infty)\) with \(k_{n}\to1\) such that
$$ \bigl\Vert T^{n}x-T^{n}y\bigr\Vert ^{2}\le k_{n}^{2}\|x-y\|^{2}+\kappa\bigl\Vert \bigl(I-T^{n}\bigr)x-\bigl(I-T^{n}\bigr)y\bigr\Vert ^{2}, $$
A mapping \(T:C\to C\) is called quasi-κ-strictly asymptotically pseudocontractive if \(\operatorname{Fix}(T)\ne\emptyset\), and there exist some \(\kappa\in[0,1)\) and some real sequence \(\{k_{n}\}\subset [1,\infty)\) with \(k_{n}\to1\) such that
$$ \bigl\| T^{n}x-y\bigr\| ^{2}\le k_{n}^{2}\|x-y \|^{2}+\kappa\bigl\Vert \bigl(I-T^{n}\bigr)x\bigr\Vert ^{2}, $$
for all \(x\in C\), \(y\in\operatorname{Fix}(T)\) and \(n\ge1\).
A mapping \(T:C\to C\) is said to be uniformly L-Lipschtzian if there exists some \(L>0\) such that
$$ \bigl\Vert T^{n}x-T^{n}y\bigr\Vert \le L\|x-y \|, $$
for all \(x,y\in C\) and for all \(n\ge1\).
Remark 1.1
We note that every κ-strictly asymptotically pseudocontractive mapping is uniformly L-Lipschitzian with the Lipschitz constant \(L=\frac{M+\sqrt{\kappa}}{1-\sqrt{\kappa}}\), where \(M=\sup_{n}\{k_{n}\}\). In particular, every asymptotically nonexpansive mapping is uniformly L-Lipschitzian with \(L=\sup\{k_{n}:n\ge 1\}\).
It is clear that every asymptotically nonexpansive mapping is 0-strictly asymptotically pseudocontractive; while every asymptotically pseudocontractive mapping with sequence \(\{k_{n}\}\) is 1-strictly asymptotically pseudocontractive with sequence \(\{2k_{n}-1\}\).
It is also clear that every asymptotically pseudocontractive mapping with \(\operatorname{Fix}(T)\ne\emptyset\) is quasi-asymptotically pseudocontractive, but the converse may be not true in general, which can be seen from the following counterexample.
Take \(C=[0,2\pi]\) and define a mapping \(T:C\to\mathbb{R}\) by
$$Tx=\frac{2}{3}x\cos(x), \quad x\in C. $$
Then T is quasi-asymptotically pseudocontractive, but it is not asymptotically pseudocontractive. Indeed, assume that \(x=Tx\), then \(x=0\), and hence \(\operatorname{Fix}(T)=\{0\}\).
For all \(x\in C\), we have
$$|Tx-0|=\biggl\vert \frac{2}{3}x\cos(x)\biggr\vert \le|x-0|, $$
which means that T is quasi-nonexpansive, and hence it is quasi-asymptotically pseudocontractive. On the other hand, if we take \(x=2\pi\) and \(y=\pi\), then we have
$$\langle Tx-Ty, x-y\rangle=2\pi^{2}\ge k_{1} \pi^{2}=k_{1}|x-y|^{2}, $$
which means that T is not asymptotically pseudocontractive.
The class of asymptotically pseudocontractive mappings is a generalization of the class of pseudocontractive mappings, and the former contains properly the class of asymptotically nonexpansive mappings as a subclass, which can be seen from the following example.
For \(x\in[0,1]\), define a mapping \(T:[0,1]\to[0,1]\) by
$$Tx=\bigl(1-x^{\frac{2}{3}}\bigr)^{\frac{3}{2}}, \quad x\in[0,1]. $$
Then T is asymptotically pseudocontractive but it is not asymptotically nonexpansive.
Recently, as a generalization of Haugazeau's algorithm, the so-called hybrid projection algorithm was developed rapidly for finding the nearest fixed point of certain quasi-nonexpansive mappings; see, for instance, Bauschke and Combettes [10] and the references therein.
By virtue of the hybrid projection methods, Nakajo and Takahashi [11] established some strong convergence results for nonexpansive mappings and nonexpansive semigroups in a real Hilbert space; Marino and Xu [12] proved a strong convergence theorem for strict-pseudo-contractions in a real Hilbert space; Zhou [13] extended Marino and Xu's strong convergence theorem to the more general class of Lipschitz pseudocontractive mappings; Zhou [14] generalized and extended the main results of [13] to the class of asymptotically pseudocontractive mappings; Zhou and Su [15] further extended the main results in [14] to a family of uniformly L-Lipschitz continuous and quasi-asymptotically pseudocontractive mappings.
We observe that the construction of the half-spaces \(C_{n}\) in [15] is complicated, and hence the computation of the metric projections \(P_{C_{n}}x_{1}\) is difficult.
Our concern now is the following: Can one design some simple and new hybrid projection algorithms for finding a common fixed point for a finite family of quasi-asymptotically pseudocontractive mappings?
The purpose of this paper is to propose three kinds of new hybrid projection algorithms for constructing a common fixed point of a finite family of quasi-asymptotically pseudocontractive mappings in a real Hilbert space. By using some new analysis techniques, we prove the strong convergence of the proposed algorithms. Some numerical examples are also included to illustrate the effectiveness of the proposed algorithms. The results presented in this paper improve and extend the related ones obtained by some authors.
For uniformly L-Lipschitzian mappings, the following fixed point theorem is well known; see, for example, Cassini and Maluta [16].
Theorem CM
Let E be a uniformly convex Banach space with \(N(E)>1\), C be a nonempty, bounded, and closed convex subset of E and \(T:C\to C\) be a uniformly L-Lipschitzian mapping. If \(L<\sqrt{N(E)}\), where \(N(E)\) denotes the normal structure coefficient of E, then T has a fixed point in C.
It is well known that \(N(H)=\sqrt{2}\). Thus, in the setting of a Hilbert space H, every uniformly L-Lipschitzian mapping \(T:C\to C\) from a nonempty, bounded, and closed convex subset C of H into itself has a fixed point in C provided that \(L<\sqrt[4]{2}\).
In [14], a fixed point theorem was established for asymptotically pseudocontractive mappings in Hilbert spaces.
Theorem Z
Let C be a nonempty, bounded, and closed convex subset of a real Hilbert space H and \(T:C\to C\) be a uniformly L-Lipschitzian and asymptotically pseudocontractive mapping which is also uniformly asymptotically regular, i.e., \(\lim_{n\to\infty}\sup_{x\in C}\{\|T^{n+1}x-T^{n}x\|\}=0\). Then T has a fixed point in C.
Theorem Z is the first fixed point theorem for asymptotically pseudocontractive mappings in Hilbert spaces, which is of importance and interest.
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. For every point \(x\in H\) there exists a unique nearest point in C, denoted by \(P_{C}x\), such that
$$ \|x-P_{C}x\|\leq\|x-y\|, \quad \text{for all } y\in C, $$
where \(P_{C}\) is called the metric projection of H onto C. We know that \(P_{C}\) is a nonexpansive mapping.
The following first two lemmas are well known.
Lemma 2.1
(see, e.g., [1–3])
Let C be a nonempty, closed, and convex subset of real Hilbert space H. Given \(x\in H\) and \(z\in C\). Then \(z=P_{C}x\) if and only if we have the relation
$$ \langle x-z, y-z\rangle\leq0,\quad \textit{for all } y\in C. $$
Let C be a nonempty closed convex subset of a real Hilbert space H and \(P_{C}:H\to C\) be the metric projection from H onto C. Then the following inequality holds:
$$ \|y-P_{C}x\|^{2}+\|x-P_{C}x\|^{2}\le \|x-y\|^{2},\quad \forall x\in H, \forall y\in C. $$
The next lemma is due to Zhou and Su [15]. For the sake of completeness, we include its proof here.
Let C be a nonempty, bounded, and closed convex subset of a real Hilbert space H. Let \(T:C\to C\) be a uniformly L-Lipschitzian and quasi-asymptotically pseudocontractive mapping. Then \(\operatorname{Fix}(T)\) is a closed convex subset of C.
Since T is uniformly L-Lipschitzian continuous, \(\operatorname{Fix}(T)\) is closed. We need to show that \(\operatorname{Fix}(T)\) is convex. To this aim, let \(p_{i}\in\operatorname{Fix}(T)\) (\(i=1,2\)) and write \(p=tp_{1}+(1-t)p_{2}\) for \(t\in (0,1)\). We plan to show that \(p=Tp\). To see this, we take \(\alpha\in (0,\frac{1}{1+L})\), and define \(y_{\alpha,n}=(1-\alpha)p+\alpha T^{n}p\) for each \(n\ge1\). Then, in view of the quasi-asymptotic pseudocontractiveness of T, we have, \(\forall z\in\operatorname{Fix}(T)\),
$$\begin{aligned} \bigl\Vert p-T^{n}p\bigr\Vert ^{2} =&\bigl\langle p-T^{n}p,p-T^{n}p\bigr\rangle \\ =&\frac{1}{\alpha}\bigl\langle p-y_{\alpha,n},p-T^{n}p\bigr\rangle \\ =&\frac{1}{\alpha}\bigl\langle p-y_{\alpha,n},p-T^{n}p- \bigl(y_{\alpha ,n}-T^{n}y_{\alpha,n}\bigr)\bigr\rangle +\frac{1}{\alpha}\bigl\langle p-y_{\alpha,n},y_{\alpha,n}-T^{n}y_{\alpha ,n} \bigr\rangle \\ \le&\frac{1+L}{\alpha}\|p-y_{\alpha,n}\|^{2}+\frac{1}{\alpha} \bigl\langle p-z,y_{\alpha,n}-T^{n}y_{\alpha,n}\bigr\rangle \\ &{}+ \frac{1}{\alpha}\bigl\langle z-y_{\alpha,n}, \bigl(I-T^{n} \bigr)y_{\alpha,n}\bigr\rangle \\ \le&\frac{1+L}{\alpha}\|p-y_{\alpha,n}\|^{2}+\frac{1}{\alpha} \bigl\langle p-z,\bigl(I-T^{n}\bigr)y_{\alpha,n}\bigr\rangle + \frac{1}{\alpha}(k_{n}-1) (\operatorname{diam} C)^{2} \\ =&\alpha(1+L)\bigl\Vert p-T^{n}p\bigr\Vert ^{2}+ \frac{1}{\alpha}\bigl\langle p-z,\bigl(I-T^{n}\bigr)y_{\alpha,n} \bigr\rangle \\ &{}+\frac{1}{\alpha}(k_{n}-1) (\operatorname{diam} C)^{2}, \end{aligned}$$
from which it turns out that
$$ \alpha\bigl[1-(1+L)\alpha\bigr]\bigl\Vert p-T^{n}p\bigr\Vert ^{2}\le\bigl\langle p-z, \bigl(I-T^{n}\bigr)y_{\alpha,n} \bigr\rangle +(k_{n}-1) (\operatorname{diam} C)^{2}. $$
Taking \(z=p_{i}\) (\(i=1,2\)) in (2.4), multiplying t and \((1-t)\) on both sides of (2.4), respectively, and adding up yield
$$ \alpha\bigl[1-(1+L)\alpha\bigr]\bigl\Vert p-T^{n}p\bigr\Vert ^{2}\le(k_{n}-1) (\operatorname{diam} C)^{2}. $$
Letting \(n\to\infty\) in (2.5) yields \(T^{n}p\to p\). Since T is continuous, we have \(T^{n+1}p\to Tp\) as \(n\to\infty\), so that \(p=Tp\). This proves that \(\operatorname{Fix}(T)\) is a closed convex subset of C. □
In the proof of Lemma 2.3 above, the assumption of quasi-asymptotic pseudocontractiveness of mapping T has been used.
In this section, we present three kinds of new hybrid projection algorithms for finding a common fixed point for a finite family of uniformly \(L_{i}\)-Lipschitzian and quasi-asymptotically pseudocontractive mappings in Hilbert spaces. Let N be a fixed positive integer. We put \(I=\{0,1,2,\ldots, N-1\}\). For any positive integer n, we write \(n=(h(n)-1)N+i(n)\), where \(h(n)\to\infty\) as \(n\to\infty\) and \(i(n)\in I\), for all \(n\ge 0\).
First, we prove the following strong convergence theorem for a finite family of uniformly \(L_{i}\)-Lipschitzian and quasi-asymptotically pseudocontractive mappings in Hilbert spaces.
Theorem 3.1
Let C be a bounded, closed, and convex subset of a real Hilbert space H. Let \(\{T_{i}\}_{i=0}^{N-1}:C\to C\) be a finite family of uniformly \(L_{i}\)-Lipschitzian and quasi-asymptotically pseudocontractive mappings such that \(F=\bigcap_{i=0}^{N-1}\operatorname{Fix}(T_{i})\ne\emptyset\). Assume the control sequence \(\{\alpha_{n}\}\) is chosen so that \(\alpha_{n}\in[a,b]\) for some \(a,b\in(0,\frac{1}{1+L})\), where \(L=\max\{L_{i}:0\le i\le N-1\}\). Let a sequence \(\{x_{n}\}\) be generated by the following manner:
$$ \left \{ \textstyle\begin{array}{l} x_{0}\in C \quad \textit{chosen arbitrarily}, \\ y_{n}=(1-\alpha_{n})x_{n}+\alpha_{n} T_{i(n)}^{h(n)}x_{n},\quad n\ge0, \\ C_{n}=\{z\in C:\alpha_{n}[1-(1+L)\alpha_{n}]\|x_{n}-T_{i(n)}^{h(n)}x_{n}\|^{2} \\ \hphantom{C_{n}={}}\leq \langle x_{n}-z, (y_{n}-T_{i(n)}^{h(n)}y_{n})\rangle+(k_{h(n)}-1)(\operatorname{diam} C)^{2}\}, \quad n\ge0, \\ Q_{0}=C, \\ Q_{n}=\{z\in Q_{n-1}:\langle z-x_{n}, x_{0}-x_{n}\rangle\le0\},\quad n\ge1, \\ x_{n+1}=P_{C_{n}\cap Q_{n}}x_{0},\quad n\ge0, \end{array}\displaystyle \right . $$
where \(k_{h(n)}=\max\{k_{h(n),i(n)}:0\le i(n)\le N-1\}\) and \(k_{h(n),i(n)}\) are asymptotic sequences for \(\{T_{i}\}_{i=0}^{N-1}\). Then the sequence \(\{x_{n}\}\) generated by (3.1) converges strongly to \(P_{F}x_{0}\).
We split the proof into ten steps.
Step 1. Show that \(P_{F}x_{0}\) is well defined for every \(x_{0}\in C\).
By Lemma 2.3, we know that \(\operatorname{Fix}(T_{i})\) is a closed convex subset of C for every \(i\in I\). Hence, \(F=\bigcap_{i=0}^{N-1}\operatorname{Fix}(T_{i})\) is a nonempty, closed, and convex subset of C, consequently, \(P_{F}x_{0}\) is well defined for every \(x_{0}\in C\).
Step 2. Show that both \(C_{n}\) and \(Q_{n}\) are closed and convex, for all \(n\ge0\). This follows from the constructions of \(C_{n}\) and \(Q_{n}\). We omit the details.
Step 3. Show that
$$ F\subset C_{n}\cap Q_{n}, \quad \text{for all } n\ge0. $$
To this aim, we prove first that \(F\subset C_{n}\), for all \(n\ge0\).
Using (3.2), the uniform \(L_{i}\)-Lipschitz continuity of \(T_{i}\) and quasi-asymptotic pseudocontractiveness of \(T_{i}\), we obtain, for any \(z\in F\),
$$\begin{aligned} \bigl\Vert x_{n}-T_{i(n)}^{h(n)}x_{n}\bigr\Vert ^{2} =&\bigl\langle x_{n}-T_{i(n)}^{h(n)}x_{n},x_{n}-T_{i(n)}^{h(n)}x_{n} \bigr\rangle \\ =&\frac{1}{\alpha_{n}}\bigl\langle x_{n}-y_{n},x_{n}-T_{i(n)}^{h(n)}x_{n} \bigr\rangle \\ =&\frac{1}{\alpha_{n}}\bigl\langle x_{n}-y_{n}, \bigl(I-T_{i(n)}^{h(n)}\bigr)x_{n}-\bigl(I-T_{i(n)}^{h(n)} \bigr)y_{n}\bigr\rangle \\ &{} +\frac{1}{\alpha_{n}}\bigl\langle x_{n}-y_{n}, \bigl(I-T_{i(n)}^{h(n)}\bigr)y_{n}\bigr\rangle \\ \le&\frac{1+L}{\alpha_{n}}\|x_{n}-y_{n}\|^{2}+ \frac{1}{\alpha_{n}}\bigl\langle x_{n}-z,\bigl(I-T_{i(n)}^{h(n)} \bigr)y_{n}\bigr\rangle \\ &{}+\frac{1}{\alpha_{n}}(k_{h(n)}-1) ( \operatorname{diam} C)^{2} \\ =&(1+L)\alpha_{n}\bigl\Vert x_{n}-T^{n}x_{n} \bigr\Vert ^{2}+\frac{1}{\alpha_{n}}\bigl\langle x_{n}-z, \bigl(I-T_{i(n)}^{h(n)}\bigr)y_{n}\bigr\rangle \\ &{}+ \frac{1}{\alpha_{n}}(k_{h(n)}-1) (\operatorname{diam} C)^{2}, \end{aligned}$$
$$ \alpha_{n}\bigl[1-(1+L)\alpha_{n}\bigr]\bigl\Vert x_{n}-T_{i(n)}^{h(n)}x_{n}\bigr\Vert ^{2}\le\bigl\langle x_{n}-z,\bigl(I-T_{i(n)}^{h(n)} \bigr)y_{n}\bigr\rangle +(k_{h(n)}-1) (\operatorname{diam} C)^{2}, $$
which shows that \(z\in C_{n}\), for all \(n\ge0\). This proves that \(F\subset C_{n}\), for all \(n\ge0\).
As shown in Marino and Xu [12], by a simple induction, we can show that
$$ F\subset Q_{n}, \quad \text{for all } n\ge0. $$
Because this is routine, we omit the details. We have shown that (3.2) holds. Hence \(P_{C_{n}\cap Q_{n}}x_{0}\) is well defined. Consequently, the iteration algorithm (3.1) is well defined.
Step 4. Show that \(\lim_{n\to\infty}\|x_{n}-x_{0}\|\) exists.
In view of (3.1) and Lemma 2.1, we have \(x_{n}=P_{Q_{n}}x_{0}\) and \(x_{n+1}\in Q_{n}\), which means that \(\|x_{n}-x_{0}\|\le\|x_{n+1}-x_{0}\|\), for all \(n\ge0\). As \(z\in F\subset Q_{n}\), we have also \(\|x_{n}-x_{0}\|\le\|z-x_{0}\|\), consequently, \(\lim_{n\to\infty}\|x_{n}-x_{0}\|\) exists.
Step 5. Show that \(x_{n+1}-x_{n}\to0\) as \(n\to\infty\).
By using Lemma 2.2, we have
$$\|x_{n+1}-x_{n}\|^{2}\le\|x_{n+1}-x_{0} \|^{2}-\|x_{n}-x_{0}\|^{2}\to0 $$
as \(n\to\infty\).
Step 6. Show that \(x_{n}-T_{i(n)}^{h(n)}x_{n}\to0\) as \(n\to\infty\).
It follows from Step 5 that \(x_{n+1}-x_{n}\to0\) as \(n\to\infty\). Since \(x_{n+1}\in C_{n}\), noting that \(\alpha_{n}\in[a,b]\) for \(a,b\in (0,\frac{1}{1+L})\), \(\{y_{n}\}\) and \(\{T_{i(n)}^{h(n)}y_{n}\}\) are all bounded, from the definition of \(C_{n}\), we have \(x_{n}-T_{i(n)}^{h(n)}x_{n}\to0\) as \(n\to\infty\).
Step 7. Show that \(x_{n}-T_{i(n)}x_{n}\to0\) as \(n\to\infty\).
Since \(n=(h(n)-1)+i(n)\), we have
$$n-N=\bigl(h(n)-1-1\bigr)N+i(n). $$
On the other hand, since \(n-N=(h(n-N)-1)N+i(n-N)\), we have \(h(n)-1=h(n-N)\) and \(i(n)=i(n-N)\). Observe that
$$\begin{aligned} \Vert x_{n}-T_{i(n)}x_{n}\Vert \le& \bigl\Vert x_{n}-T_{i(n)}^{h(n)}x_{n}\bigr\Vert +\bigl\Vert T_{i(n)}^{h(n)}x_{n}-T_{i(n)}x_{n} \bigr\Vert \\ \le&\bigl\Vert x_{n}-T_{i(n)}^{h(n)}x_{n} \bigr\Vert +L\bigl\Vert T_{i(n)}^{h(n)-1}x_{n}-x_{n} \bigr\Vert \\ \le&\bigl\Vert x_{n}-T_{i(n)}^{h(n)}x_{n} \bigr\Vert +L\bigl\Vert T_{i(n)}^{h(n-N)}x_{n}-T_{i(n-N)}^{h(n-N)}x_{n-N} \bigr\Vert \\ &{} +\bigl\Vert T_{i(n-N)}^{h(n-N)}x_{n-N}-x_{n-N} \bigr\Vert +\Vert x_{n-N}-x_{n}\Vert \\ \le& \bigl\Vert x_{n}-T_{i(n)}^{h(n)}x_{n} \bigr\Vert +\bigl(1+L^{2}\bigr)\Vert x_{n-N}-x_{n} \Vert +\bigl\Vert T_{i(n-N)}^{h(n-N)}x_{n-N}-x_{n-N} \bigr\Vert , \end{aligned}$$
from which it turns out that \(x_{n}-T_{i(n)}x_{n}\to0\) as \(n\to\infty\) in view of Steps 5 and 6.
Step 8. Show that \(\forall j\in I\), \(x_{n}-T_{i(n)+j}x_{n}\to0\) as \(n\to\infty\).
Observing that
$$\begin{aligned} \|x_{n}-T_{i(n)+j}x_{n}\| \le& \|x_{n}-x_{n+j} \|+\|x_{n+j}-T_{i(n)+j}x_{n+j}\| \\ &{} +\|T_{i(n)+j}x_{n+j}-T_{i(n)+j}x_{n}\| \\ \le&\|x_{n}-x_{n+j}\|+\|x_{n+j}-T_{i(n+j)}x_{n+j} \| +L\|x_{n+j}-x_{n}\| \\ =&(1+L)\|x_{n}-x_{n+j}\|+\|x_{n+j}-T_{i(n+j)}x_{n+j} \|, \end{aligned}$$
by using Steps 5 and 7, we reach the desired conclusion.
Step 9. Show that \(\forall l\in I\), \(x_{n}-T_{l}x_{n}\to0\) as \(n\to\infty\).
Indeed, for arbitrary given \(l\in I\), we can choose \(j\in I\) such that \(j=l-i(n)\) if \(l\ge i(n)\) and \(j=N+l-i(n)\) if \(l< i(n)\). Then, we have \(l=i(n+j)=i(n)+j\), for all \(n\ge0\). In view of Step 8, we obtain \(x_{n}-T_{l}x_{n}=x_{n}-T_{i(n+j)}x_{n}=x_{n}-T_{i(n)+j}x_{n}\to0\) as \(n\to\infty\).
Step 10. Show that \(x_{n}\to p\), where \(p=P_{F}x_{0}\).
For \(m>n\), by the definition of \(Q_{n}\), we see that \(Q_{m}\subset Q_{n}\). Noting that \(x_{m}=P_{Q_{m}}x_{0}\) and \(x_{n}=P_{Q_{n}}x_{0}\), by Lemma 2.3, we conclude that
$$\|x_{m}-x_{n}\|^{2}\le\|x_{m}-x_{0} \|^{2}-\|x_{n}-x_{0}\|^{2}. $$
In view of Step 4, we deduce that \(x_{m}-x_{n}\to0\) as \(m,n\to\infty\), that is, \(\{x_{n}\}\) is Cauchy. Since H is complete and C is closed, we can assume that \(x_{n}\to p\in C\) as \(n\to\infty\). It follows from Step 9 that \(p\in F\). From Step 2, we know that \(F\subset Q_{n}\), for all \(n\ge0\). Hence, for arbitrary \(z\in F\), we have
$$\langle z-x_{n}, x_{0}-x_{n}\rangle\le0. $$
This leads to
$$\langle z-p,x_{0}-p\rangle\le0, $$
for all \(z\in F\). By Lemma 2.1, we conclude that \(p=P_{F}x_{0}\). This completes the proof. □
In contrast to [15], the main difference with the paper [15] consists in the fact that the sequence \(\{y_{n}\}\) in algorithm (3.1) is globally unique for the whole family of \(\{T_{i}\}_{i=0}^{N-1}\).
In the proof of Theorem 3.1, the third step is really key. The assumption of quasi-asymptotic pseudocontractiveness of the mappings \(\{T_{i}\}_{i=0}^{N-1}\) has been used.
Next, we consider a simpler algorithm for a finite family of uniformly \(L_{i}\)-Lipschitzian and quasi-asymptotically pseudocontractive mappings in real Hilbert spaces.
Let C be a bounded, closed, and convex subset of a real Hilbert space H. Let \(\{T_{i}\}_{i=0}^{N-1}:C\to C\) be a finite family of uniformly \(L_{i}\)-Lipschitzian and quasi-asymptotically pseudocontractive mappings such that \(F=\bigcap_{i=0}^{N-1}\operatorname{Fix}(T_{i})\ne\emptyset\). Assume the control sequence \(\{\alpha_{n}\}\) is chosen so that \(\alpha_{n}\in[a,b]\) for some \(a,b\in(0,\frac{1}{1+L})\), where \(L=\max\{L_{i}:0\le i\le N-1\}\). Let a sequence \(\{x_{n}\}\) be generated in the following manner:
$$ \left \{ \textstyle\begin{array}{l} x_{0}\in H \quad \textit{chosen arbitrarily}, \\ C_{1}=C, \qquad x_{1}=P_{C_{1}}x_{0}, \\ y_{n}=(1-\alpha_{n})x_{n}+\alpha_{n} T_{i(n)}^{h(n)}x_{n}, \quad n\ge1, \\ C_{n+1}=\{z\in C_{n}:\alpha_{n}[1-(1+L)\alpha_{n}]\|x_{n}-T_{i(n)}^{h(n)}x_{n}\| ^{2} \\ \hphantom{C_{n+1}={}}\leq\langle x_{n}-z, (y_{n}-T_{i(n)}^{h(n)}y_{n})\rangle +(k_{h(n)}-1)(\operatorname{diam} C)^{2}\},\quad n\ge0, \\ x_{n+1}=P_{C_{n+1}}x_{0},\quad n\ge1, \end{array}\displaystyle \right . $$
Following the proof lines of Theorem 3.1, we can show the following.
(1) F is a nonempty closed and convex subset of C, and hence \(P_{F}x_{0}\) is well defined for every \(x_{0}\in H\).
(2) \(C_{n}\) is closed convex and \(F\subset C_{n}\) for every \(n\ge 1\).
In fact, for \(n=1\), \(C_{1}=C\) is closed convex. Assume that \(C_{n}\) is closed convex for some \(n\ge1\); from the definition \(C_{n+1}\), we know that \(C_{n+1}\) is also closed convex for the same \(n\ge1\), and hence \(C_{n}\) is closed convex for every \(n\ge1\). For \(n=1\), \(F\subset C_{1}=C\). Assume that \(F\subset C_{n}\) for some \(n\ge 1\); from the induction assumption, (3.3), and the definition of \(C_{n+1}\), we conclude that \(F\subset C_{n+1}\), and hence \(F\subset C_{n}\), for all \(n\ge1\).
(3) \(\lim_{n\to\infty}\|x_{n}-x_{0}\|\) exists.
In view of (3.5), we have \(x_{n}=P_{C_{n}}x_{0}\). Since \(C_{n+1}\subset C_{n}\) and \(x_{n+1}\in C_{n+1}\), for all \(n\ge1\), we have
$$ \|x_{n}-x_{0}\|\le\|x_{n+1}-x_{0}\|, \quad \forall n\ge1. $$
On the other hand, as \(F\subset C_{n}\) by (2), it follows that
$$ \|x_{n}-x_{0}\|\le\|z-x_{0}\|, \quad \forall z \in F, \forall n\ge1. $$
Combining (3.6) and (3.7), we see that \(\lim_{n\to\infty}\|x_{n}-x_{0}\|\) exists.
(4) \(\{x_{n}\}\) is a Cauchy sequence in C.
For \(m>n\ge1\), we have \(x_{m}=P_{C_{m}}x_{0}\in C_{m}\subset C_{n}\). By Lemma 2.2, we have
$$ \|x_{m}-x_{n}\|^{2}\le\|x_{m}-x_{0} \|^{2}-\|x_{n}-x_{0}\|^{2}. $$
Letting \(m,n\to\infty\) and taking the limit in (3.8), we get \(x_{m}-x_{n}\to0\) as \(m,n\to\infty\), which proves that \(\{x_{n}\}\) is Cauchy. We assume that \(x_{n}\to p\in C\). The remainder of the proof follows exactly from Steps 5-10 in Theorem 3.1. This completes the proof. □
Algorithm (3.5) is simpler than algorithm (3.1). Also, the sequence \(\{y_{n}\}\) in algorithm (3.5) is globally unique for the whole family of \(\{T_{i}\}_{i=0}^{N-1}\).
Finally, we present another kind of iterative algorithm for a finite family of quasi-asymptotically pseudocontractive mappings in real Hilbert spaces.
Let C be a bounded and closed convex subset of a real Hilbert space H. Let \(\{T_{i}\}_{i=0}^{N-1}:C\to C\) be a finite family of uniformly \(L_{i}\)-Lipschitzian and quasi-asymptotically pseudocontractive mappings such that \(F=\bigcap_{i=0}^{N-1}\operatorname{Fix}(T_{i})\ne\emptyset\). Assume the control sequence \(\{\alpha_{n}\}\) is chosen so that \(\alpha_{n}\in[a,b]\) for some \(a,b\in(0,\frac{1}{1+L})\), where \(L=\max\{L_{i}:0\le i\le N-1\}\). Let a sequence \(\{x_{n}\}\) be generated in the following manner:
$$ \left \{ \textstyle\begin{array}{l} x_{0}\in H \quad \textit{chosen arbitrarily}, \\ C_{1}=C,\qquad x_{1}=P_{C_{1}}x_{0}, \\ y_{n,i}=(1-\alpha_{n})x_{n}+\alpha_{n} T_{i}^{n}x_{n},\quad n\ge1, i\in I, \\ C_{n+1}=\{z\in C_{n}:\alpha_{n}[1-(1+L)\alpha_{n}]\sum_{i=0}^{N-1}\| x_{n}-T_{i}^{n}x_{n}\|^{2} \\ \hphantom{C_{n+1}={}}\leq\langle x_{n}-z, \sum_{i=0}^{N-1}(y_{n,i}-T_{i}^{n}y_{n,i})\rangle+\sum_{i=0}^{N-1}(k_{n,i}-1)(\operatorname{diam} C)^{2}\}, \quad n\ge0, \\ x_{n+1}=P_{C_{n+1}}x_{0},\quad n\ge1, \end{array}\displaystyle \right . $$
where \(k_{n,i}\) are asymptotic sequences for \(\{T_{i}\}_{i=0}^{N-1}\). Then the sequence \(\{x_{n}\}\) generated by (3.9) converges strongly to \(P_{F}x_{0}\).
As shown in Theorem 3.2, we easily show that \(P_{F}x_{0}\) is well defined for every \(x_{0}\in H\), \(C_{n}\) is closed convex and \(F\subset C_{n}\) for every \(n\ge1\). Thus, \(\{x_{n}\}\) is well defined, for all \(n\ge1\). Further, \(\{x_{n}\}\) is a Cauchy sequence in C. Therefore, \(x_{n}\to p\in C\) as \(n\to\infty\). In particular, we have \(x_{n+1}-x_{n}\to0\) as \(n\to\infty\). Since \(0< a\le\alpha_{n}\le b<\frac{1}{1+L}\), \(x_{n+1}\in C_{n+1}\), \(\{\sum_{i=0}^{N-1}\|(I-T_{i}^{n})y_{n,i}\|\}\) is bounded and \(\sum_{i=0}^{N-1}(k_{n,i}-1)\to0\), from the definition of \(C_{n+1}\), we see that \(x_{n}-T_{i}^{n}x_{n}\to0\) as \(n\to\infty\), for all \(i\in I\). Observe that
$$\begin{aligned} \Vert x_{n+1}-T_{i}x_{n+1}\Vert \le&\bigl\Vert x_{n+1}-T_{i}^{n+1}x_{n+1}\bigr\Vert +\bigl\Vert T_{i}^{n+1}x_{n+1}-T_{i}^{n+1}x_{n} \bigr\Vert \\ &{} +\bigl\Vert T_{i}^{n+1}x_{n}-T_{i}x_{n} \bigr\Vert +\Vert T_{i}x_{n}-T_{i}x_{n+1} \Vert \\ \le&\bigl\Vert x_{n+1}-T_{i}^{n+1}x_{n+1} \bigr\Vert +2L\Vert x_{n+1}-x_{n}\Vert +L\bigl\Vert x_{n}-T^{n}x_{n}\bigr\Vert , \end{aligned}$$
so that \(x_{n}-T_{i}x_{n}\to0\) as \(n\to\infty\), for all \(i\in I\). Since \(x_{n}\to p\), we have \(p=T_{i}p\), for all \(i\in I\) and hence \(p\in F\). The remainder of the proof follows exactly from Step 10 of Theorem 3.1. This completes the proof. □
Algorithm (3.9) used in Theorem 3.3 is different from the ones existing in literature.
It is interesting to extend the algorithms of this paper to an infinite family of quasi-asymptotically pseudocontractive mappings.
The work related to other iterative methods for asymptotically pseudocontractive mappings can be found in [17–21].
4 Numerical experiments
In this section, we provide some numerical experiments to show our algorithms are effective. In our numerical experiments, we consider the case of \(N=2\). We take \(T_{0}=I\), the identity mapping on \(\mathbb{R}\), and use the example given in Remark 1.3 as \(T_{1}\). For such a family \(\{ {T_{i} } \}_{i = 0}^{1} \), we have \(L_{0} = 1\) and \(L_{1} = \frac{2 + 4\pi}{3}\), therefore, \(L = \frac{2 + 4\pi}{3}\). It is easy to see that \(k_{h(n)} = 1\), for all \(n \ge0\). Moreover, we know also that \(F=\bigcap_{i=0}^{1}\operatorname{Fix}(T_{i})=\{0\}\ne\emptyset\). We take \({\alpha_{n}} = \frac{1}{{n + 56}} + \frac{1}{{2 + L}}\), for all \(n\ge0\). For algorithms (3.1), (3.5), and (3.9), each of them iterates 70 steps.
Firstly, for algorithm (3.1), we choose \(x_{0} \in[0,2\pi]\) arbitrarily, then for 51 different initial values, we can see all the results are convergent in Figure 1.
The iterative curves of algorithm ( 3.1 ) under different initial value.
Secondly, for algorithm (3.5), we choose \(x_{0} \in[ - 2,10]\) arbitrarily, then for 61 different initial values, we can see all the results are convergent in Figure 2.
The iterative curves of algorithm ( 3.5 ) under different initial values.
Finally, for algorithm (3.9), we also choose \(x_{0} \in[ - 2,10]\) arbitrarily, then for 61 different initial values, we can also see all the results are convergent in Figure 3.
In addition, for Figures 1, 2, and 3, we can also find that the algorithms need more iterative steps with the nonnegative initial value becoming larger in the majority situation.
This work contains our dedicated study aimed to develop and complement hybrid projection algorithms for finding the common fixed points of a finite family of quasi-asymptotically pseudocontractive mappings in Hilbert spaces. We introduced three kinds of new hybrid projection algorithms for this class of problems, and we have proven their strong convergence. Numerical examples have been given to illustrate the effectiveness of the proposed algorithms. The results presented in the paper are a generalization and complement of the well-known ones existing in the literature.
This research was supported by the National Natural Science Foundation of China (11071053) and Key Project of Science and Research of Hebei University of Economics and Business (2015KYZ03).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
College of Mathematics and Statistics, Hebei University of Economics and Business, Shijiazhuang, 050061, China
Department of Mathematics and Information, Langfang Normal University, Langfang, 065000, China
Postdoctoral Workstation, Naval Equipment Institute, Beijing, 102249, China
Naval Aviation Institution, Huludao, 125001, China
Department of Mathematics, Shijiazhuang Mechanical Engineering College, Shijiazhuang, 050003, China
Goebel, K, Kirk, WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics, vol. 28. Cambridge University Press, Cambridge (1990) View ArticleGoogle Scholar
Chidume, CE: Geometric Properties of Banach Spaces and Nonlinear Iterations. Springer, Berlin (2008) Google Scholar
Agarwal, RP, O'Regan, D, Sahu, DR: Fixed Point Theory for Lipschitzian-Type Mappings with Applications. Springer, Berlin (2009) Google Scholar
Goebel, K, Kirk, WA: A fixed point theorem for asymptotically nonexpansive mappings. Proc. Am. Math. Soc. 35, 171-174 (1972) MathSciNetView ArticleGoogle Scholar
Schu, J: Iteration construction of fixed points of asymptotically nonexpansive mappings. J. Math. Anal. Appl. 158, 407-413 (1991) MathSciNetView ArticleGoogle Scholar
Liu, QH: Convergence theorems of the sequence of iterates for asymptotically demicontractive and hemicontractive mappings. Nonlinear Anal. 26, 1835-1842 (1996) MathSciNetView ArticleGoogle Scholar
Cho, YJ, Zhou, H, Guo, G: Weak and strong convergence theorems for three-step iterations with errors for asymptotically nonexpansive mappings. Comput. Math. Appl. 47, 707-717 (2004) MathSciNetView ArticleGoogle Scholar
Kim, TH, Xu, HK: Strong convergence of modified Mann iterations for asymptotically nonexpansive mappings and semigroups. Nonlinear Anal. 64, 1140-1152 (2006) MathSciNetView ArticleGoogle Scholar
Chidume, CE, Ali, B: Weak and strong convergence theorems for finite families of asymptotically nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 330, 377-387 (2007) MathSciNetView ArticleGoogle Scholar
Bauschke, HH, Combettes, PL: A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert space. Math. Oper. Res. 26, 248-264 (2001) MathSciNetView ArticleGoogle Scholar
Nakajo, K, Takahashi, W: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 279, 372-379 (2003) MathSciNetView ArticleGoogle Scholar
Marino, G, Xu, HK: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 329, 336-349 (2007) MathSciNetView ArticleGoogle Scholar
Zhou, HY: Convergence theorems of fixed points for Lipschitz pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 343, 546-556 (2008) MathSciNetView ArticleGoogle Scholar
Zhou, HY: Demiclosedness principle with applications for asymptotically pseudo-contractions in Hilbert spaces. Nonlinear Anal. 70, 3140-3145 (2009) MathSciNetView ArticleGoogle Scholar
Zhou, HY, Su, YF: Strong convergence theorems for a family of quasi-asymptotic pseudo-contractions in Hilbert spaces. Nonlinear Anal. 70, 4047-4052 (2009) MathSciNetView ArticleGoogle Scholar
Cassini, E, Maluta, E: Fixed points for uniformly Lipschitzian mappings in spaces with uniformly normal structure. Nonlinear Anal. 9, 103-108 (1985) MathSciNetView ArticleGoogle Scholar
Chidume, CE, Zegeye, H: Approximate fixed point sequences and convergence theorems for asymptotically pseudocontractive mappings. J. Math. Anal. Appl. 278, 354-366 (2003) MathSciNetView ArticleGoogle Scholar
Zhou, HY: Weak convergence theorems for strict pseudo-contractions in Banach spaces. Acta Math. Sin. Engl. Ser. 30, 755-766 (2014) MathSciNetView ArticleGoogle Scholar
Zhou, HY: A new iteration method for variational inequalities on the set of common fixed points for a finite family of quasi-pseudo-contractions in Hilbert spaces. J. Inequal. Appl. 2014, 218 (2014) View ArticleGoogle Scholar
Zegeye, H, Shahzad, N: An algorithm for a common fixed point of a family of pseudocontractive mappings. Fixed Point Theory Appl. 2013, 234 (2013) View ArticleGoogle Scholar
Yao, YH, Postolache, M, Kang, SM: Strong convergence of approximated iterations for asymptotically pseudocontractive mappings. Fixed Point Theory Appl. 2014, 100 (2014) MathSciNetView ArticleGoogle Scholar
|
CommonCrawl
|
Beyond comparisons of means: understanding changes in gene expression at the single-cell level
Catalina A. Vallejos1,2,
Sylvia Richardson1 &
John C. Marioni2,3
Genome Biology volume 17, Article number: 70 (2016) Cite this article
Traditional differential expression tools are limited to detecting changes in overall expression, and fail to uncover the rich information provided by single-cell level data sets. We present a Bayesian hierarchical model that builds upon BASiCS to study changes that lie beyond comparisons of means, incorporating built-in normalization and quantifying technical artifacts by borrowing information from spike-in genes. Using a probabilistic approach, we highlight genes undergoing changes in cell-to-cell heterogeneity but whose overall expression remains unchanged. Control experiments validate our method's performance and a case study suggests that novel biological insights can be revealed. Our method is implemented in R and available at https://github.com/catavallejos/BASiCS.
The transcriptomics revolution – moving from bulk samples to single-cell (SC) resolution – provides novel insights into a tissue's function and regulation. In particular, single-cell RNA sequencing (scRNA-seq) has led to the identification of novel sub-populations of cells in multiple contexts [1–3]. However, compared to bulk RNA-seq, a critical aspect of scRNA-seq data sets is an increased cell-to-cell variability among the expression counts. Part of this variance inflation is related to biological differences in the expression profiles of the cells (e.g., changes in mRNA content and the existence of cell sub-populations or transient states), which disappears when measuring bulk gene expression as an average across thousands of cells. Nonetheless, this increase in variability is also due in part to technical noise arising from the manipulation of small amounts of starting material, which is reflected in weak correlations between technical replicates [4]. Such technical artifacts are confounded with genuine transcriptional heterogeneity and can mask the biological signal.
Among others, one objective of RNA-seq experiments is to characterize transcriptional differences between pre-specified populations of cells (given by experimental conditions or cell types). This is a key step for understanding a cell's fate and functionality. In the context of bulk RNA-seq, two popular methods for this purpose are edgeR [5] and DESeq2 [6]. However, these are not designed to capture features that are specific to scRNA-seq data sets. In contrast, SCDE [7] has been specifically developed to deal with scRNA-seq data sets. All of these methods target the detection of differentially expressed genes based on log-fold changes (LFCs) of overall expression between the populations. However, restricting the analysis to changes in overall expression does not take full advantage of the rich information provided by scRNA-seq. In particular – and unlike bulk RNA-seq – scRNA-seq can also reveal information about cell-to-cell expression heterogeneity. Critically, traditional approaches will fail to highlight genes whose expression is less stable in any given population but whose overall expression remains unchanged between populations.
More flexible approaches, capable of studying changes that lie beyond comparisons of means, are required to characterize differences between distinct populations of cells better. In this article, we develop a quantitative method to fill this gap, allowing the identification of genes whose cell-to-cell heterogeneity pattern changes between pre-specified populations of cells. In particular, genes with less variation in expression levels within a specific population of cells might be under more stringent regulatory control. Additionally, genes having increased biological variability in a given population of cells could suggest the existence of additional sub-groups within the analyzed populations. To the best of our knowledge, this is the first probabilistic tool developed for this purpose in the context of scRNA-seq analyses. We demonstrate the performance of our method using control experiments and by comparing expression patterns of mouse embryonic stem cells (mESCs) between different stages of the cell cycle.
A statistical model to detect changes in expression patterns for scRNA-seq data sets
We propose a statistical approach to compare expression patterns between P pre-specified populations of cells. It builds upon BASiCS [8], a Bayesian model for the analysis of scRNA-seq data. As in traditional differential expression analyses, for any given gene i, changes in overall expression are identified by comparing population-specific expression rates \(\mu ^{(p)}_{i}\) (p=1,…,P), defined as the relative abundance of gene i within the cells in population p. However, the main focus of our approach is to assess differences in biological cell-to-cell heterogeneity between the populations. These are quantified through changes in population- and gene-specific biological over-dispersion parameters \(\delta ^{(p)}_{i}\) (p=1,…,P), designed to capture residual variance inflation (after normalization and technical noise removal) while attenuating the well-known confounding relationship between mean and variance in count-based data sets [9] (a similar concept was defined in the context of bulk RNA-seq by [10], using the term biological coefficient of variation). Importantly, such changes cannot be uncovered by standard differential expression methods, which are restricted to changes in overall expression. Hence, our approach provides novel biological insights by highlighting genes that undergo changes in cell-to-cell heterogeneity between the populations despite the overall expression level being preserved.
To disentangle technical from biological effects, we exploit spike-in genes that are added to the lysis buffer and thence theoretically present at the same amount in every cell (e.g., the 92 ERCC molecules developed by the External RNA Control Consortium [11]). These provide an internal control or gold standard to estimate the strength of technical variability and to aid normalization. In particular, these control genes allow inference on cell-to-cell differences in mRNA content, providing additional information about the analyzed populations of cells [12]. These are quantified through changes between cell-specific normalizing constants \(\phi ^{(p)}_{j}\) (for the jth cell within the pth population). Critically, as described in Additional file 1: Note S1 and Fig. S1, global shifts in mRNA content between populations do not induce spurious differences when comparing gene-specific parameters (provided the offset correction described in 'Methods' is applied).
A graphical representation of our model is displayed in Fig. 1 (based on a two-group comparison). It illustrates how our method borrows information across all cells and genes (biological transcripts and spike-in genes) to perform inference. Posterior inference is implemented via a Markov chain Monte Carlo (MCMC) algorithm, generating draws from the posterior distribution of all model parameters. Post-processing of these draws allows quantification of supporting evidence regarding changes in expression patterns (mean and over-dispersion). These are measured using a probabilistic approach based on tail posterior probabilities associated with decision rules, where a probability cut-off is calibrated through the expected false discovery rate (EFDR) [13].
Graphical representation of our model for detecting changes in expression patterns (mean and over-dispersion) based on comparing two predefined population of cells. The diagram considers expression counts of two genes (i is biological and i ′ is technical) and two cells (j p and \(j^{\prime }_{p}\)) from each population p=1,2. Observed expression counts are represented by square nodes. The central rhomboid node denotes the known input number of mRNA molecules for a technical gene i ′, which is assumed to be constant across all cells. The remaining circular nodes represent unknown elements, using black to denote random effects and red to denote model parameters (fixed effects) that lie on the top of the model's hierarchy. Here, \(\phi ^{(p)}_{j}\)'s and \(s^{(p)}_{j}\)'s act as normalizing constants that are cell-specific and θ p 's are global over-dispersion parameters capturing technical variability, which affect the expression counts of all genes and cells within each population. In this diagram, \(\nu ^{(p)}_{j}\)'s and \(\rho ^{(p)}_{ij}\)'s represent random effects related to technical and biological variability components, whose variability is controlled by θ p 's and \(\delta ^{(p)}_{i}\)'s, respectively (see Additional file 1: Note 6.1). Finally, \(\mu ^{(p)}_{i}\)'s and \(\delta ^{(p)}_{i}\)'s, respectively, measure the overall expression of a gene i and its residual biological cell-to-cell over-dispersion (after normalization, technical noise removal and adjustment for overall expression) within each population. Colored areas highlight elements that are shared within a gene and/or cell. The latter emphasizes how our model borrows information across all cells to estimate parameters that are gene-specific and all genes to estimate parameters that are cell-specific. More details regarding the model setup can be found in the 'Methods' section of this article
Our strategy is flexible and can be combined with a variety of decision rules, which can be altered to reflect the biological question of interest. For example, if the aim is to detect genes whose overall expression changes between populations p and p ′, a natural decision rule is \(|\log (\mu ^{(p)}_{i}/\mu ^{(p')}_{i})| > \tau _{0}\), where τ 0≥0 is an a priori chosen biologically significant threshold for LFCs in overall expression, to avoid highlighting genes with small changes in expression that are likely to be less biologically relevant [6, 14]. Alternatively, changes in biological cell-to-cell heterogeneity can be assessed using \(|\log (\delta ^{(p)}_{i}/\delta ^{(p')}_{i})| > \omega _{0}\), for a given minimum tolerance threshold ω 0≥0. This is the main focus of this article. As a default option, we suggest setting τ 0=ω 0=0.4, which roughly coincides with a 50 % increase in overall expression or over-dispersion in whichever group of cells has the largest value (this choice is also supported by the control experiments shown in this article). To improve the interpretation of the genes highlighted by our method, these decision rules can also be complemented by, e.g., requiring a minimum number of cells where the expression of a gene is detected.
More details regarding the model setup and the implementation of posterior inference can be found in 'Methods'.
Alternative approaches for identifying changes in mean expression
To date, most differential expression analyses of scRNA-seq data sets have borrowed methodology from bulk RNA-seq literature (e.g., DESeq2 [6] and edgeR [5]). However, such methods are not designed to capture features that are specific to SC-level experiments (e.g., the increased levels of technical noise). Instead, BASiCS, SCDE [7] and MAST [15] have been specifically developed with scRNA-seq data sets in mind. SCDE is designed to detect changes in mean expression while accounting for dropout events, where the expression of a gene is undetected in some cells due to biological variability or technical artifacts. For this purpose, SCDE employs a two-component mixture model where negative binomial and low-magnitude Poisson components model amplified genes and the background signal related to dropout events, respectively. MAST is designed to capture more complex changes in expression, using a hurdle model to study both changes in the proportion of cells where a gene is expressed above background and in the positive expression mean, defined as a conditional value – given than the gene is expressed above background levels. Additionally, MAST uses the fraction of genes that are detectably expressed in each cell (the cellular detection rate or CDR) as a proxy to quantify technical and biological artifacts (e.g., cell volume). SCDE and MAST rely on pre-normalized expression counts. Moreover, unlike BASiCS, SCDE and MAST use a definition of changes in expression mean that is conceptually different to what would be obtained based on a bulk population (which would consider all cells within a group, regardless of whether a gene is expressed above background or not).
The performance of these methods is compared in Additional file 1: Note S2 using real and simulated data sets. While control of the false discovery rate (FDR) is not well calibrated for BASiCS when setting τ 0=0, this control is substantially improved when increasing the LFC threshold to τ 0=0.4 – which is the default option we recommend (Additional file 1: Table S1). Not surprisingly, the higher FDR rates of BASiCS lead to higher sensitivity. In fact, our simulations suggest that BASiCS can correctly identify more genes that are differentially expressed than other methods. While this conclusion is based on synthetic data, it is also supported by the analysis of the cell-cycle data set described in [16] (see Additional file 1: Fig. S2), where we observe that SCDE and MAST fail to highlight a large number of genes for which a visual inspection suggests clear changes in overall expression (Additional file 1: Figs. S3 and S4). We hypothesize that this is partly due to conceptual differences in the definition of overall expression and, for MAST, the use of CDR as a covariate.
Alternative approaches for identifying changes in heterogeneity of expression
To the best of our knowledge, BASiCS is the first probabilistic tool to quantify gene-specific changes in the variability of expression between populations of cells. Instead, previous literature has focused on comparisons based on the coefficient of variation (CV), calculated from pre-normalized expression counts (e.g., [17]), for which no quantitative measure of differential variability has been obtained. More recently, [9] proposed a mean-corrected measure of variability to avoid the confounding effect between mean expression and CV. Nonetheless, the latter was designed to compare expression patterns for sets of genes, rather than for individual genes.
Not surprisingly, our analysis suggests that a quantification of technical variability is critical when comparing variability estimates between cell populations (Additional file 1: Note S3 and Fig. S5). In particular, comparisons based on CV estimates can mask the biological signal if the strength of technical variability varies between populations.
A control experiment: comparing single cells vs pool-and-split samples
To demonstrate the efficacy of our method, we use the control experiment described in [17], where single mESCs are compared against pool-and-split (P&S) samples, consisting of pooled RNA from thousands of mESCs split into SC equivalent volumes. Such a controlled setting provides a situation where substantial changes in overall expression are not expected as, on average, the overall expression of SCs should match the levels measured in P&S samples. Additionally, the design of P&S samples should remove biological variation, leading to a homogeneous set of samples. Hence, P&S samples are expected to show a genuine reduction in biological cell-to-cell heterogeneity compared to SCs.
Here, we display the analysis of samples cultured in a 2i media. Hyper-parameter values for \(\mu _{i}^{(p)}\)'s and \(\delta _{i}^{(p)}\)'s were set to \(a^{2}_{\mu } = a^{2}_{\delta } = 0.5\), so that extreme LFC estimates are shrunk towards (−3,3) (see 'Methods'). However, varying \(a^{2}_{\mu }\) and \(a^{2}_{\delta }\) leads to almost identical results (not shown), suggesting that posterior inference is in fact dominated by the data. In these data, expression counts correspond to the number of molecules mapping to each gene within each cell. This is achieved by using unique molecular identifiers (UMIs), which remove amplification biases and reduce sources of technical variation [18]. Our analysis includes 74 SCs and 76 P&S samples (same inclusion criteria as in [17]) and expression counts for 9378 genes (9343 biological and 35 ERCC spikes) defined as those with at least 50 detected molecules in total across all cells. The R code used to perform this analysis is provided in Additional file 2.
To account for potential batch effects, we allowed different levels of technical variability to be estimated in each batch (see Additional file 1: Note S4 and Fig. S6). Moreover, we also performed an independent analysis of each batch of cells. As seen in Additional file 1: Fig. S7, the results based on the full data are roughly replicated in each batch, suggesting that our strategy is able to remove potential artifacts related to this batch effect.
As expected, our method does not reveal major changes in overall expression between SCs and P&S samples as the distribution of LFC estimates is roughly symmetric with respect to the origin (see Fig. 2 a) and the majority of genes are not classified as differentially expressed at 5 % EFDR (see Fig. 3 b). However, this analysis suggests that setting the minimum LFC tolerance threshold τ 0 equal to 0 is too liberal as small LFCs are associated with high posterior probabilities of changes in expression (see Fig. 3 a) and the number of differentially expressed genes is inflated (see Fig. 3 b). In fact, counter-intuitively, 4710 genes (≈50 % of all analyzed genes) are highlighted to have a change in overall expression when using τ 0=0. This is partially explained by the high nominal FDR rates displayed in Additional file 1: Note S2.1 where, for τ 0=0, FDR is poorly calibrated when simulating under the null model. In addition, we hypothesize this heavy inflation is also due to small but statistically significant differences in expression that are not biologically meaningful. In fact, the number of genes whose overall expression changes is reduced to 559 (≈6 % of all analyzed genes) when setting τ 0=0.4. As discussed earlier, this minimum threshold roughly coincides with a 50 % increase in overall expression and with the 90th percentile of empirical LFC estimates when simulating under the null model (no changes in expression). Posterior inference regarding biological over-dispersion is consistent with the experimental design, where the P&S samples are expected to have more homogeneous expression patterns. In fact, as shown in Fig. 2 b, the distribution of estimated LFCs in biological over-dispersion is skewed towards positive values (higher biological over-dispersion in SCs). This is also supported by the results shown in Fig. 3 b, where slightly more than 2000 genes exhibit increased biological over-dispersion in SCs and almost no genes (≈60 genes) are highlighted to have higher biological over-dispersion in the P&S samples (EFDR = 5 %). In this case, the choice of ω 0 is less critical (within the range explored here). This is illustrated by the left panels in Fig. 3 a, where tail posterior probabilities exceeding the cut-off defined by EFDR = 5 % correspond to similar ranges of LFC estimates.
Estimated LFCs in expression (mean and over-dispersion) when comparing SCs vs P&S samples (2i serum culture). Posterior medians of LFC in (a) overall expression log(μ i(SC)/μ i(P&S)) and (b) biological over-dispersion log(δ i(SC)/δ i(P&S)) against the average between estimates of overall expression rates for SCs and P&S samples. Average values are defined as a weighted average between groups, with weights given by the number of samples within each group of cells. As expected, our analysis does not reveal major changes in expression levels between SC and P&S samples. In fact, the distribution of estimated LFCs in overall expression is roughly symmetric with respect to the origin. In contrast, we infer a substantial decrease in biological over-dispersion in the P&S samples. This is reflected by a skewed distribution of estimated LFCs in biological over-dispersion towards positive values. LFC log-fold change, P&S pool-and-split, SC single cell
Summary of changes in expression patterns (mean and over-dispersion) for SCs vs P&S samples (EFDR = 5 %). a Volcano plots showing posterior medians of LFCs against estimated tail posterior probabilities. Left panels relate to the test where we assess if the absolute LFC in overall expression between SCs and P&S samples exceeds a minimum threshold τ 0. Estimates for LFCs in overall expression are truncated to the range (−1.5,1.5). Pink and green dots represent genes highlighted to have higher overall expression in the SC and P&S samples, respectively. Right panels relate to the test where we assess if the absolute LFC in biological over-dispersion between SC and P&S samples exceeds a minimum threshold ω 0. In all cases, horizontal dashed lines are located at probability cut-offs defined by EFDR = 5 %. Pink and green dots represent genes highlighted to have higher biological over-dispersion in the SC and P&S samples, respectively. b Bins in the horizontal axis summarize changes in overall expression between the groups. We use SC+ and P&S+ to denote that higher overall expression was detected in SC and P&S samples, respectively [the central group of bars (No diff.) corresponds to those genes where no significant differences were found]. Colored bars within each group summarize changes in biological over-dispersion between the groups. We use pink and green bars to denote higher biological over-dispersion in SC and P&S+ samples, respectively (and gray to denote no significant differences were found). The numbers of genes are displayed in log-scale. LFC log-fold change, P&S pool-and-split, SC single cell
mESCs across different cell-cycle stages
Our second example shows the analysis of the mESC data set presented in [16], which contains cells where the cell-cycle phase is known (G1, S and G2M). After applying the same quality control criteria as in [16], our analysis considers 182 cells (59, 58 and 65 cells in stages G1, S and G2M, respectively). To remove genes with consistently low expression across all cells, we excluded those genes with less than 20 reads per million (RPM), on average, across all cells. After this filter, 5,687 genes remain (including 5,634 intrinsic transcripts, and 53 ERCC spike-in genes). The R code used to perform this analysis is provided in Additional file 3.
As a proof of concept, to demonstrate the efficacy of our approach under a negative control, we performed permutation experiments, where cell labels were randomly permuted into three groups (containing 60, 60 and 62 samples, respectively). In this case, our method correctly infers that mRNA content as well as gene expression profiles do not vary across groups of randomly permuted cells (Fig. 4).
Posterior estimates of model parameters based on random permutations of the mESC cell-cycle data set. For a single permuted data set: a Empirical distribution of posterior medians for mRNA content normalizing constants \(\phi _{j_{p}}\) across all cells. b Empirical distribution of posterior medians for gene-specific expression rates μ ip across all genes. c Empirical distribution of posterior medians for gene-specific biological over-dispersion parameters δ ip across all genes. d As an average across ten random permutations. Upper diagonal panels compare estimates for gene-specific expression rates μ ip between groups of cells. Lower diagonal panels compare gene-specific biological over-dispersion parameters δ ip between groups of cells
As cells progress through the cell cycle, cellular mRNA content increases. In particular, our model infers that mRNA content is roughly doubled when comparing cells in G1 vs G2M, which is consistent with the duplication of genetic material prior to cell division (Fig. 5 a). Our analysis suggests there are no major shifts in expression levels between cell-cycle stages (Fig. 5 b and upper triangular panels in Fig. 5 d). Nonetheless, a small number of genes are identified as displaying changes in overall expression between cell-cycle phases at 5 % EFDR for τ 0=0.4 (Fig. 6). To validate our results, we performed gene ontology (GO) enrichment analysis within those genes classified as differentially expressed between cell-cycle phases (see Additional file 3). Not surprisingly, we found an enrichment of mitotic genes among the 545 genes classified as differentially expressed between G1 and G2M cells. In addition, the 209 differentially expressed genes between S and G2M are enriched for regulators of cytokinesis, which is the final stage of the cell cycle where a progenitor cell divides into two daughter cells [19].
Posterior estimates of model parameters for mESCs across different cell-cycle phases. a Empirical distribution of posterior medians for mRNA content normalizing constants \(\phi ^{(p)}_{j}\) across all cells. b Empirical distribution of posterior medians for gene-specific expression rates \(\mu ^{(p)}_{i}\) across all genes. c Empirical distribution of posterior medians for gene-specific biological over-dispersion parameters \(\delta ^{(p)}_{i}\) across all genes. d Upper diagonal panels compare estimates for gene-specific expression rates \(\mu ^{(p)}_{i}\) between groups of cells. Lower diagonal panels compare gene-specific biological over-dispersion parameters \(\delta ^{(p)}_{i}\) between groups of cells. While our results suggest there are no major shifts in mean expression between cell-cycle stages, our results suggest a substantial decrease in biological over-dispersion when cells move from G1 to the S phase, followed by a slight increase after the transition from S to the G2M phase (to give a rough quantification of this statement, panel (d) includes the percentage of point estimates that lie on each side of the diagonal line)
Summary of changes in expression patterns (mean and over-dispersion) for the mESC cell-cycle data set (EFDR = 5 %). Bins in the horizontal axis summarize changes in overall expression between each pair of groups. We use G1+, S+ and G2M+ to denote that higher overall expression was detected in cell-cycle phase G1, S and G2M, respectively [the central group of bars (No diff.) corresponds to those genes where no significant differences were found]. Colored bars within each group summarize changes in biological over-dispersion between the groups. We use pink, green and yellow bars to denote higher biological over-dispersion in cell-cycle phases G1, S and G2M, respectively (and gray to denote no significant differences were found). The numbers of genes are displayed in log-scale
Our method suggests a substantial decrease in biological over-dispersion when cells move from G1 to the S phase, followed by a slight increase after the transition from S to the G2M phase (see Fig. 5 c and the lower triangular panels in Fig. 5 d). This is consistent with the findings in [19], where the increased gene expression variability observed in G2M cells is attributed to an unequal distribution of genetic material during cytokinesis and the S phase is shown to have the most stable expression patterns within the cell cycle. Here, we discuss GO enrichment of those genes whose overall expression rate remains constant (EFDR = 5 %, τ 0=0.4) but that exhibit changes in biological over-dispersion between cell-cycle stages (EFDR = 5 %, ω 0=0.4). Critically, these genes will not be highlighted by traditional differential expression tools, which are restricted to differences in overall expression rates. For example, among the genes with higher biological over-dispersion in G1 with respect to the S phase, we found an enrichment of genes related to protein dephosphorylation. These are known regulators of the cell cycle [20]. Moreover, we found that genes with lower biological over-dispersion in G2M cells are enriched for genes related to DNA replication checkpoint regulation (which delays entry into mitosis until DNA synthesis is completed [21]) relative to G1 cells and mitotic cytokinesis when comparing to S cells. Both of these processes are likely to be more tightly regulated in the G2M phase. A full table with GO enrichment analysis of the results described here is provided in Additional file 3.
Our method provides a quantitative tool to study changes in gene expression patterns between pre-specified populations of cells. Unlike traditional differential expression analyses, our model is able to identify changes in expression that are not necessarily reflected by shifts in the mean. This allows a better understanding of the differences between distinct populations of cells. In particular, we focus on the detection of genes whose residual biological heterogeneity (after normalization and technical noise removal) varies between the populations. This is quantified through biological over-dispersion parameters, which capture variance inflation with respect to the level that would be expected in a homogeneous population of cells while attenuating the well-known confounding relationship between mean and variance in count-based data sets. Despite this, several case studies (including the ones displayed in the manuscript and other examples analyzed throughout model development) suggest that – for a homogeneous population of cells – there is a strong relationship between posterior estimates of overall expression parameters \(\mu ^{(p)}_{i}\) and over-dispersion parameters \(\delta ^{(p)}_{i}\) (this is broken when analyzing heterogeneous populations, see Section S8 in [8]). This is illustrated in Additional file 1: Note S5 using the cell-cycle data set analyzed here (Additional file 1: Figs. S8 and S9). Due to this interplay between overall expression and over-dispersion, the interpretation of over-dispersion parameters \(\delta ^{(p)}_{i}\) requires careful consideration. In particular, it is not trivial to interpret differences between \(\delta ^{(p)}_{i}\)'s when the \(\mu ^{(p)}_{i}\)'s also change. As a consequence, our analysis focuses on genes undergoing changes in over-dispersion but whose overall expression remains unchanged. This set of genes can provide novel biological insights that would not be uncovered by traditional differential expression analysis tools.
A decision rule to determine changes in expression patterns is defined through a probabilistic approach based on tail posterior probabilities and calibrated using the EFDR. The performance of our method was demonstrated using a controlled experiment where we recovered the expected behavior of gene expression patterns.
One caveat of our approach is the limited interpretation of the over-dispersion parameter when a gene is not expressed in a given population of cells or when the expression of a gene is only detected in a small proportion of cells (e.g., high expression in a handful of cells but no expression in the remaining cells). These situations will be reflected in low and high estimates of \(\delta _{i}^{(p)}\), respectively. However, the biological relevance of these estimates is not clear. Hence, to improve the interpretation of the genes highlighted by our method, we suggest complementing the decision rules presented here by conditioning the results of the test on a minimum number of cells where the expression of a gene is detected.
Currently, our approach requires predefined populations of cells (e.g., defined by cell types or experimental conditions). However, a large number of scRNA-seq experiments involve a mixed population of cells, where cell types are not known a priori (e.g., [1–3]). In such cases, expression profiles can be used to cluster cells into distinct groups and to characterize markers for such sub-populations. Nonetheless, unknown group structures introduce additional challenges for normalization and quantification of technical variability since, e.g., noise levels can vary substantially between different cell populations. A future extension of our work is to combine the estimation procedure within our model with a clustering step, propagating the uncertainty associated with each of these steps into downstream analysis. In the meantime, if the analyzed population of cells contains a sub-population structure, we advise the user to cluster cells first (e.g., using a rank-based correlation, which is more robust to normalization), thus defining groups of cells that can be used as an input for BASiCS. This step will also aid the interpretation of model parameters that are gene-specific.
Until recently, most scRNA-seq data sets consisted of hundreds (and sometimes thousands) of cells. However, droplet-based approaches [22, 23] have recently allowed parallel sequencing of substantially larger numbers of cells in an effective manner. This brings additional challenges to the statistical analysis of scRNA-seq data sets (e.g., due to the existence of unknown sub-populations, requiring unsupervised approaches). In particular, current protocols do not allow the addition of technical spike-in genes. As a result, the deconvolution of biological and technical artifacts has become less straightforward. Moreover, the increased sample sizes emphasize the need for more computationally efficient approaches that are still able to capture the complex structure embedded within scRNA-seq data sets. To this end, we foresee the use of parallel programming as a tool for reducing computing times. Additionally, we are also exploring approximated posterior inference based, for example, on an integrated nested Laplace approximation [24].
Finally, our approach lies within a generalized linear mixed model framework. Hence, it can be easily extended to include additional information such as covariates (e.g., cell-cycle stage, gene length and GC content) and experimental design (e.g., batch effects) using fixed and/or random effects.
In this article, we introduce a statistical model for identifying genes whose expression patterns change between predefined populations of cells (given by experimental conditions or cell types). Such changes can be reflected via the overall expression level of each gene as well as through changes in cell-to-cell biological heterogeneity. Our method is motivated by features that are specific to scRNA-seq data sets. In this context, it is essential to normalize and remove technical artifacts appropriately from the data before extracting the biological signal. This is particularly critical when there are substantial differences in cellular mRNA content, amplification biases and other sources of technical variation. For this purpose, we exploit technical spike-in genes, which are added at the (theoretically) same quantity to each cell's lysate. A typical example is the set of 92 ERCC molecules developed by the External RNA Control Consortium [11]. Our method builds upon BASiCS [8] and can perform comparisons between multiple populations of cells using a single model. Importantly, our strategy avoids stepwise procedures where data sets are normalized prior to any downstream analysis. This is an advantage over methods using pre-normalized counts, as the normalization step can be distorted by technical artifacts.
We assume that there are P groups of cells to be compared, each containing n p cells (p=1,…,P). Let \(X^{(p)}_{ij}\) be a random variable representing the expression count of a gene i (i=1,…,q) in the jth cell from group p. Without loss of generality, we assume the first q 0 genes are biological and the remaining q−q 0 are technical spikes. Extending the formulation in BASiCS, we assume that
$$ \text{E}\left(X^{(p)}_{ij}\right) = \left\{ \begin{array}{ll} \phi^{(p)}_{j} s^{(p)}_{j} \mu^{(p)}_{i}, & i = 1, \ldots, q_{0}; \\ s^{(p)}_{j} \mu^{(p)}_{i}, & i = q_{0}+1, \ldots, q. \end{array} \right. \text{and} $$
((1))
$$ {\begin{aligned} \text{CV}^{2}\left(X^{(p)}_{ij}\right) = \left\{ \begin{array}{ll} (\phi^{(p)}_{j} s^{(p)}_{j} \mu^{(p)}_{i})^{-1} + \theta_{p} + \delta^{(p)}_{i} (\theta_{p} + 1), & i = 1, \ldots, q_{0}; \\ (s^{(p)}_{j} \mu^{(p)}_{i})^{-1} + \theta_{p}, & i = q_{0}+1, \ldots, q, \end{array} \right. \end{aligned}} $$
with \(\mu ^{(p)}_{i} \equiv \mu _{i}\) for i=q 0+1,…,q and where CV stands for coefficient of variation (i.e., the ratio between standard deviation and mean). These expressions are the result of a Poisson hierarchical structure (see Additional file 1: Note S6.1). Here, \(\phi ^{(p)}_{j}\)'s act as cell-specific normalizing constants (fixed effects), capturing differences in input mRNA content across cells (reflected by the expression counts of intrinsic transcripts only). A second set of normalizing constants, \(s^{(p)}_{j}\)'s, capture cell-specific scale differences affecting the expression counts of all genes (intrinsic and technical). Among others, these differences can relate to sequencing depth, capture efficiency and amplification biases. However, a precise interpretation of the \(s^{(p)}_{j}\)'s varies across experimental protocols, e.g., amplification biases are removed when using UMIs [18]. In addition, θ p 's are global technical noise parameters controlling the over-dispersion (with respect to Poisson sampling) of all genes within group p. The overall expression rate of a gene i in group p is denoted by \(\mu ^{(p)}_{i}\). These are used to quantify changes in the overall expression of a gene across groups. Similarly, the \(\delta ^{(p)}_{i}\)'s capture residual over-dispersion (beyond what is due to technical artifacts) of every gene within each group. These so-called biological over-dispersion parameters relate to heterogeneous expression of a gene across cells. For each group, stable housekeeping-like genes lead to \(\delta ^{(p)}_{i} \approx 0\) (low residual variance in expression across cells) and highly variable genes are linked to large values of \(\delta ^{(p)}_{i}\). A novelty of our approach is the use of \(\delta ^{(p)}_{i}\) to quantify changes in biological over-dispersion. Importantly, this attenuates confounding effects due to changes in overall expression between the groups.
A graphical representation of this model is displayed in Fig. 1. To ensure identifiability of all model parameters, we assume that \(\mu ^{(p)}_{i}\)'s are known for the spike-in genes (and given by the number of spike-in molecules that are added to each well). Additionally, we impose the identifiability restriction
$$ \frac{1}{n_{p}}\sum\limits_{j=1}^{n_{p}} \phi^{(p)}_{j} = 1, \text{for}~ p = 1,\ldots, P. $$
Here, we discuss the priors assigned to parameters that are gene- and group-specific (see Additional file 1: Note S6.2 for the remaining elements of the prior). These are given by
$$ \begin{aligned} \mu^{(p)}_{i} \stackrel{\text{iid}}{\sim} \log\text{N}\left(0, a^{2}_{\mu}\right)~ \text{and}&~ \delta^{(p)}_{i} \stackrel{\text{iid}}{\sim} {\log\text{N}}\left(0, a^{2}_{\delta}\right)~\\&\text{for}~ i = 1, \ldots, q_{0}. \end{aligned} $$
Hereafter, without loss of generality, we simplify our notation to focus on two-group comparisons. This is equivalent to assigning Gaussian prior distributions for LFCs in overall expression (τ i ) or biological over-dispersion (ω i ). In such a case, it follows that
$$ \begin{aligned} \tau_{i} &\equiv \log \left(\mu^{(1)}_{i} \big/ \mu^{(2)}_{i}\right) \sim~\text{N}\left(0, 2 a^{2}_{\mu}\right)~\text{and}~ \\&\!\!\!\!\!\omega_{i} \equiv \log\left(\delta^{(1)}_{i} \big/ \delta^{(2)}_{i}\right) \sim~\text{N}\left(0, 2 a^{2}_{\delta}\right). \end{aligned} $$
Hence, our prior is symmetric, meaning that we do not a priori expect changes in expression to be skewed towards either group of cells. Values for \(a^{2}_{\mu }\) and \(a^{2}_{\delta }\) can be elicited using an expected range of values for LFC in expression and biological over-dispersion, respectively. The latter is particularly useful in situations where a gene is not expressed (or very lowly expressed) in one of the groups, where, e.g., LFCs in overall expression are undefined (the maximum likelihood estimate of τ i would be ±∞, the sign depending on which group expresses gene i). A popular solution to this issue is the addition of pseudo-counts, where an arbitrary number is added to all expression counts (in all genes and cells). This strategy is also adopted in models that are based on log-transformed expression counts (e.g., [15]). While the latter guarantees that τ i is well defined, it leads to artificial estimates for τ i (see Table 1). Instead, our approach exploits an informative prior (indexed by \(a^{2}_{\mu }\)) to shrink extreme estimates of τ i towards an expected range. This strategy leads to a meaningful shrinkage strength, which is based on prior knowledge. Importantly – and unlike the addition of pseudo-counts – our approach is also helpful when comparing biological over-dispersion between the groups. In fact, if a gene i is not expressed in one of the groups, this will lead to a non-finite estimate of ω i (if all expression counts in a group are equal to zero, the corresponding estimate of the biological over-dispersion parameters would be equal to zero). Adding pseudo-counts cannot resolve this issue, but imposing an informative prior for ω i (indexed by \(a^{2}_{\omega }\)) will shrink estimates towards the appropriate range.
Table 1 Synthetic example to illustrate the effect of addition of pseudo-counts over the estimation of LFCs in overall expression
Generally, posterior estimates of τ i and ω i are robust to the choice of \(a^{2}_{\mu }\) and \(a^{2}_{\delta }\), as the data is informative and dominates posterior inference. In fact, these values are only influential when shrinkage is needed, e.g., when there are zero total counts in one of the groups. In such cases, posterior estimates of τ i and ω i are dominated by the prior, yet the method described below still provides a tool to quantify evidence of changes in expression. As a default option, we use \(a^{2}_{\mu } = a^{2}_{\delta } = 0.5\) leading to τ i ,ω i ∼ N(0,1). These default values imply that approximately 99 % of the LFCs in overall expression and over-dispersion are expected a priori to lie in the interval (−3,3). This range seems reasonable in light of the case studies we have explored. If a different range is expected, this can be easily modified by the user by setting different values for \(a^{2}_{\mu }\) and \(a^{2}_{\delta }\).
Posterior samples for all model parameters are generated via an adaptive Metropolis within a Gibbs sampling algorithm [25]. A detailed description of our implementation can be found in Additional file 1: Note S6.3.
Post hoc correction of global shifts in input mRNA content between the groups
The identifiability restriction in Eq. 3 applies only to cells within each group. As a consequence, if they exist, global shifts in cellular mRNA content between groups (e.g., if all mRNAs were present at twice the level in one population related to another) are absorbed by the \(\mu ^{(p)}_{i}\)'s. To assess changes in the relative abundance of a gene, we adopt a two-step strategy where: (1) model parameters are estimated using the identifiability restriction in Eq. 3 and (2) global shifts in endogenous mRNA content are treated as a fixed offset and corrected post hoc. For this purpose, we use the sum of overall expression rates (intrinsic genes only) as a proxy for the total mRNA content within each group. Without loss of generality, we use the first group of cells as a reference population. For each population p (p=1,…,P), we define a population-specific offset effect:
$$ \Lambda_{p} = \left(\sum\limits_{i=1}^{q_{0}} \mu^{(p)}_{i} \right) \bigg/ \left(\sum\limits_{i=1}^{q_{0}} \mu^{(1)}_{i} \right) $$
and perform the following offset correction:
$$ \begin{aligned} \tilde{\mu}^{(p)}_{i} &= \mu^{(p)}_{i} \big/ \Lambda_{p}, \quad\tilde{\phi}^{(p)}_{j} = \phi^{(p)}_{j} \times \Lambda_{p},\\& \!\! i = 1,\ldots,q_{0}; \quad\quad j_{p} = 1, \ldots, n_{p}. \end{aligned} $$
This is equivalent to replacing the identifiability restriction in Eq. 3 by
$$ \frac{1}{n_{p}}\sum\limits_{j=1}^{n_{p}} \phi^{(p)}_{j} = \Lambda_{p}, \quad \text{for}~ p = 1,\ldots, P. $$
Technical details regarding the implementation of this post hoc offset correction are explained in Additional file 1: Note S6.4. The effect of this correction is illustrated in Fig. 7 using the cell-cycle data set described in the main text. As an alternative, we also explored the use of the ratio between the total intrinsic counts over total spike-in counts to define a similar offset correction based on
$$ {\begin{aligned} \Lambda'_{p} = \left(\underset{j = 1, \ldots, n_{p}}{\text{median}} \left\{ \frac{\sum_{i=1}^{q_{0}} X^{(p)}_{ij}}{\sum_{i=q_{0} + 1}^{q} X^{(p)}_{ij}} \right\} \right) \bigg/ \left(\underset{j = 1, \ldots, n_{1}}{\text{median}} \left\{ \frac{\sum_{i=1}^{q_{0}} X^{(1)}_{ij}}{\sum_{i=q_{0} + 1}^{q} X^{(1)}_{ij}} \right\} \right). \end{aligned}} $$
Post hoc offset correction for cell-cycle data set. Upper panels display posterior medians for LFC in overall expression against the weighted average between estimates of overall expression rates for G1, S and G2M cells (weights defined by the number of cells in each group). Lower panels illustrate the effect of the offset correction upon the empirical distribution of posterior estimates for mRNA content normalizing constants \(\phi ^{(p)}_{j}\). These figures illustrate a shift in mRNA content throughout cell-cycle phases. In particular, our model infers that cellular mRNA is roughly duplicated when comparing G1 to G2M cells. LFC log-fold change
For the cell-cycle data set, both alternatives are equivalent. Nonetheless, the first option is more robust in cases where a large number of differentially expressed genes are present. Hereafter, we use \(\mu ^{(p)}_{i}\) and \(\phi ^{(p)}_{j}\) to denote \(\tilde {\mu }^{(p)}_{i}\) and \(\tilde {\phi }^{(p)}_{j}\), respectively.
A probabilistic approach to quantify evidence of changes in expression patterns
A probabilistic approach is adopted, assessing changes in expression patterns (mean and over-dispersion) through a simple and intuitive scale of evidence. Our strategy is flexible and can be combined with a variety of decision rules. In particular, here we focus on highlighting genes whose absolute LFC in overall expression and biological over-dispersion between the populations exceeds minimum tolerance thresholds τ 0 and ω 0, respectively (τ 0,ω 0≥0), set a priori. The usage of such minimum tolerance levels for LFCs in expression has also been discussed in [14] and [6] as a tool to improve the biological significance of detected changes in expression and to improve upon FDRs.
For a given probability threshold \(\alpha _{_{M}}\) (\(0.5 < \alpha _{_{M}} < 1\)), a gene i is identified as exhibiting a change in overall expression between populations p and p ′ if
$$ \begin{aligned} \pi^{M}_{i p p'} (\tau_{0}) &\equiv \text{P}(|\log(\mu^{(p)}_{i}/\mu^{(p')}_{i})| > \tau_{0} | \{\text{data}\}) > \alpha_{_{M}},\\& \quad i = 1,\ldots, q_{0}. \end{aligned} $$
If τ 0→0, \({\pi ^{M}_{i}}(\tau _{0}) \rightarrow 1\) becoming uninformative to detect changes in expression. As in [26], in the limiting case where τ 0=0, we define
$$ \pi^{M}_{i p p'} (0) = 2 \max\left\{\tilde{\pi}^{M}_{i p p'}, 1- \tilde{\pi}^{M}_{i p p'}\right\} - 1 $$
$$ \tilde{\pi}^{M}_{i p p'} = \mathrm{P}\left(\log\left(\mu^{(p)}_{i}/\mu^{(p')}_{i}\right) > 0 \mid \{\text{data}\}\right). $$
A similar approach is adopted to study changes in biological over-dispersion between populations p and p ′, using
$$ \pi^{D}_{i p p'} (\omega_{0}) \equiv \text{P}\left(|\log\left(\delta^{(p)}_{i}/\delta^{(p')}_{i}\right)| > \omega_{0} | \{\text{data}\}\right) > \alpha_{_{D}}, $$
for a fixed probability threshold \(\alpha _{_{D}}\) (\(0.5 < \alpha _{_{D}} < 1\)). In line with Eqs. 11 and 12, we also define
$$ \pi^{D}_{i p p'} (0) = 2 \max\left\{\tilde{\pi}^{D}_{i p p'}, 1-\tilde{\pi}^{D}_{i p p'}\right\} - 1 $$
$$ \tilde{\pi}^{D}_{i p p'} = \text{P}\left(\log\left(\delta^{(p)}_{i}/\delta^{(p')}_{i} \right) > 0 \mid \{\text{data}\}\right). $$
Evidence thresholds \(\alpha _{_{M}}\) and \(\alpha _{_{D}}\) can be fixed a priori. Otherwise, these can be defined by controlling the EFDR [13]. In our context, these are given by
$$ \text{EFDR}_{\alpha_{_{M}}}(\tau_{0})= \frac{\sum_{i=1}^{q_{0}} \left(1-\pi^{M}_{i} (\tau_{0})\right) \text{I}\left(\pi^{M}_{i} (\tau_{0}) > \alpha_{_{M}}\right)}{\sum_{i=1}^{q_{0}} I\left(\pi^{M}_{i} (\tau_{0}) > \alpha_{_{M}}\right)} $$
$$ \text{EFDR}_{\alpha_{_{D}}}(\omega_{0})= \frac{\sum_{i=1}^{q_{0}} \left(1-\pi^{D}_{i} (\omega_{0})\right) \text{I}\left(\pi^{D}_{i} (\omega_{0}) > \alpha_{_{D}}\right)}{\sum_{i=1}^{q_{0}} I\left(\pi^{D}_{i} (\omega_{0}) > \alpha_{_{D}}\right)}, $$
where I(A)=1 if event A is true, 0 otherwise. Critically, the usability of this calibration rule relies on the existence of genes under both the null and the alternative hypothesis (i.e., with and without changes in expression). While this is not a practical limitation in real case studies, this calibration might fail to return a value in benchmark data sets (e.g., simulation studies), where there are no changes in expression. As a default, if EFDR calibration is not possible, we set \(\alpha _{_{M}} = \alpha _{_{D}} = 0.90\).
The posterior probabilities in Eqs. 10, 11, 13 and 14 can be easily estimated – as a post-processing step – once the model has been fitted (see Additional file 1: Note S6.5). In addition, our strategy is flexible and can be easily extended to investigate more complex hypotheses, which can be defined post hoc, e.g., to identify those genes that show significant changes in cell-to-cell biological over-dispersion but that maintain a constant level of overall expression between the groups, or conditional decision rules where we require a minimum number of cells where the expression of a gene is detected.
Our implementation is freely available as an R package [27], using a combination of R and C++ functions through the Rcpp library [28]. This can be found in https://github.com/catavallejos/BASiCS, released under the GPL license.
Availability of supporting data
All data sets analyzed in this article are publicly available in the cited references.
Bayesian analysis of single-cell sequencing data
bulk RNA-seq:
bulk RNA sequencing
CDR:
cellular detection rate
EFDR:
expected false discovery rate
ERCC:
External RNA Control Consortium
LFC:
log-fold change
MCMC:
Markov chain Monte Carlo
mESC:
mouse embryonic stem cell
P&S:
pool-and-split
single cell
scRNA-seq:
single-cell RNA sequencing
UMI:
unique molecular identifier
Zeisel A, Muñoz-Manchado AB, Codeluppi S, Lönnerberg P, La Manno G, Juréus A, et al. Cell types in the mouse cortex and hippocampus revealed by single-cell RNA-seq. Science. 2015; 347(6226):1138–42.
Jaitin DA, Kenigsberg E, Keren-Shaul H, Elefant N, Paul F, Zaretsky I, et al. Massively parallel single-cell RNA-seq for marker-free decomposition of tissues into cell types. Science. 2014; 343(6172):776–9.
Patel AP, Tirosh I, Trombetta JJ, Shalek AK, Gillespie SM, Wakimoto H, et al. Single-cell RNA-seq highlights intratumoral heterogeneity in primary glioblastoma. Science. 2014; 344(6190):1396–401.
Brennecke P, Anders S, Kim JK, Kołodziejczyk AA, Zhang X, Proserpio V, et al. Accounting for technical noise in single-cell RNA-seq experiments. Nat Methods. 2013; 10(11):1093–5.
Robinson MD, McCarthy DJ, Smyth GK. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010; 26(1):139–40.
Love MI, Huber W, Anders S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 2014; 15(12):550.
Kharchenko PV, Silberstein L, Scadden DT. Bayesian approach to single-cell differential expression analysis. Nat Methods. 2014; 11(7):740–2.
Vallejos CA, Marioni JC, Richardson S. BASiCS: Bayesian analysis of single-cell sequencing data. PLoS Comput Biol. 2015; 11(6):1004333.
Kolodziejczyk AA, Kim JK, Tsang JC, Ilicic T, Henriksson J, Natarajan KN, et al. Single cell RNA-sequencing of pluripotent states unlocks modular transcriptional variation. Cell Stem Cell. 2015; 17(4):471–85.
McCarthy DJ, Chen Y, Smyth GK. Differential expression analysis of multifactor RNA-Seq experiments with respect to biological variation. Nucleic Acids Res. 2012;40(10).
Jiang L, Schlesinger F, Davis CA, Zhang Y, Li R, Salit M, et al.Synthetic spike-in standards for RNA-seq experiments. Genome Res. 2011; 21(9):1543–51.
Lovén J, Orlando DA, Sigova AA, Lin CY, Rahl PB, Burge CB, et al.Revisiting global gene expression analysis. Cell. 2012; 151(3):476–82.
Newton MA, Noueiry A, Sarkar D, Ahlquist P. Detecting differential gene expression with a semiparametric hierarchical mixture method. Biostatistics. 2004; 5(2):155–76.
McCarthy DJ, Smyth GK. Testing significance relative to a fold-change threshold is a treat. Bioinformatics. 2009; 25(6):765–71.
Finak G, McDavid A, Yajima M, Deng J, Gersuk V, Shalek AK, et al.Mast: a flexible statistical framework for assessing transcriptional changes and characterizing heterogeneity in single-cell RNA sequencing data. Genome Biol. 2015; 16(1):1–13.
Buettner F, Natarajan KN, Casale FP, Proserpio V, Scialdone A, Theis FJ, et al.Computational analysis of cell-to-cell heterogeneity in single-cell RNA-sequencing data reveals hidden subpopulations of cells. Nat Biotechnol. 2015; 33:155–60.
Grün D, Kester L, van Oudenaarden A. Validation of noise models for single-cell transcriptomics. Nat Methods. 2014; 11(6):637–40.
Islam S, Zeisel A, Joost S, La Manno G, Zajac P, Kasper M, et al.Quantitative single-cell RNA-seq with unique molecular identifiers. Nat Methods. 2014; 11(2):163–6.
Darzynkiewicz Z, Crissman H, Traganos F, Steinkamp J. Cell heterogeneity during the cell cycle. J Cell Physiol. 1982; 113(3):465–74.
Clemens A. Protein phosphorylation in cell growth regulation, 1st ed. Amsterdam: Harwood Academic Publishers; 1996.
Boddy MN, Russell P. DNA replication checkpoint. Curr Biol. 2001; 11(23):953–6.
Klein AM, Mazutis L, Akartuna I, Tallapragada N, Veres A, Li V, et al.Droplet barcoding for single-cell transcriptomics applied to embryonic stem cells. Cell. 2015; 161(5):1187–201.
Macosko EZ, Basu A, Satija R, Nemesh J, Shekhar K, Goldman M, et al.Highly parallel genome-wide expression profiling of individual cells using nanoliter droplets. Cell. 2015; 161(5):1202–14.
Rue H, Martino S, Chopin N. Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations. J R Stat Soc Ser B Methodol. 2009; 71(2):319–92.
Roberts GO, Rosenthal JS. Examples of adaptive MCMC. J Comput Graph Stat. 2009; 18(2):349–67.
Bochkina N, Richardson S. Tail posterior probability for inference in pairwise and multiclass gene expression data. Biometrics. 2007; 63(4):1117–25.
R Core Team. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2014.
Eddelbuettel D, François R, Allaire J, Chambers J, Bates D, Ushey K. Rcpp: Seamless R and C++ integration. J Stat Softw. 2011; 40(8):1–18.
We acknowledge all members of the Richardson research group (Medical Research Council - Biostatistics Unit, MRC-BSU) and Marioni laboratory (European Molecular Biology Laboratory - European Bioinformatics Institute, EMBL-EBI; Cancer Research UK - Cambridge Institute, CRUK-CI) for support and discussions during the preparation of this document. In particular, we are grateful to Nils Eling (EMBL-EBI), Antonio Scialdone (EMBL-EBI) and Aaron Lun (CRUK-CI) for numerous discussions and suggestions that enriched the final version of the manuscript. We also thank the editorial team of Genome Biology and two independent reviewers for the positive feedback and many insightful and constructive comments provided.
MRC Biostatistics Unit, Cambridge Institute of Public Health, Cambridge, UK
Catalina A. Vallejos
& Sylvia Richardson
EMBL European Bioinformatics Institute, Wellcome Trust Genome Campus, Cambridge, UK
& John C. Marioni
Cancer Research UK Cambridge Institute, University of Cambridge, Li Ka Shing Centre, Cambridge, UK
John C. Marioni
Search for Catalina A. Vallejos in:
Search for Sylvia Richardson in:
Search for John C. Marioni in:
Correspondence to Catalina A. Vallejos or Sylvia Richardson or John C. Marioni.
CAV, JCM and SR conceived and designed the methods and experiments. CAV implemented the methods and analyzed the data. All authors were involved in writing the paper and have approved the final version.
JCM and CAV acknowledge core EMBL funding. SR and CAV acknowledge core MRC funding (MRC_MC_UP_0801/1). JCM acknowledges core support from CRUK.
Additional file 1
Supplementary material. Section S1 illustrates the interaction between cell- and gene-specific model parameters. Section S2 provides a comparative analysis of BASiCS and alternative methods regarding the detection of differentially expressed genes (changes in mean). Section S3 illustrates the usage of the coefficient of variation as a measure of cellular heterogeneity. Section S4 describes the treatment of potential batch effects used for the analysis of the data set provided by [17]. Section S5 illustrates the interplay between mean and over-dispersion parameters that is typically observed in homogeneous populations of cells. Section S6 contains additional details regarding the statistical model presented in this article and the implementation of Bayesian inference. (PDF 1443 kb)
Data analysis (part 1). R code used to analyze the single cells vs pool-and-split samples data set. (PDF 3952 kb)
Data analysis (part 2). R code used to analyze the cell-cycle data set. (PDF 3051 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Vallejos, C.A., Richardson, S. & Marioni, J.C. Beyond comparisons of means: understanding changes in gene expression at the single-cell level. Genome Biol 17, 70 (2016) doi:10.1186/s13059-016-0930-3
Received: 21 December 2015
Accepted: 30 March 2016
Single-cell RNA-seq
Differential expression
Cellular heterogeneity
Single-Cell Omics
|
CommonCrawl
|
The Physics of Crazy Sleds
By drorzel on February 14, 2014.
In the Uncertain Dots hangout the other day, Rhett and I went off on a tangent about the physics of the Olympics, specifically the luge. If you're not familiar with this, it's basically psycho sledding: people riding tiny little sleds down a curved track at 80mph.
The "featured image" above shows Erin Hamlin of the US women's luge team during a training run (AP photo from here); she went on to win a bronze medal, the US's first in individual luge, so congratulations to her. The photo gives you an idea of what's involved: tiny sled, icy track, curved walls.
(In the Winter Olympics context, this actually seems relatively sane, because there's another sport, called "skeleton," which is basically the same thing, only face-first. This is basically the same progression followed by kids at the local sledding hill, so I expect in another dozen years or so, they'll make the skeletoneers look rational by introducing a sport where lunatics ride down the track backwards, or standing on the sled like a pseudo-snowboard.)
Anyway, Rhett mentioned that he'd gotten interested in this when his daughter asked why there was any difference at all in the times for this event, given that they're all riding the same sleds down the same track. That's an interesting point, and provides a decent starting point for thinking about the physics of the luge.
By a weird coincidence, I had also been paying attention to the luge, because one of the US men's team, Tucker West, is going to be starting college at Union in the Spring term, once he gets back from the Olympics. Our PR office was bragging about this on Twitter, and I started following West's feed (he finished 22nd, for the record, which might seem disappointing, but ask yourself: are you the 22nd best in the world at anything? Were you at age 18?). What made me start wondering about this was that in various retweeted photos of West and other lugers-- this one, for example-- they're bigger than I expected. For some reason I assumed they would be smaller, probably because of the little sleds, but if you look at the US team, they're typically around six feet, 180lbs, which isn't gigantic (keep in mind, I'm 6'6", 280lbs, so my perspective may be skewed), but is a clearly bigger than average. So, I started wondering if there's a physics reason why size would be advantageous in this.
That might seem like a stupid question, given that bigger athletes are, by definition, pulled downhill more strongly by gravity, but in fact, when you think carefully about the physics, it shouldn't matter. If you look closely at the forces acting on an object sliding downhill, you see something like this:
Diagram showing the forces on an object sliding down a slope.
The vertical green arrow is the force of gravity, which by definition is straight down. The up-and-to-the-right arrow is a "normal force" from the surface, "normal" here being a mathematical term of art meaning "perpendicular to the surface." And the red arrow pointing uphill is a frictional force resisting the slide-- which is present even if you're sliding down an icy track on a ridiculously small sled.
These three forces add together to give you the net force acting, but they're not completely independent of one another. The normal force is however big it needs to be to keep the object from sinking into the surface, which is determined by the force of gravity. The frictional force, in turn, depends on the normal force-- the harder the object is pressed into the surface, the bigger the force of friction.
When you work it out, then, all of the forces are related to the mass and the strength of gravity. When you look at how quickly the object moves down the slope, what matters is the net force along the slope, and when you work it all out, that comes out to:
$latex F_{net} = mg(\sin \theta - \mu \cos \theta ) $
(where $latex \mu $ is a factor that accounts for the dependence of friction on the normal force) That depends on the mass, but the speed is ultimately determined by the acceleration of the mass, which you get from Newton's second law, $latex F_{net} = m a $. And that also has a mass in it, on the other side of the equals sign from the force, so when you solve for the acceleration, you find:
$latex a = g(\sin \theta - \mu \cos \theta) $
So the acceleration is the same, regardless of mass. This is why, even though I outweigh them by nearly an order of magnitude, I don't go down a playground slide any faster than my kids, as you can see in this video that I use in intro mechanics classes:
Thus, a simple physics argument would suggest that being heavier isn't actually any benefit in downhill sliding events.
But, as I said in the video, I started wondering about whether there might be some benefit when it comes to going around corners. As you can see from the photo above and the point-of-view video in this feature from the New York Times , the track involves banked curves, and the sleds ride up the wall as they go around the curves. That's a somewhat more complicated situation, so maybe there's a benefit to being big from that-- maybe really light riders will end up going way up the wall, and thus take a longer path, leading to a slower time.
The physics in thinking about the banked curves looks like this, where I've kept the same color scheme as the previous picture:
A view of a object on a banked curve, showing the forces that act.
You can imagine this as a view down the track, from an utter lunatic following the sled down the hill. Again, there's a force of gravity straight down, and a normal force that's perpendicular to the surface, up and to the left. There's also a frictional force, in this case down and to the left, keeping the sled from sliding perpendicular to the track.
Again, the behavior here is determined by adding all these forces together. In this case, though, we know an additional piece of information, namely that the sled is going around a curve with some radius R (the center of which would be off to the left somewhere). According to Newton's laws, though, the sled really wants to move in a straight line, so there must be some net force acting to bend it onto this curve, and that "centripetal force" is:
$latex F_{cent} = m \frac{v^2}{R} $
Where v is the speed, and R the radius of the curve-- so higher speeds require bigger force, as do tighter curves (smaller R).
So, all the forces in that picture need to add together to make this centripetal force, or else the sled will bust through the track and continue in a straight line. Which would be Bad. If we want to see how this works out, we need to break these forces up into components that point straight up-down and left-right, and recognize that the centripetal force comes only from the leftward part.
A closer look at the forces on the banked curve, showing the components
The normal force has an upward piece and a leftward piece, and the frictional force has a leftward piece and a downward piece. Those leftward bits are what we really care about, and they're related to the total forces by simple trigonometry.
This might seem like a hopeless problem, though, as the normal force is whatever it needs to be to keep the sled from breaking through the wall, up to the strength of the materials involved. And the friction force is whatever it needs to be to keep the sled from sliding across the track. Which means we've got more variables floating around than we can easily deal with.
But, again, these things are related to each other in a way that lets us set a limit. That is, there's a maximum possible value for the frictional force, that's proportional to the size of the normal force. And the normal force needs to counter the downward pull of gravity, so we can set the size of the upward component, and with a bit of trigonometry relate everything to the angle between a vertical line and a line from the center of the curved track to the position of the sled. The algebra gets a little gory, but in the end, you find a net horizontal force that looks like:
$latex F_{net} = mg\frac{\sin \theta + \mu \cos \theta}{\cos \theta - \mu \sin \theta} $
That might seem a little scary, but we can do some simple checks to see if it makes sense. If the angle is zero, so the sled is flat on the bottom, then the force is just the frictional coefficient multiplied by the weight, as it should be-- the normal force can't contribute to a horizontal centripetal force, so it's all just friction. As the sled goes up the wall, the net force gets bigger, because the normal force starts to include a horizontal component, which is why they bank the curves in the first place.
So, when you go through all this, what do you get? Well, that net force depends on the mass, but when you set it equal to the centripetal force, the mass drops out again, just as it did in the initial problem. Light or heavy, it makes no difference-- the angle and the coefficient of friction are the only things that matter. You can find a maximum speed for a given angle, which is given by:
$latex v_{max}^2 = R g\frac{\sin \theta + \mu \cos \theta}{\cos \theta - \mu \sin \theta} $
Any faster than that, and the sled won't have enough inward force to stay on the curve at that angle, and will slide outward.
So, why are these guys on the large side? In the end, Rhett probably has the right answer in the hangout video: it's all about air resistance. When they're sliding down the track, the picture up above needs an extra force arrow:
The sliding object again, this time with air resistance.
The air resistance force is utterly insignificant for slow objects like me and SteelyKid going down playground slides, but when you're rocketing down an icy chute at 80mph, it's very significant-- the force generally goes like the square of the velocity. It doesn't depend on the mass, though, so the downhill acceleration becomes:
$latex a = g(\sin \theta - \mu \cos \theta) - \frac{f_{air}}{m} $
That second term tends to reduce the speed, but the effect is smaller for heavier masses. So, being somewhat bigger is an advantage, at least to a point-- the air resistance force will depend on the cross-sectional area of the sliding object, so a big fat guy isn't going to do very well, because he'll have to push a lot of air out of the way. Taller guys add mass without increasing the cross section all that much, though; thus, the initial photos that surprised me of luge team members towering over NBC reporters.
Of course, that doesn't entirely answer Rhett's daughter's question as to why there are differences in the times. After all, this would suggest that two lugers with the same mass and cross-sectional area should have exactly the same time, every time. But of course, there's more to it than that, and I have a math-y explanation of (part of) that, as well. But this is getting long as it is, so I'll stop here, and talk about steering in a later post.
(One might object that the coefficient-times-normal force model doesn't really work for the case of a sled with blades biting into ice, but I think you can still model it that way, just with a much larger $latex \mu $ than for smooth objects sliding over each other. Which explains Rhett's results in his speed skating post.)
On the Steering of Sleds
In the previous post about luge, I mentioned that there was one thing that came up when Rhett and I were talking about this, namely why there are differences in times between racers. The toy physics model I set up last time suggests that the difference between riders is only a matter of…
Long Overdue Snow Physics Post
Ages and ages ago, I posted the picture that's the "featured image" above, and asked people to submit physics comments about it. Then I got distracted by a series of shiny things, and never did anything with the handful of responses I got. Because I'm a Terrible Person. Anyway, it's long overdue,…
Science Kids, Fictitious Forces, and Frictionless Surfaces
SteelyKid has started to demand Sid the Science Kid videos, which of course we are implacably opposed to around here. One of the recent episodes available online was "Slide to the Side," talking about friction. While this partakes a bit of the Feynman "Energy makes it go" problem, it was generally…
Physics of Linerider IV: Friction?
Friction in Line Rider Is there friction in Line Rider? Does it function as physics would expect? To test this, I set up a simple track:  Basically, a slope with a flat part to start with and to end with. Let me…
Off topic, what are your thoughts on all these WVU students who appear to have been assigned your blog as homework or extra credit? Have you spoken to their prof? Did he or she let you know ahead of time or ask for permission? How long do you expect this to go on, and do you see it affecting your topics? They're ... unsophisticated, I guess is the right term, which isn't a bad thing, and is probably what you'd expect from a 1st semester physics roster.
By Tom (not verified) on 14 Feb 2014 #permalink
I have no idea whose class it is, and wasn't contacted about it. I don't really mind, though. It's kind of flattering, in a weird way.
It's not likely to have any significant impact on my choice of topics. That's entirely driven by whim, anyway.
By drorzel on 14 Feb 2014 #permalink
I think the major difference can be spelled out by looking at the fact that these are sleds with runners, not marbles or people's rear ends on the track/slide, and that means there is some steering going on. which means their velocity might be affected.
if they were doing this like a waterside, on a mat and just letting gravity do the work, then the simple math and air resistance might have more to do with it. but if you watch the lugers, skeletonites (?) and bobsledders, they all have some sort of steering mechanism. (those horns on the luge are not just there for looks...)
so like any course dictated racing, the racing line and the skill to hit a good one has a lot to do with it.
I do wonder if variations in weight might make it harder or easier to recover from a mistake or course impedance...
By peter (not verified) on 15 Feb 2014 #permalink
Go On Till You Come to the End; Then Stop
ScienceBlogs is coming to an end. I don't know that there was ever a really official announcement of this, but the bloggers got email a while back letting us know that the site will be closing down. I've been absolutely getting crushed between work and the book-in-progress and getting Charlie the…
It's been a couple of years since we lost the Queen of Niskayuna, and we've held off getting a dog until now because we were planning a big home renovation-- adding on to the mud room, creating a new bedroom on the second floor, and gutting and replacing the kitchen. This was quite the undertaking…
Physics Blogging Round-Up: August
Another month, another set of blog posts. This one includes the highest traffic I think I've ever seen for a post, including the one that started me on the path to a book deal: -- The ALPHA Experiment Records Another First In Measuring Antihydrogen: The good folks trapping antimatter at CERN have…
The Age Math Game
I keep falling down on my duty to provide cute-kid content, here; I also keep forgetting to post something about a nerdy bit of our morning routine. So, let's maximize the bird-to-stone ratio, and do them at the same time. The Pip can be a Morning Dude at times, but SteelyKid is never very happy to…
Kid Art Update
Our big home renovation has added a level of chaos to everything that's gotten in the way of my doing more regular cute-kid updates. And even more routine tasks, like photographing the giant pile of kid art that we had to move out of the dining room. Clearing stuff up for the next big stage of the…
|
CommonCrawl
|
Blow-up phenomena for nonlinear pseudo-parabolic equations with gradient term
Convergence of global and bounded solutions of a two-species chemotaxis model with a logistic source
August 2017, 22(6): 2261-2290. doi: 10.3934/dcdsb.2017095
Seasonal forcing and exponential threshold incidence in cholera dynamics
Jinhuo Luo 1, , Jin Wang 2, and Hao Wang 3,,
College of Information Technology, Shanghai Ocean University, Shanghai 201306, China
Department of Mathematics, University of Tennessee at Chattanooga, Chattanooga, TN 37403, United States
Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, AB T6G 2G1, Canada
The third author's research was partially supported by NSERC. E-mail address: [email protected]
Received March 2015 Revised January 2017 Published March 2017
Fund Project: The second author's research was partially supported by NSF
Full Text(HTML)
Figure(15) / Table(1)
We propose a seasonal forcing iSIR (indirectly transmitted SIR) model with a modified incidence function, due to the fact that the seasonal fluctuations can be the main culprit for cholera outbreaks. For this nonautonomous system, we provide a sufficient condition for the persistence and the existence of a periodic solution. Furthermore, we provide a sufficient condition for the global stability of the periodic solution. Finally, we present some simulation examples for both autonomous and nonautonomous systems. Simulation results exhibit dynamical complexities, including the bistability of the autonomous system, an unexpected outbreak of cholera for the nonautonomous system, and possible outcomes induced by sudden weather events. Comparatively the nonautonomous system is more realistic in describing the indirect transmission of cholera. Our study reveals that the relative difference between the value of immunological threshold and the peak value of bacterial biomass is critical in determining the dynamical behaviors of the system.
Keywords: Cholera, nonautonomous, stability, seasonal forcing, immunological threshold, persistence, periodic solution, exponential incidence, sudden events.
Mathematics Subject Classification: 93A30, 37B55, 34D20, 34D23, 97M10, 34C25, 37D35, 34C6.
Citation: Jinhuo Luo, Jin Wang, Hao Wang. Seasonal forcing and exponential threshold incidence in cholera dynamics. Discrete & Continuous Dynamical Systems - B, 2017, 22 (6) : 2261-2290. doi: 10.3934/dcdsb.2017095
J. R. Andrews and S. Basu, Transmission dynamics and control of cholera in Haiti: An epidemic model, Lancet, 377 (2011), 1248-1255. Google Scholar
E. Bertuzzo, L. Mari, L. Righetto, M. Gatto, R. Casagrandi, M. Blokesch, I. Rodriguez-Iturbe and A. Rinaldo, Prediction of the spatial evolution and effects of control measures for the unfolding Haiti cholera outbreak, Geophys. Res. Lett., 38 (2011), L06403. Google Scholar
D. L. Chao, M. E. Halloran and I. M. Longini, Vaccination strategies for epidemic cholera in Haiti with implications for the developing world, Proc. Natl. Acad. Sci., 108 (2011), 7081-7085. Google Scholar
C. T. Codeçco, Endemic and epidemic dynamics of cholera: The role of the aquatic reservoir, BMC Infect. Dis., 1 (2001), p1. Google Scholar
M. C. Eisenberga, G. Kujbidad, A. R. Tuited, D. N. Fismand and J. H. Tiena, Examining rainfall and cholera dynamics in Haiti using statistical and dynamic modeling approaches, Epidemics, 5 (2013), 197-207. Google Scholar
S. M. Faruque, I. B. Naser, M. J. Islam, A. S. G. Faruque, A. N. Ghosh, G. B. Nair, D. A. Sack and J. J. Mekalanos, Seasonal epidemics of cholera inversely correlate with the prevalence of environmental cholera phages, Proc. Nat. Acad. Sci., 102 (2004), 1702-1707. Google Scholar
J. K. Hale, Ordinary Differential Equations, Pure and Applied Mathematics, Vol. XXI, WileyInterscience, New York, 1969. Google Scholar
D. M. Hartley, J. G. Morris and D. L. Smith, Hyperinfectivity: A critical element in the ability of V. cholerae to cause epidemics?, PLoS Medicine, 3 (2006), e7. Google Scholar
M. A. Jensen, S. M. Faruque, J. J. Mekalanos and B. R. levin, Modeling the role of bacteriophage in the control of cholera outbreaks, PNAS, 103 (2006), 4652-4657. Google Scholar
R. I. Joh, H. Wang, H. Weiss and J. S. Weitz, Dynamics of indirectly transmitted infectious diseases with immunological threshold, Bull. Math. Bio., 71 (2009), 845-862. Google Scholar
J. D. Kong, W. Davis, X. Li and H. Wang, Stability and sensitivity analysis of the iSIR model for indirectly transmitted infectious dieases with immunological threshold, SIAM J. Appl. Math., 74 (2014), 1418-1441. Google Scholar
S. Liao and J. Wang, Stability analysis and application of a mathematical cholera model, Math. Biosci. and Eng., 8 (2011), 733-752. Google Scholar
Z. Mukandavire, S. Liao, J. Wang, H. Gaff, D. L. Smith and J. G. Morris Jr., Estimating the reproductive numbers for the 2008-2009 cholera outbreaks in Zimbabwe, Proc. Nat. Acad. Sci., 108 (2011), 8767-8772. Google Scholar
E. J. Nelson, J. B. Harris, J. G. Morris, S. B. Calderwood and A. Camilli, Cholera transmission: The host, pathogen and bacteriophage dynamics, Nature Reviews: Microbiology, 7 (2009), 693-702. Google Scholar
L. Righetto, E. Bertuzzo, L. Mari, E. Schild, R. Casagrandi, M. Gatto, I. Rodriguez-Iturbe and A. Rinaldo, Rainfall mediations in the spreading of epidemic cholera, Advances in Water Resources, 60 (2013), 34-46. Google Scholar
L. Righetto, R. Casagrandi, E. Bertuzzo, L. Mari, M. Gatto, I. Rodriguez-Iturbe and A. Rinaldo, The role of aquatic reservoir fluctuations in long-term cholera patterns, Epidemics, 4 (2012), 33-42. Google Scholar
[17] F. L. Thompson, B. Austin and J. Swings, The Biology of Vibrios, ASM Press, Washington, D.C., 2006. Google Scholar
J. P. Tian and J. Wang, Global stability for cholera epidemic models, Math. Biosci., 232 (2011), 31-41. Google Scholar
J. H. Tien and D. J. D. Earn, Multiple transmission pathways and disease dynamics in a waterborne pathogen model, Bull. Math. Bio., 72 (2010), 1502-1533. Google Scholar
A. L. Tuite, J. Tien, M. Eisenberg, D. J. D. Earn, J. Ma and D. N. Fisman, Cholera epidemic in Haiti, 2010: Using a transmission model to explain spatial spread of disease and identify optimal control interventions, Ann. Intern. Med., 154 (2011), 593-601. Google Scholar
F. Verhulst, Nonlinear Differential Equations and Dynamical Systems, Second Edition, Springer, Berlin, 1996. Google Scholar
J. Wang and S. Liao, A generalized cholera model and epidemic-endemic analysis, J. Biol. Dyn., 6 (2012), 568-589. Google Scholar
T. Yoshizawa, Stability Theory and the Existence of Periodic Solutions and Almost Periodic Solutins, Appl. Math. Science, Vol. 14, Springer-Verlag, 1975. Google Scholar
World Health Organization (WHO) web page: http://www.who.org. Google Scholar
Figure 1. Satuation function $\alpha(\cdot)$ of Holling-type
Figure Options
Download as PowerPoint slide
Figure 2. Saturation function $\alpha(\cdot)$ of exponential type
Figure 3. System (40) possesses a bistability (See column 1, column 2)
Figure 4. An example when system (40) does not approach $(1, 0, K)$ in the case that $c$ is slightly greater than $K$
Figure 5. Populations of system (5) approach a periodic solution
Figure 6. When the threshold of immunity is significantly higher than the maximum of bacterial capacity, populations of system (5) tend to a disease free periodic solution
Figure 7. Two final states of system (5) depending on different initial values
Figure 8. The third final state of system (5) and the locally enlarged figure (shown in the right column)
Figure 9. Periodic outbreak of epidemic ($\xi=0$, left column), and durative infection ($\xi=90$, right column)
Figure 10. System encounters a sudden event. Left column: $N=1\times 10^{6}$. Right column: $N=1\times 10^{7}$
Figure 11. Curves of $u(B)$ and $v(B)$ have an unique intersection $\bar{B}$
Figure 12. Curves of function $f$ and $g$ with changing threshold values $c$
Figure 13. System (40) has two equilibria
Figure 14. System (40) has three equilibria
Figure 15. System (40) has four equilibria
Table 1. Parameter values from Jensen et al. [9]
Parameter Values Description Units
$r$ 0.2-14.3 Maximum per capita pathogen growth rate day $ ^{-1}$
$K$ $10^6$ Pathogen carrying capacity cell liter $^{-1}$
$H$ $ 10^6-10^8$ Half-saturation pathogen density cell liter $ ^{-1}$
$a$ 0.08 -0.12 Maximum rate of infection day $ ^{-1}$
$\delta$ 0.1 Recovery rate day $ ^{-1}$
$\xi$ 10-100 Pathengen shed rate cell liter $ ^{-1}\text{day} ^{-1}$
$\mu$ $5\times 10^{-5}-5\times 10^{-4} $ Natural human birth/death rate day $ ^{-1}$
$N$ $10^6$ Total Population persons
$c$ $\approx 10^6$ Minimum infection dose cell liter $ ^{-1}$
Suqi Ma. Low viral persistence of an immunological model. Mathematical Biosciences & Engineering, 2012, 9 (4) : 809-817. doi: 10.3934/mbe.2012.9.809
Kazuo Yamazaki, Xueying Wang. Global stability and uniform persistence of the reaction-convection-diffusion cholera epidemic model. Mathematical Biosciences & Engineering, 2017, 14 (2) : 559-579. doi: 10.3934/mbe.2017033
Kaifa Wang, Aijun Fan. Uniform persistence and periodic solution of chemostat-type model with antibiotic. Discrete & Continuous Dynamical Systems - B, 2004, 4 (3) : 789-795. doi: 10.3934/dcdsb.2004.4.789
M. P. Moschen, A. Pugliese. The threshold for persistence of parasites with multiple infections. Communications on Pure & Applied Analysis, 2008, 7 (6) : 1483-1496. doi: 10.3934/cpaa.2008.7.1483
Ismael Maroto, Carmen Núñez, Rafael Obaya. Exponential stability for nonautonomous functional differential equations with state-dependent delay. Discrete & Continuous Dynamical Systems - B, 2017, 22 (8) : 3167-3197. doi: 10.3934/dcdsb.2017169
Zhijun Liu, Weidong Wang. Persistence and periodic solutions of a nonautonomous predator-prey diffusion with Holling III functional response and continuous delay. Discrete & Continuous Dynamical Systems - B, 2004, 4 (3) : 653-662. doi: 10.3934/dcdsb.2004.4.653
Zhenguo Bai, Yicang Zhou. Threshold dynamics of a bacillary dysentery model with seasonal fluctuation. Discrete & Continuous Dynamical Systems - B, 2011, 15 (1) : 1-14. doi: 10.3934/dcdsb.2011.15.1
Antoine Perasso. Global stability and uniform persistence for an infection load-structured SI model with exponential growth velocity. Communications on Pure & Applied Analysis, 2019, 18 (1) : 15-32. doi: 10.3934/cpaa.2019002
Xueping Li, Jingli Ren, Sue Ann Campbell, Gail S. K. Wolkowicz, Huaiping Zhu. How seasonal forcing influences the complexity of a predator-prey system. Discrete & Continuous Dynamical Systems - B, 2018, 23 (2) : 785-807. doi: 10.3934/dcdsb.2018043
Jianxin Yang, Zhipeng Qiu, Xue-Zhi Li. Global stability of an age-structured cholera model. Mathematical Biosciences & Engineering, 2014, 11 (3) : 641-665. doi: 10.3934/mbe.2014.11.641
Shu Liao, Jin Wang. Stability analysis and application of a mathematical cholera model. Mathematical Biosciences & Engineering, 2011, 8 (3) : 733-752. doi: 10.3934/mbe.2011.8.733
Mi-Young Kim. Uniqueness and stability of positive periodic numerical solution of an epidemic model. Discrete & Continuous Dynamical Systems - B, 2007, 7 (2) : 365-375. doi: 10.3934/dcdsb.2007.7.365
Wen Jin, Horst R. Thieme. An extinction/persistence threshold for sexually reproducing populations: The cone spectral radius. Discrete & Continuous Dynamical Systems - B, 2016, 21 (2) : 447-470. doi: 10.3934/dcdsb.2016.21.447
Xuewei Ju, Desheng Li. Global synchronising behavior of evolution equations with exponentially growing nonautonomous forcing. Communications on Pure & Applied Analysis, 2018, 17 (5) : 1921-1944. doi: 10.3934/cpaa.2018091
Pierre Gabriel. Global stability for the prion equation with general incidence. Mathematical Biosciences & Engineering, 2015, 12 (4) : 789-801. doi: 10.3934/mbe.2015.12.789
Luis Barreira, Claudia Valls. Stability of nonautonomous equations and Lyapunov functions. Discrete & Continuous Dynamical Systems - A, 2013, 33 (7) : 2631-2650. doi: 10.3934/dcds.2013.33.2631
Guihong Fan, Yijun Lou, Horst R. Thieme, Jianhong Wu. Stability and persistence in ODE models for populations with many stages. Mathematical Biosciences & Engineering, 2015, 12 (4) : 661-686. doi: 10.3934/mbe.2015.12.661
Alan E. Lindsay, Michael J. Ward. An asymptotic analysis of the persistence threshold for the diffusive logistic model in spatial environments with localized patches. Discrete & Continuous Dynamical Systems - B, 2010, 14 (3) : 1139-1179. doi: 10.3934/dcdsb.2010.14.1139
Zhengxin Zhou. On the Poincaré mapping and periodic solutions of nonautonomous differential systems. Communications on Pure & Applied Analysis, 2007, 6 (2) : 541-547. doi: 10.3934/cpaa.2007.6.541
Yoichi Enatsu, Yukihiko Nakata. Stability and bifurcation analysis of epidemic models with saturated incidence rates: An application to a nonmonotone incidence rate. Mathematical Biosciences & Engineering, 2014, 11 (4) : 785-805. doi: 10.3934/mbe.2014.11.785
HTML views (12)
Jinhuo Luo Jin Wang Hao Wang
|
CommonCrawl
|
On the irreducibility of the hyperplane sections of Fermat varieties in $\mathbb{P}^3$ in characteristic $2$
AMC Home
Smoothness testing of polynomials over finite fields
November 2014, 8(4): 479-495. doi: 10.3934/amc.2014.8.479
Curves in characteristic $2$ with non-trivial $2$-torsion
Wouter Castryck 1, , Marco Streng 2, and Damiano Testa 3,
Departement Wiskunde, KU Leuven, Celestijnenlaan 200B, 3001 Leuven, Belgium
Mathematisch Instituut, Universiteit Leiden, Postbus 9512, 2300 RA Leiden, Netherlands
Mathematics Institute, University of Warwick, Coventry CV4 7AL, United Kingdom
Received January 2014 Revised September 2014 Published November 2014
Cais, Ellenberg and Zureick-Brown recently observed that over finite fields of characteristic two, all sufficiently general smooth plane projective curves of a given odd degree admit a non-trivial rational $2$-torsion point on their Jacobian. We extend their observation to curves given by Laurent polynomials with a fixed Newton polygon, provided that the polygon satisfies a certain combinatorial property. We also show that in each of these cases, if the curve is ordinary, then there is no need for the words ``sufficiently general''. Our treatment includes many classical families, such as hyperelliptic curves of odd genus and $C_{a,b}$ curves. In the hyperelliptic case, we provide alternative proofs using an explicit description of the $2$-torsion subgroup.
Keywords: even characteristic., Algebraic curves, toric surfaces.
Mathematics Subject Classification: 14H25,14H45,14M2.
Citation: Wouter Castryck, Marco Streng, Damiano Testa. Curves in characteristic $2$ with non-trivial $2$-torsion. Advances in Mathematics of Communications, 2014, 8 (4) : 479-495. doi: 10.3934/amc.2014.8.479
W. Bosma, J. Cannon and C. Playoust, The Magma algebra system. I. The user language, J. Symbolic Comput., 24 (1997), 235-265. doi: 10.1006/jsco.1996.0125. Google Scholar
B. Cais, J. Ellenberg and D. Zureick-Brown, Random Dieudonné modules, random $p$-divisible groups, and random curves over finite fields, J. Inst. Math. Jussieu, 12 (2013), 651-676. doi: 10.1017/S1474748012000862. Google Scholar
W. Castryck, Moving out the edges of a lattice polygon, Discrete Comp. Geometry, 47 (2012), 496-518. doi: 10.1007/s00454-011-9376-2. Google Scholar
W. Castryck and F. Cools, Linear pencils encoded in the Newton polygon,, preprint., (). Google Scholar
W. Castryck, J. Denef and F. Vercauteren, Computing zeta functions of nondegenerate curves, Int. Math. Res. Pap., 2006, (2006), 1-57. Google Scholar
W. Castryck, A. Folsom, H. Hubrechts and A. V. Sutherland, The probability that the number of points on the Jacobian of a genus $2$ curve is prime, Proc. London Math. Soc., 104 (2012), 1235-1270. doi: 10.1112/plms/pdr063. Google Scholar
W. Castryck and J. Voight, On nondegeneracy of curves, Algebra Number Theory, 3 (2009), 255-281. doi: 10.2140/ant.2009.3.255. Google Scholar
D. Cox, J. Little and H. Schenck, Toric Varieties, Springer, 2011. Google Scholar
J. Denef and F. Vercauteren, Computing zeta functions of hyperelliptic curves over finite fields of characteristic $2$, in Proc. Adv. Cryptology - CRYPTO 2002, 2003, 308-323. doi: 10.1007/3-540-45455-1_25. Google Scholar
J. Denef and F. Vercauteren, Computing zeta functions of $C_{a,b}$ curves using Monsky-Washnitzer cohomology, Finite Fields App., 12 (2006), 78-102. doi: 10.1016/j.ffa.2005.01.003. Google Scholar
A. Elkin and R. Pries, Ekedahl-Oort strata of hyperelliptic curves in characteristic $2$, Algebra Number Theory, 7 (2013), 507-532. doi: 10.2140/ant.2013.7.507. Google Scholar
S. Farnell and R. Pries, Families of Artin-Schreier curves with Cartier-Manin matrix of constant rank, Linear Algebra Appl., 439 (2013), 2158-2166. doi: 10.1016/j.laa.2013.06.012. Google Scholar
C. Haase and J. Schicho, Lattice polygons and the number $2i+7$, Amer. Math. Monthly, 116 (2009), 151-165. doi: 10.4169/193009709X469913. Google Scholar
R. Hartshorne, Generalized divisors on Gorenstein curves and a theorem of Noether, J. Math. Kyoto Univ., 26 (1986), 375-386. Google Scholar
N. Katz and P. Sarnak, Random Matrices, Frobenius Eigenvalues and Monodromy, AMS, 1998. Google Scholar
N. Koblitz, Algebraic aspects of cryptography, in Algorithms and Computation in Mathematics, Springer, 1999. doi: 10.1007/978-3-662-03642-6. Google Scholar
R. Koelman, The Number of Moduli of Families of Curves on Toric Surfaces, Ph.D thesis, Katholieke Universiteit Nijmegen, 1991. Google Scholar
Y. Manin, The Hasse-Witt matrix of an algebraic curve, Izv. Akad. Nauk SSSR Ser. Mat., 25 (1961), 153-172. Google Scholar
D. Mumford, Theta characteristics of an algebraic curve, Ann. Sci. de l'É.N.S., 4 (1971), 181-192. Google Scholar
B. Poonen, Varieties without extra automorphisms. II. Hyperelliptic curves, Math. Res. Lett., 7 (2000), 77-82. doi: 10.4310/MRL.2000.v7.n1.a7. Google Scholar
B. Poonen, Bertini theorems over finite fields, Ann. Math., 160 (2004), 1099-1127. doi: 10.4007/annals.2004.160.1099. Google Scholar
R. Pries and H. Zhu, The $p$-rank stratification of Artin-Schreier curves, Ann. l'Institut Fourier, 62 (2012), 707-726. doi: 10.5802/aif.2692. Google Scholar
J. Scholten and H. Zhu, Hyperelliptic curves in characteristic $2$, Int. Math. Res. Not., 2002 (2002), 905-917. doi: 10.1155/S1073792802111160. Google Scholar
J.-P. Serre, Sur la topologie des variétés algébriques en caractéristique $p$, in Oeuvres (collected papers), Springer, 1986, 544-568. Google Scholar
K.-O. Stöhr and J. F. Voloch, A formula for the Cartier operator on plane algebraic curves, J. reine angew. Math., 377 (1987), 49-64. doi: 10.1515/crll.1987.377.49. Google Scholar
H. Zhu, Hyperelliptic curves over $\mathbf F_2$ of every $2$-rank without extra automorphisms, Proc. Amer. Math. Soc., 134 (2006), 323-331. doi: 10.1090/S0002-9939-05-08294-8. Google Scholar
Laurenţiu Maxim, Jörg Schürmann. Characteristic classes of singular toric varieties. Electronic Research Announcements, 2013, 20: 109-120. doi: 10.3934/era.2013.20.109
Gabriele Beltramo, Primoz Skraba, Rayna Andreeva, Rik Sarkar, Ylenia Giarratano, Miguel O. Bernabeu. Euler characteristic surfaces. Foundations of Data Science, 2021 doi: 10.3934/fods.2021027
Sylvain E. Cappell, Anatoly Libgober, Laurentiu Maxim and Julius L. Shaneson. Hodge genera and characteristic classes of complex algebraic varieties. Electronic Research Announcements, 2008, 15: 1-7. doi: 10.3934/era.2008.15.1
Josep M. Miret, Jordi Pujolàs, Nicolas Thériault. Trisection for supersingular genus $2$ curves in characteristic $2$. Advances in Mathematics of Communications, 2014, 8 (4) : 375-387. doi: 10.3934/amc.2014.8.375
Michael Khanevsky. Non-autonomous curves on surfaces. Journal of Modern Dynamics, 2021, 17: 305-317. doi: 10.3934/jmd.2021010
B. Harbourne, P. Pokora, H. Tutaj-Gasińska. On integral Zariski decompositions of pseudoeffective divisors on algebraic surfaces. Electronic Research Announcements, 2015, 22: 103-108. doi: 10.3934/era.2015.22.103
Isaac A. García, Jaume Giné. Non-algebraic invariant curves for polynomial planar vector fields. Discrete & Continuous Dynamical Systems, 2004, 10 (3) : 755-768. doi: 10.3934/dcds.2004.10.755
Carlos Munuera, Wanderson Tenório, Fernando Torres. Locally recoverable codes from algebraic curves with separated variables. Advances in Mathematics of Communications, 2020, 14 (2) : 265-278. doi: 10.3934/amc.2020019
Jędrzej Śniatycki. Integral curves of derivations on locally semi-algebraic differential spaces. Conference Publications, 2003, 2003 (Special) : 827-833. doi: 10.3934/proc.2003.2003.827
Piotr Pokora, Tomasz Szemberg. Minkowski bases on algebraic surfaces with rational polyhedral pseudo-effective cone. Electronic Research Announcements, 2014, 21: 126-131. doi: 10.3934/era.2014.21.126
J. Scott Carter, Daniel Jelsovsky, Seiichi Kamada, Laurel Langford and Masahico Saito. State-sum invariants of knotted curves and surfaces from quandle cohomology. Electronic Research Announcements, 1999, 5: 146-156.
Ryutaroh Matsumoto. Strongly secure quantum ramp secret sharing constructed from algebraic curves over finite fields. Advances in Mathematics of Communications, 2019, 13 (1) : 1-10. doi: 10.3934/amc.2019001
Alex Wright. Schwarz triangle mappings and Teichmüller curves: Abelian square-tiled surfaces. Journal of Modern Dynamics, 2012, 6 (3) : 405-426. doi: 10.3934/jmd.2012.6.405
Gheorghe Craciun, Abhishek Deshpande, Hyejin Jenny Yeon. Quasi-toric differential inclusions. Discrete & Continuous Dynamical Systems - B, 2021, 26 (5) : 2343-2359. doi: 10.3934/dcdsb.2020181
Sonja Hohloch. Characterization of toric systems via transport costs. Journal of Geometric Mechanics, 2020, 12 (3) : 447-454. doi: 10.3934/jgm.2020027
Vittorio Martino. On the characteristic curvature operator. Communications on Pure & Applied Analysis, 2012, 11 (5) : 1911-1922. doi: 10.3934/cpaa.2012.11.1911
Sonja Hohloch, Silvia Sabatini, Daniele Sepe. From compact semi-toric systems to Hamiltonian $S^1$-spaces. Discrete & Continuous Dynamical Systems, 2015, 35 (1) : 247-281. doi: 10.3934/dcds.2015.35.247
Yong Liu. Even solutions of the Toda system with prescribed asymptotic behavior. Communications on Pure & Applied Analysis, 2011, 10 (6) : 1779-1790. doi: 10.3934/cpaa.2011.10.1779
Naoki Chigira, Nobuo Iiyori and Hiroyoshi Yamaki. Nonabelian Sylow subgroups of finite groups of even order. Electronic Research Announcements, 1998, 4: 88-90.
T. Aaron Gulliver, Masaaki Harada. On the performance of optimal double circulant even codes. Advances in Mathematics of Communications, 2017, 11 (4) : 767-775. doi: 10.3934/amc.2017056
Wouter Castryck Marco Streng Damiano Testa
|
CommonCrawl
|
Mathematics Institute
Postgraduate Taught Degrees
Admissions Contacts
UG Handbook
Current Postgraduate Students
Incoming(Restricted permissions)
WMI Magazine
The Street Booking Form
Non-Symposium Workshops
PDEs and fluid mechanics
Titles and Abstracts
Enrique Fernández Cara - Some controllability results in fluid mechanics
This talk is devoted to present some recent results concerning the controllability of some linear and nonlinear equations from fluid mechanics. We will analyze the local and global exact controllability to bounded trajectories. We will first deal with the Burgers equation. Some positive and negative results will be given in this case. Then, we will consider the Navier-Stokes and other related systems.
John Gibbon - Quaternions and particle dynamics in the Euler fluid equations
More than 150 years after their invention by Hamilton, quaternions are now widely used in the aerospace and computer animation industries to track the paths of moving objects undergoing three-axis
rotations. It will be shown that they provide a natural way of selecting an appropriate ortho-normalframe -- designated the quaternion-frame -- for a particle in a Lagrangian flow, and of obtaining the
equations for its dynamics. How these ideas can be applied to the three-dimensional Euler fluid
equations will then be considered.
Martin Hairer - Long-time behaviour of the 2D stochastic Navier-Stokes equations
One of the cleanaest mathematical models of two-dimensional turbulence are the stochastic Navier-Stokes equations. We give sufficient (and in some sense close to necessary) conditions on the covariance of the driving force to obtain the uniqueness of the stationary state of these equations. It can be shown that under these conditions, the convergence in law of arbitrary solutions to the stationary one is exponential. As a consequence, one shows that the generator of the dynamics has a spectral gap in a suitable space of observables.
Darryl Holm - Non-linear non-local equations for aggregation of particles carrying geometrical properties
Aggregation of particles whose interaction potential depends on their mutual orientation is considered. The aggregation dynamics is derived using a version of Darcy's law and a variational principle depending on the geometric nature of the physical quantities. The evolution equation that results from this procedure is a combination of a nonlinear diffusion equation and a double bracket equation. The Landau-Lifshitz equation is obtained as a particular case. We also derive analytical solutions of equations which are collapsed (clumped) states and show their dynamical emergence from smooth initial conditions in numerical simuations. Finally, we compare a numerical solution of our equation with recent experiments on self-assembly of star-shaped pbjects floating on the surface of water (P.D.Weidman).
Dragos Iftimie - On the vanishing viscosity limit for Navier boundary conditions
We discuss the issue of the inviscid limit of the incompressible Navier-Stokes equations when the Navier boundary conditions are prescribed. We justify an asymptotic expansion which involves a weak amplitude boundary layer, with the same thickness as in Prandtl's theory and a linear behavior. This analysis holds for general regular domains, in both dimensions two and three.
Alexander Kiselev - Global well-posedness for the critical 2D dissipative surface quasi-geostrophic equation
Surface quasi-geostrophic equation arises in modelling rotating fluids and is relevant to studies of atmosphere and ocean. On the mathematical level, the equation can be thought of as a model intermediate between Burgers and 3D Navier-Stokes equations. Two cases are of special interest: conservative and with critical (square root of Laplacian) dissipative term. We give an elementary proof of the global well-posedness for the critical 2D dissipative surface quasi-geostrophic equation. The argument is based on a new non-local maximum principle involving appropriate moduli of continuity. The talk is based on a joint work with Fedja Nazarov and Alexander Volberg.
Igor Kukavica - Conditional regularity and thin domain results for solutions of the Navier-Stokes equations
We consider sufficient conditions for regularity f Leray-Hopf solutions of the Navier-Stokes equation. By a result of Neustupa and Panel, a Leray-Hopf weak solution is regular provided a single component of the velocity is bounded. In this talk we will survey existing and present new results on one component and one direction regularity. We will also show global regularity for a class of solutions of the Navier-Stokes equation in thin domains. This is a joint work with M. Ziane.
Grzegorz Lukaszewicz - Turbulent shear flows and their attractors
Existence of attractors and estimates of their dimension for shear turbulent flows have been studied in many papers. They follow earlier investigations on boundary driven flows between parallel plates, stability of Couette flow, and the onset of turbulence.
In our research, motivated by applications in lubrication problems, we study twodimensional Navier-Stokes flows in channel-like domains and with various boundary conditions. We consider flows in both bounded and unbounded domains, and with both time independent and quite general time dependent forcing. Our aim is to prove existence of suitable attractors for a number of flows appearing in applications and to obtain estimates of dimension of the attractors in terms of parameters of the considered flows.
In particular, we are interested in influence of the geometry of the domain (physically, roughness of the surface) and boundary conditions (physically, character of boundary driving) of the flow on the attractor dimension.
We present also some recent abstract results on existence of attractors which prove useful in our research [2], [3], [5], [6] and some results about dimension of attractors, [1], [4]. We use, e.g., a version of the Lieb-Thirring inequality, in which constantsdepend explicitly on some norms representing geometry of the boundary [1]. References: [1] M. Boukrouche, G. Lukaszewicz, An upper bound on the attractor dimension of a 2D turbulent shear flow with a free boundary condition, Regularity and other aspects of the Navier-Stokes equations, Banach Center Publications Vol. 70, Warsaw 2005.
[2] M. Boukrouche, G. Lukaszewicz, On the existence of pullback attractor for a two-dimensional shear flow with Tresca's boundary condition, submitted.
[3] T. Caraballo, G. Lukaszewicz & J. Real, Pullback attractors for asymptotically compact nonautonomous dynamical systems, Nonlinear Analysis, TMA, vol.64, no.2, (2006), 484–498.
[4] J. Langa, G. Lukaszewicz & J. Real, Finite fractal dimension of pullback attractor for non-autonomous 2-D Navier-Stokes equations in some unbounded domains, Nonlinear Analisis, TMA, vol.66, (2007), 735–749.
[5] G. Lukaszewicz, Pullback attractors and statistical solutions for 2-D Navier- Stokes equations, to appear in DCDS-A.
[6] G. Lukaszewicz, A. Tarasi´nska, Pullback attractors for reaction-diffusion equation with unbounded right-hand side, in preparation.
Josef Malek - On incompressible Navier-Stokes-Fourier equations and some of its generalizations
In the first part, we consider a complete thermodynamic model for unsteady flows of incompressible homogeneous Newtonian fluids in a fixed bounded three-dimensional domain. The model comprises evolutionary equations for the velocity, pressure and temperature fields that satisfy the the balance of mass, the balance of linear momentum and the balance of energy, and is completed by the entropy inequality. In our setting, both the viscosity and the coefficient of the thermal conductivity are functions of the temperature. We deal with Navier's slip boundary conditions for the velocity that yield a globally integrable pressure, and we consider zero heat flux across the boundary. For such a problem, we establish the large-data and long-time existence of weak as well as suitable weak solutions. It has been well documented that the viscosity and the thermal conductivity of most liquids depend also on the pressure, and the shear rate. The relevant experimental studies show that even at high pressures the variations of the values in the density are insignificant in comparison to that of the viscosity, and it is thus reasonable to assume that the liquids in question are incompressible fluids with pressure, shear rate and tmperature dependent viscosities. In the second part of the talk, we discuss physical issues relevant to such fluids and present the mathematical properties concerning unsteady three-dimensional internal flows of such incompressible fluids. Assuming that we have Navier's slip at the impermeable boundary we establish the long-time existence of a (suitable) weak solution when the data are large.
Genevieve Raugel - A hyperbolic pertubation of the Navier-Stokes equations
Jose Rodrigo - Construction of Almost Sharp Fronts for the Surface Quasi-Geostrophic Equation
Sharp fronts for the surface quasi-geostrophic equation are the analogue of vortex lines for 3D Euler. We present a construction of almost sharp-fronts (the analogue of vortex tubes for 3D Euler) of any (small) thickness, for which the time of existence is bonded below by a constant independent of the thickness. This result, together with previous work of Cordoba, Fefferman and Rodrigo provides a rigorous derivation of the equation for a sharp front that only involves tools avaialble in 3D Euler. This is joint work with Charles Fefferman.
Ricardo Rosa - Theory and applications of the statistical solutions of the Navier-Stokes equations
The concept of statistical solution is akin to the notion of ensemble average in the statistical theory of turbulence and is relevant to the mathematical theory of turbulent flows. We present some recent applications of such statistical solutions in the derivation of rigorous bounds for physical quantities associated with channel flows driven by a uniform pressure gradient. We also discuss some new results and open problems for such solutions in a more abstract sense.
Witold Sadowski - Numerical verification of regularity in the Navier-Stokes equations
Consider the 3D incompressible Navier-Stokes equations with zero forcing and periodic boundary conditions. It is known that for small enough initial data these equations have a regular solution. More precisely, for a fixed domain size and a given viscosity there exists constant C>0 such that all initial conditions with enstrophy less than C give rise to regular solutions. The value of the constant which follows from the theory of the Navier-Stokes equations is very small and in fact the enstrophy of all such solutions (those arising from initial conditions with enstrophy less than C) is decreasing in time. In the talk (based on the joint paper with James Robinson) I will present a numerical method which will verify, in a finite time, whether such a regularity result can be extended to all initial conditions with some arbitrary (but fixed) value of C.
Maria Schonbek - Decay of Polymer equations and Poincaré estimates
We study the decay and existence of solutions to some equations modeling polymeric flow. We consider the case when the drag term is corotational and the solutions are sufficiently regular to satisfy some necessary energy estimates. We analyse the decay when when the space of elongations is bounded, and the spatial domain of the polymer is either a bounded domain $\Omega \subset \mathbb{R}^n, n=2,3$ or the domain is the whole space $\mathbb{R}^n, n=2,3$. The decay is first established for the probability density $\psi$ and then this decay is used to obtain decay of the velocity $u$. Consideration also is given to solutions where the probability density is radial in the admissible elongation vectors $q$. In this case the velocity $u$, will become a solution to Navier-Stokes equation, and thus decay follows from known results for the Navier-Stokes equations.
Some questions in relation to Poincar\'e type inequalities, and fluid equations in general, will be discussed
Marco Sammartino - Slightly viscous fluids: Well posedness results and blow-up phenomena
When a high Reynolds number fluid interacts with a rigid boundary one can derive Prandtl's equations as a formal asymptotic limit of the Navier-Stokes equations. In this talk we shall review some known short time results for Prandtl's equations and investigate the process leading to the formation of a singularity in the solution. Moreover we shall show some numerical evidence of the ill posedness of Prandtl's equation in H1: in fact the presence of two counter-rotating vortices inside the boundary layer seem to produce a blow-up of the solution in an arbitrary short time.
We shall also discuss the situation when the initial datum for the 2D periodic Navier--Stokes equations are of the vortex layer type, in the sense that there is a rapid variation in the tangential component of the velocity across a curve. The vorticity is therefore concentrated in a layer whose thickness is of the order the square root of the viscosity. In the zero viscosity limit we derive (formally) the equations that rule the fluid inside the layer. Assuming the initial as well the matching (with the outer flow) data to be analytic, we shall prove that the model equations are well posed.
Undergrad and Postgrad admissions
Zeeman Building
Coventry CV4 7AL
Staff Intranet - Alumni site
Warwick Maths on Facebook Warwick Maths on Twitter
Page contact: Jose Rodrigo
Last revised: Wed 16 Nov 2011
|
CommonCrawl
|
B. Parent • AE25225 Intermediate Thermodynamics
2014 Intermediate Thermodynamics Midterm Exam
What are your favourite days to have the thermodynamics midterm quiz?
Friday April 18th 9
Sunday April 20th 4
Monday April 21st 7
Tuesday April 22nd 3
Wednesday April 23rd 6
Thursday April 24th 13
Friday April 25th 23
Sunday April 27th 38
Total votes: 103. Total voters: 59.
Please select your favourite days to have the thermodynamics midterm quiz. You can select up to 3 days. Based on your votes we will choose an optimal time for the midterm next Monday. Please vote before next Monday :)
Midterm Quiz
NO NOTES OR BOOKS; USE THERMODYNAMICS TABLES THAT WERE DISTRIBUTED; ANSWER ALL 4 QUESTIONS; ALL QUESTIONS HAVE EQUAL VALUE.
Starting from Newton's law $\vec{F}=m\vec{a}$, the first law of thermo ${\rm d}(mh) - V {\rm d}P=\delta Q -\delta W$, and the mass conservation equation in differential form, show that the 1D energy conservation in differential form corresponds to: $$ \frac{\partial}{\partial t} \rho \left( e + \frac{1}{2} v^2 + gy \right) + \frac{\partial}{\partial y} \rho v\left( h + \frac{1}{2} v^2 + gy \right) = \frac{\rho \delta Q}{m \Delta t}-\frac{\rho \delta W}{m \Delta t}$$
Consider nitrogen gas (N$_2$) at room temperature and atmospheric pressure. Do the following:
(a) Find the average number of N$_2$ molecules striking the container walls per second per square meter.
(b) A 1 m$^3$ glass bulb contains N$_2$ gas at a temperature of $300$ K and at a pressure of 1 atmosphere. The glass bulb, which is to be used in conjunction with some other experiment, is itself enclosed in a large evacuated chamber. Unfortunately the glass bulb has, unknown to the experimenter, a small pinhole about $10^{-4}$ cm radius. To assess the importance of this hole, estimate the time required for 1% of the N$_2$ molecules to escape from the bulb into the surrounding vacuum.
Consider air being heated as it flows through a constant-area duct. At the duct entrance, the air has a pressure of 2 bars, a temperature of 300 K and a speed of 90 m/s. At the duct exit, the air has a pressure of 1.5 bars and a speed of 350 m/s. Do the following:
(a) Find the temperature of the air at the duct exit
(b) Determine the heat transfer per unit mass of air flowing through the duct in J/kg.
In a gas turbine engine, the combustion products are expanded to ambient pressure through a nozzle as follows:
The gas constant and the specific heat at constant pressure of the combustion products is of $415$ J/kgK and of $1800$ J/kgK respectively and can be taken as constant throughout the nozzle. The properties at the nozzle entrance are as follows: $P_1=10$ bar, $T_1=2000$ K, $v_1=40$ m/s. Knowing that friction induces a change in specific entropy between the nozzle exit and entrance of $s_2-s_1=264.6$ J/kgK and that the pressure at the nozzle exit corresponds to $P_2=1$ bar, do the following:
(a) Find the polytropic coefficient $n$ for this process (recall the polytropic relationship $P/\rho^n={\rm constant}$).
(b) Which common thermodynamic process (isentropic, adiabatic, reversible, isochoric, isobaric, etc) is closest to the process taking place in the nozzle?
(c) For cross-sectional areas at the nozzle entrance and exit equal to $A_1=1$ m$^2$ and $A_2=0.9$ m$^2$ respectively, determine the flow speed at the nozzle exit.
2. $3.17 \times 10^{27} ~{\rm particules/m^2 s}$; 284 days.
3. 875 K, 635 kJ/kg.
4. 1.2, isentropic, 303 m/s.
|
CommonCrawl
|
Blow-up of solutions to the periodic generalized modified Camassa-Holm equation with varying linear dispersion
Bound state solutions of Schrödinger-Poisson system with critical exponent
January 2017, 37(1): 627-643. doi: 10.3934/dcds.2017026
Boundedness of solutions to a quasilinear higher-dimensional chemotaxis-haptotaxis model with nonlinear diffusion
Jiashan Zheng ,
School of Mathematics and Statistics Science, Ludong University, Yantai 264025, China
* Corresponding author: [email protected]
Received January 2016 Revised September 2016 Published November 2016
This paper deals with the Neumann problem for the coupled quasilinear chemotaxis-haptotaxis model of cancer invasion given by
$\left\{ \begin{gathered} ut = \nabla \cdot \left( {{{\left( {u + 1} \right)}^{m - 1}}\nabla u} \right) - \nabla \cdot \left( {u\nabla v} \right) - \nabla \cdot \left( {u\nabla w} \right) + u\left( {1 - u - w} \right), \hfill \\ ut = \Delta v - v + u, \hfill \\ wt = - vw, \hfill \\ \end{gathered} \right.$
where the parameter $m≥q1$ and $\mathbb{R}^N(N≥q2)$ is a bounded domain with smooth boundary. If $m>\frac{2N}{N+2}$, then for any sufficiently smooth initial data there exists a classical solution which is global in time and bounded. The results of this paper partly extend previous results of several authors.
Keywords: Boundedness, chemotaxis-haptotaxis, global existence, logistic source.
Mathematics Subject Classification: Primary:92C17, 35K55;Secondary:35K59, 35K2.
Citation: Jiashan Zheng. Boundedness of solutions to a quasilinear higher-dimensional chemotaxis-haptotaxis model with nonlinear diffusion. Discrete & Continuous Dynamical Systems, 2017, 37 (1) : 627-643. doi: 10.3934/dcds.2017026
N. D. Alikakos, $L^p $ bounds of solutions of reaction-diffusion equations, Comm. Partial Diff. Eqns., 4 (1979), 827-868. doi: 10.1080/03605307908820113. Google Scholar
N. Bellomo, N. K. Li and P. K. Maini, On the foundations of cancer modelling: Selected topics, speculations, and perspectives, Math. Models Methods Appl. Sci., 18 (2008), 593-646. doi: 10.1142/S0218202508002796. Google Scholar
N. Bellomo, A. Belloquid, Y. Tao and M. Winkler, Toward a mathematical theory of Keller-Segel models of pattern formation in biological tissues, Math. Models Methods Appl. Sci., 25 (2015), 1663-1763. doi: 10.1142/S021820251550044X. Google Scholar
X. Cao, Boundedness in a three-dimensional chemotaxis-haptotaxis model, Z. Angew. Math. Phys., 67 (2016), Art. 11-13 pp. doi: 10.1007/s00033-015-0601-3. Google Scholar
M. A. J. Chaplain and A. R. A. Anderson, Mathematical modelling of tissue invasion, in Cancer Modelling and Simulation, L. Preziosi, ed., Chapman Hall/CRC, Boca Raton, FL, (2003), 269-297. Google Scholar
M. A. J. Chaplain and G. Lolas, Mathematical modelling of cancer invasion of tissue: The role of the urokinase plasminogen activation system, Math. Models Methods Appl. Sci., 15 (2005), 1685-1734. doi: 10.1142/S0218202505000947. Google Scholar
M. A. J. Chaplain and G. Lolas, Mathematical modelling of cancer invasion of tissue: Dynamic heterogeneity, Net. Hetero. Med., 1 (2006), 399-439. doi: 10.3934/nhm.2006.1.399. Google Scholar
L. Corrias, B. Perthame and H. Zaag, A chemotaxis model motivated by angiogenesis, C. R. Acad. Sci. Paris, Ser. I., 336 (2003), 141-146. doi: 10.1016/S1631-073X(02)00008-0. Google Scholar
L. Corrias, B. Perthame and H. Zaag, Global solutions of some chemotaxis and angiogenesis systems in high space dimensions, Milan J. Math., 72 (2004), 1-28. doi: 10.1007/s00032-003-0026-x. Google Scholar
A. Friedman and G. Lolas, Analysis of a mathematical model of tumor lymphangiogenesis, Math. Models Methods Appl. Sci., 15 (2005), 95-107. doi: 10.1142/S0218202505003915. Google Scholar
D. D. Haroske and H. Triebel, Distributions, Sobolev Spaces, Elliptic Equations, European Mathematical Society, Zurich, 2008. Google Scholar
H. Hajaiej, L. Molinet, T. Ozawa and B. Wang, Necessary and sufficient conditions for the fractional Gagliardo-Nirenberg inequalities and applications to Navier-Stokes and generalized boson equations, Harmonic Analysis and Nonlinear Partial Differential Equations, in: RIMS Kôkyûroku Bessatsu, B26 (2011), 159-175. Google Scholar
M. Herrero and J. Velázquez, A blow-up mechanism for a chemotaxis model, Ann. Scuola Norm. Super. Pisa Cl. Sci., 24 (1997), 633-683. Google Scholar
T. Hillen, K. J. Painter and M. Winkler, Convergence of a cancer invasion model to a logistic chemotaxis model, Math. Models Methods Appl. Sci., 23 (2013), 165-198. doi: 10.1142/S0218202512500480. Google Scholar
D. Horstmann and M. Winkler, Boundedness vs. blow-up in a chemotaxis system, J. Diff. Eqns., 215 (2005), 52-107. doi: 10.1016/j.jde.2004.10.022. Google Scholar
J. Liu, J. Zheng and Y. Wang, Boundedness in a quasilinear chemotaxis-haptotaxis system with logistic source, Z. Angew. Math. Phys., 67 (2016), 1-33. doi: 10.1007/s00033-016-0620-8. Google Scholar
T. Nagai, T. Senba and K. Yoshida, Application of the Trudinger-Moser inequality to a parabolic system of chemotaxis, Funkc. Ekvacioj, Ser. Int., 40 (1997), 411-433. Google Scholar
S. Ishida, K. Seki and T. Yokota, Boundedness in quasilinear Keller-Segel systems of parabolic-parabolic type on non-convex bounded domains, J. Diff. Eqns., 256 (2014), 2993-3010. doi: 10.1016/j.jde.2014.01.028. Google Scholar
E. Keller and L. Segel, Initiation of slime mold aggregation viewed as an instability, J. Theor. Biol., 26 (1970), 399-415. doi: 10.1016/0022-5193(70)90092-5. Google Scholar
E. Keller and L. Segel, Traveling bands of chemotactic bacteria: A theoretical analysis, J. Theor. Biol., 30 (1971), 235-248. doi: 10.1016/0022-5193(71)90051-8. Google Scholar
G. Liţanu and C. Morales-Rodrigo, Asymptotic behavior of global solutions to a model of cell invasion, Math. Models Methods Appl. Sci., 20 (2010), 1721-1758. doi: 10.1142/S0218202510004775. Google Scholar
G. Meral, C. Stinner and C. Surulescu, On a multiscale model involving cell contractivity and its effects on tumor invasion, Discrete Contin. Dyn. Syst. Ser. B, 20 (2015), 189-213. doi: 10.3934/dcdsb.2015.20.189. Google Scholar
K. Osaki, T. Tsujikawa, A. Yagi and M. Mimura, Exponential attractor for a chemotaxisgrowth system of equations, Nonlinear Anal. TMA., 51 (2002), 119-144. doi: 10.1016/S0362-546X(01)00815-X. Google Scholar
C. Stinner, C. Surulescu and G. Meral, A multiscale model for pH-tactic invasion with time-varying carrying capacities, IMA J. Appl. Math., 80 (2015), 1300-1321. doi: 10.1093/imamat/hxu055. Google Scholar
C. Stinner, C. Surulescu and M. Winkler, Global weak solutions in a PDE-ODE system modeling multiscale cancer cell invasion, SIAM J. Math. Anal., 46 (2014), 1969-2007. doi: 10.1137/13094058X. Google Scholar
Y. Tao, Global existence of classical solutions to a combined chemotaxis-haptotaxis model with logistic source, J. Math. Anal. Appl., 354 (2009), 60-69. doi: 10.1016/j.jmaa.2008.12.039. Google Scholar
Y. Tao, Boundedness in a two-dimensional chemotaxis-haptotaxis system, Mathematics, 70 (2014), 165-174. Google Scholar
Y. Tao and M. Wang, Global solution for a chemotactic-haptotactic model of cancer invasion, Nonlinearity,, 21 (2008), 2221-2238. doi: 10.1088/0951-7715/21/10/002. Google Scholar
Y. Tao and M. Winkler, A chemotaxis-haptotaxis model: The roles of porous medium diffusion and logistic source, SIAM J. Math. Anal., 43 (2011), 685-704. doi: 10.1137/100802943. Google Scholar
Y. Tao and M. Winkler, Boundedness in a quasilinear parabolic-parabolic Keller-Segel system with subcritical sensitivity, J. Diff. Eqns., 252 (2012), 692-715. doi: 10.1016/j.jde.2011.08.019. Google Scholar
Y. Tao and M. Winkler, Boundedness and stabilization in a multi-dimensional chemotaxis-haptotaxis mode, Proceedings of the Royal Society of Edinburgh, 144 (2014), 1067-1084. doi: 10.1017/S0308210512000571. Google Scholar
Y. Tao and M. Winkler, Dominance of chemotaxis in a chemotaxis-haptotaxis model, Nonlinearity, 27 (2014), 1225-1239. doi: 10.1088/0951-7715/27/6/1225. Google Scholar
Y. Tao and M. Winkler, Energy-type estimates and global solvability in a two-dimensional chemotaxis-haptotaxis model with remodeling of non-diffusible attractant, J. Diff. Eqns., 257 (2014), 784-815. doi: 10.1016/j.jde.2014.04.014. Google Scholar
Y. Tao and M. Winkler, Large time behavior in a multidimensional chemotaxis-haptotaxis model with slow signal diffusion, SIAM J. Math. Anal., 47 (2015), 4229-4250. doi: 10.1137/15M1014115. Google Scholar
C. Walker and G. F. Webb, Global existence of classical solutions for a haptotaxis model, SIAM J. Math. Anal., 38 (2007), 1694-1713. doi: 10.1137/060655122. Google Scholar
L. Wang, Y. Li and C. Mu, Boundedness in a parabolic-parabolic quasilinear chemotaxis system with logistic source, Discrete Contin. Dyn. Syst. Ser. A., 34 (2014), 789-802. doi: 10.3934/dcds.2014.34.789. Google Scholar
Y. Wang, Boundedness in the higher-dimensional chemotaxis-haptotaxis model with nonlinear diffusion, J. Diff. Eqns., 260 (2016), 1975-1989. doi: 10.1016/j.jde.2015.09.051. Google Scholar
M. Winkler, Boundedness in the higher-dimensional parabolic-parabolic chemotaxis system with logistic source, Comm. Partial Diff. Eqns., 35 (2010), 1516-1537. doi: 10.1080/03605300903473426. Google Scholar
M. Winkler, Aggregation vs. global diffusive behavior in the higher-dimensional Keller-Segel model, J. Diff. Eqns., 248 (2010), 2889-2905. doi: 10.1016/j.jde.2010.02.008. Google Scholar
M. Winkler, Finite-time blow-up in the higher-dimensional parabolic-parabolic Keller-Segel system, J. Math. Pures Appl., 100 (2013), 748-767. doi: 10.1016/j.matpur.2013.01.020. Google Scholar
J. Zheng, Boundedness of solutions to a quasilinear parabolic-elliptic Keller-Segel system with logistic source, J. Diff. Eqns., 259 (2015), 120-140. doi: 10.1016/j.jde.2015.02.003. Google Scholar
Ling Liu, Jiashan Zheng. Global existence and boundedness of solution of a parabolic-parabolic-ODE chemotaxis-haptotaxis model with (generalized) logistic source. Discrete & Continuous Dynamical Systems - B, 2019, 24 (7) : 3357-3377. doi: 10.3934/dcdsb.2018324
Chunhua Jin. Boundedness and global solvability to a chemotaxis-haptotaxis model with slow and fast diffusion. Discrete & Continuous Dynamical Systems - B, 2018, 23 (4) : 1675-1688. doi: 10.3934/dcdsb.2018069
Pan Zheng. Global boundedness and decay for a multi-dimensional chemotaxis-haptotaxis system with nonlinear diffusion. Discrete & Continuous Dynamical Systems - B, 2016, 21 (6) : 2039-2056. doi: 10.3934/dcdsb.2016035
Pan Zheng, Chunlai Mu, Xiaojun Song. On the boundedness and decay of solutions for a chemotaxis-haptotaxis system with nonlinear diffusion. Discrete & Continuous Dynamical Systems, 2016, 36 (3) : 1737-1757. doi: 10.3934/dcds.2016.36.1737
Changchun Liu, Pingping Li. Global existence for a chemotaxis-haptotaxis model with $ p $-Laplacian. Communications on Pure & Applied Analysis, 2020, 19 (3) : 1399-1419. doi: 10.3934/cpaa.2020070
Abelardo Duarte-Rodríguez, Lucas C. F. Ferreira, Élder J. Villamizar-Roa. Global existence for an attraction-repulsion chemotaxis fluid model with logistic source. Discrete & Continuous Dynamical Systems - B, 2019, 24 (2) : 423-447. doi: 10.3934/dcdsb.2018180
Youshan Tao, Michael Winkler. A chemotaxis-haptotaxis system with haptoattractant remodeling: Boundedness enforced by mild saturation of signal production. Communications on Pure & Applied Analysis, 2019, 18 (4) : 2047-2067. doi: 10.3934/cpaa.2019092
Guoqiang Ren, Bin Liu. Global boundedness of solutions to a chemotaxis-fluid system with singular sensitivity and logistic source. Communications on Pure & Applied Analysis, 2020, 19 (7) : 3843-3883. doi: 10.3934/cpaa.2020170
Tomomi Yokota, Noriaki Yoshino. Existence of solutions to chemotaxis dynamics with logistic source. Conference Publications, 2015, 2015 (special) : 1125-1133. doi: 10.3934/proc.2015.1125
Liangchen Wang, Yuhuan Li, Chunlai Mu. Boundedness in a parabolic-parabolic quasilinear chemotaxis system with logistic source. Discrete & Continuous Dynamical Systems, 2014, 34 (2) : 789-802. doi: 10.3934/dcds.2014.34.789
Xiangdong Zhao. Global boundedness of classical solutions to a logistic chemotaxis system with singular sensitivity. Discrete & Continuous Dynamical Systems - B, 2021, 26 (9) : 5095-5100. doi: 10.3934/dcdsb.2020334
Ke Lin, Chunlai Mu. Global dynamics in a fully parabolic chemotaxis system with logistic source. Discrete & Continuous Dynamical Systems, 2016, 36 (9) : 5025-5046. doi: 10.3934/dcds.2016018
Pan Zheng, Chunlai Mu, Xuegang Hu. Boundedness and blow-up for a chemotaxis system with generalized volume-filling effect and logistic source. Discrete & Continuous Dynamical Systems, 2015, 35 (5) : 2299-2323. doi: 10.3934/dcds.2015.35.2299
Shijie Shi, Zhengrong Liu, Hai-Yang Jin. Boundedness and large time behavior of an attraction-repulsion chemotaxis model with logistic source. Kinetic & Related Models, 2017, 10 (3) : 855-878. doi: 10.3934/krm.2017034
Lu Xu, Chunlai Mu, Qiao Xin. Global boundedness of solutions to the two-dimensional forager-exploiter model with logistic source. Discrete & Continuous Dynamical Systems, 2021, 41 (7) : 3031-3043. doi: 10.3934/dcds.2020396
Johannes Lankeit, Yulan Wang. Global existence, boundedness and stabilization in a high-dimensional chemotaxis system with consumption. Discrete & Continuous Dynamical Systems, 2017, 37 (12) : 6099-6121. doi: 10.3934/dcds.2017262
Rachidi B. Salako, Wenxian Shen. Existence of traveling wave solutions to parabolic-elliptic-elliptic chemotaxis systems with logistic source. Discrete & Continuous Dynamical Systems - S, 2020, 13 (2) : 293-319. doi: 10.3934/dcdss.2020017
Ke Lin, Chunlai Mu. Convergence of global and bounded solutions of a two-species chemotaxis model with a logistic source. Discrete & Continuous Dynamical Systems - B, 2017, 22 (6) : 2233-2260. doi: 10.3934/dcdsb.2017094
Chunhua Jin. Global classical solution and stability to a coupled chemotaxis-fluid model with logistic source. Discrete & Continuous Dynamical Systems, 2018, 38 (7) : 3547-3566. doi: 10.3934/dcds.2018150
Langhao Zhou, Liangwei Wang, Chunhua Jin. Global solvability to a singular chemotaxis-consumption model with fast and slow diffusion and logistic source. Discrete & Continuous Dynamical Systems - B, 2021 doi: 10.3934/dcdsb.2021122
Jiashan Zheng
|
CommonCrawl
|
Search Results: 1 - 10 of 100 matches for " "
Page 1 /100
Spin 1 particle in the magnetic monopole potential, nonrelativistic approximation. Minkowski and Lobachevski spaces [PDF]
O. V. Veko,K. V. Kazmerchuk,E. M. Ovsiyuk,V. V. Kisel,A. M. Ishkhanyan,V. M. Red'kov
Physics , 2014,
Abstract: The spin 1 particle is treated in the presence of the Dirac magnetic monopole in the Minkowski and Lobachevsky spaces. Separating the variables in the frame of the matrix 10-component Duffin-Kemer-Petiau approach (wave equation) and making a nonrelativistic approximation in the corresponding radial equations, a system of three coupled second order linear differential equations is derived for each type of geometry. For the Minkowski space, the nonrelativistic equations are disconnected using a linear transformation, which makes the mixing matrix diagonal. The resultant three unconnected equations involve three routs of a cubic algebraic equation as parameters. The approach allows extension to the case of additional external spherically symmetric fields. The Coulomb and oscillator potentials are considered and for each of these cases three series of energy spectra are derived. A special attention is given to the states with minimum value of the total angular momentum. In the case of the curved background of the Lobachevsky geometry, the mentioned linear transformation does not disconnect the nonrelativistic equations in the presence of the monopole. Nevertheless, we derive the solution of the problem in the case of minimum total angular momentum. In this case, we additionally involve a Coulomb or oscillator field. Finally, considering the case without the monopole field, we show that for both Coulomb and oscillator potentials the problem is reduced to a system of three differential equations involving a hypergeometric and two general Heun equations. Imposing on the parameters of the latter equations a specific requirement, reasonable from the physical standpoint, we derive the corresponding energy spectra.
Confluent Heun functions and the Coulomb problem for spin 1/2 particle in Minkowski space [PDF]
V. Balan,A. M. Manukyan,E. M. Ovsiyuk,V. M. Red'kov,O. V. Veko
Abstract: In the paper, the well-known quantum mechanical problem of a spin 1/2 particle in external Coulomb potential, reduced to a system of two first-order differential equations, is studied from the point of view of possible applications of the Heun function theory to treat this system. It is shown that in addition to the standard way to solve the problem in terms of the confluent hypergeometric functions (proposed in 1928 by G. Darvin and W. Gordon), there are possible several other possibilities which rely on applying the confluent Heun functions. Namely, in the paper there are elaborated two combined possibilities to construct solutions: the first applies when one equation of the pair of relevant functions is expressed trough hypergeometric functions, and another constructed in terms of confluent Heun functions. In this respect, certain relations between the two classes of functions are established. It is shown that both functions of the system may be expressed in terms of confluent Heun functions. All the ways to study this problem lead us to a single energy spectrum, which indicates their correctness.
Klauder's coherent states for the radial Coulomb problem in a uniformly curved space and their flat-space limits [PDF]
Myo Thaik,Akira Inomata
Physics , 2004, DOI: 10.1088/0305-4470/38/8/012
Abstract: First a set of coherent states a la Klauder is formally constructed for the Coulomb problem in a curved space of constant curvature. Then the flat-space limit is taken to reduce the set for the radial Coulomb problem to a set of hydrogen atom coherent states corresponding to both the discrete and the continuous portions of the spectrum for a fixed \ell sector.
The hypergeneralized Heun equation in QFT in curved space-times [PDF]
Davide Batic,Manuel Sandoval
Physics , 2008, DOI: 10.2478/s11534-009-0107-8
Abstract: In this article we show for the first time the role played by the hypergeneralized Heun equation (HHE) in the context of Quantum Field Theory in curved space-times. More precisely, we find suitable transformations relating the separated radial and angular parts of a massive Dirac equation in the Kerr-Newman-deSitter metric to a HHE.
The generalized Heun equation in QFT in curved space-times [PDF]
Davide Batic,Harald Schmid,Monika Winklmeier
Physics , 2006, DOI: 10.1088/0305-4470/39/40/019
Abstract: In this article we give a brief outline of the applications of the generalized Heun equation (GHE) in the context of Quantum Field Theory in curved space-times. In particular, we relate the separated radial part of a massive Dirac equation in the Kerr-Newman metric and the static perturbations for the non-extremal Reissner-Nordstr\"{o}m solution to a GHE.
The Coulomb problem on a 3-sphere and Heun polynomials [PDF]
Stefano Bellucci,Vahagn Yeghikyan
Physics , 2013, DOI: 10.1063/1.4817487
Abstract: The paper studies the quantum mechanical Coulomb problem on a 3-sphere. We present a special parametrization of the ellipto-spheroidal coordinate system suitable for the separation of variables. After quantization we get the explicit form of the spectrum and present an algebraic equation for the eigenvalues of the Runge-Lentz vector. We also present the wave functions expressed via Heun polynomials.
Examples of Heun and Mathieu functions as solutions of wave equations in curved spaces [PDF]
T. Birkandan,M. Hortacsu
Mathematics , 2006, DOI: 10.1088/1751-8121/40/36/C01
Abstract: We give examples where the Heun function exists in general relativity. It turns out that while a wave equation written in the background of certain metric yields Mathieu functions as its solutions in four space-time dimensions, the trivial generalization to five dimensions results in the double confluent Heun function. We reduce this solution to the Mathieu function with some transformations.
Research of Gravitation in Flat Minkowski Space [PDF]
Kostadin Trencevski,Emilija G. Celakoska,Vladimir Balan
Physics , 2004, DOI: 10.1007/s10773-010-0488-x
Abstract: In this paper it is introduced and studied an alternative theory of gravitation in flat Minkowski space. Using an antisymmetric tensor, which is analogous to the tensor of electromagnetic field, a non-linear connection is introduced. It is very convenient for studying the perihelion/periastron shift, deflection of the light rays near the Sun and the frame dragging together with geodetic precession, i.e. effects where angles are involved. Although the corresponding results are obtained in rather different way, they are the same as in the General Relativity. The results about the barycenter of two bodies are also the same as in the General Relativity. Comparing the derived equations of motion for the $n$-body problem with the Einstein-Infeld-Hoffmann equations, it is found that they differ from the EIH equations by Lorentz invariant terms of order $c^{-2}$.
Soft-core Coulomb potentials and Heun's differential equation [PDF]
Richard L. Hall,Nasser Saad,K. D. Sen
Abstract: Schroedinger's equation with the attractive potential V(r) = -Z/(r^q+ b^q)^(1/q), Z > 0, b > 0, q >= 1, is shown, for general values of the parameters Z and b, to be reducible to the confluent Heun equation in the case q=1, and to the generalized Heun equation in case q=2. In a formulation with correct asymptotics, the eigenstates are specified a priori up to an unknown factor. In certain special cases this factor becomes a polynomial. The Asymptotic Iteration Method is used either to find the polynomial factor and the associated eigenvalue explicitly, or to construct accurate approximations for them. Detail solutions for both cases are provided.
A Curved Brunn-Minkowski Inequality for the Symmetric Group [PDF]
Weerachai Neeranartvong,Jonathan Novak,Nat Sothanaphan
Mathematics , 2015,
Abstract: In this paper, we construct an injection $A \times B \rightarrow M \times M$ from the product of any two nonempty subsets of the symmetric group into the square of their midpoint set, where the metric is that corresponding to the conjugacy class of transpositions. If $A$ and $B$ are disjoint, our construction allows to inject two copies of $A \times B$ into $M \times M$. These injections imply a positively curved Brunn-Minkowski inequality for the symmetric group analogous to that obtained by Ollivier and Villani for the hypercube. However, while Ollivier and Villani's inequality is optimal, we believe that the curvature term in our inequality can be improved. We identify a hypothetical concentration inequality in the symmetric group and prove that it yields an optimally curved Brunn-Minkowski inequality.
|
CommonCrawl
|
Integration: It's more than the sum of its parts
Defining what exactly an integral is leads naturally to an explanation of how to handle approximating them.
Trapezium rule. Image created using WolframAlpha
Tom Rivlin
When you're taught integration in school, it can look weird and random. What's that curly line? Why do you have to write "dee ecks" after the function like it's a magic spell? You're taught that it finds the area under a curve, and that it's the opposite of differentiation. But what's often lost in-between learning tricks like integration by parts is a sense of what integration is and what it means. It turns out that explaining this leads naturally to an explanation of a huge area of integration overlooked in school but vital to science and engineering: numerical integration.
One definition of the integral is that it's the area under a curve for a given function. But how is this area determined? The original idea behind integration, which Isaac Newton and Gottfried Leibnitz came up with in the 1600s, and Bernhard Riemann later improved in the 1800s, is that you split the area under the curve into chunks you know the areas of, then add those areas up. This is how you'd estimate the area, but the clever ideas the founders of calculus came up with let you turn those estimates into exact values.
The Riemann Sum is one way to define integration, and to construct the sum you do the following:
Take the curve between two points $x=a$ and $x=b$, then draw $n$ evenly spaced vertical lines from the curve to the x-axis between $a$ and $b$. Then draw horizontal lines between the vertical lines to turn the $n$ lines into $n-1$ rectangles. You can work out the width of each rectangle, $\Delta x = \frac{b-a}{n-1}$ (the range of the integration divided by the number of chunks it's being split into). You can also work out the height of each rectangle: it's the value of the function at one of the corners: $f(x_i)$.
The 'rectangle rule'. Image credit: Wikimedia
So the area of the rectangle at point $x_i$ is $f(x_i)\Delta x$, so to get the total area under the curve you just add up the areas of the rectangles. In summation notation:
$$Area = \sum_{i=1}^{n}f(x_i)\Delta x.$$
But this doesn't tell the full story – the rectangles don't capture all the area under the curve in the figure above. For other setups you could imagine the rectangles capturing too much area. How can this be a definition of the integral if it doesn't give the right answer?
The solution? The more rectangles you use, the closer you get to the exact answer. So just use infinitely many rectangles! Of course, to do this, each one will need to be infinitely thin: $\Delta x$ needs to become zero, and you need to evaluate the function at every point along the curve between $a$ and $b$ (all infinity of them). This is impossible with a normal sum, so the clever ideas behind Riemann's sum (going off ideas invented by Newton) involve mathematical tricks to make sense of a sum over infinitely many infinitely thin rectangles.
In this sum, the $\Delta x$ is replaced with $dx$, which represents an infinitely thin slice, and the sum symbol $\sum$ is replaced with an integral symbol $\int$. The integration symbol was invented to look like an elongated letter s – to show that it's just a different type of sum. That's also why you need to write $dx$ in all integrals – you need to multiply the function by the infinitely thin slice. When you do all of this, your inexact approximation becomes the exact area. The everyday rules for integrating functions like $x^2$ can be derived from this definition.
Integration is hard. That means it's often not possible to work out the exact integral of a function. When that's the case, mathematicians and scientists need to approximate the integral. This is called numerical integration. It's used by everyone from weather forecasters, to quantum physicists, to stock market traders, to nuclear fusion researchers. They all use it to find solutions to equations which are impossible to solve exactly.
The Riemann sum gives an easy-to-use formula to do this approximation (it's often called the 'rectangle rule'). One problem with it, however, is that it's inefficient. You need lots of rectangles to get a good approximation. (The figure above demonstrates how much space is 'wasted' with each rectangle.) So, over the years people invented more efficient techniques to approximate integrals. The trapezium rule (or trapezoidal rule for all you Yanks reading this) is the next stage up for accuracy.
The basic idea behind the trapezium rule is you add right-angled triangles on top of all the rectangles, like so:
The trapezium rule. Image credit Wikimedia
Now, instead of adding the areas of loads of rectangles, you add loads of trapeziums (a rectangle with a triangle on top). Again, the more trapeziums you add, the more accurate your sum is, but the difference is that you need fewer trapeziums to get an accurate answer. Comparing the two figures above: the same number of trapeziums fits the curve much better than the rectangles do. This is important because we want to work out the area as accurately as possible using as few computations as possible, so any method that gives you better accuracy for the same 'price' is preferable.
There's a whole universe of weird and wild numerical integration techniques out there. The trapezium rule draws straight lines between points on the function. It might be better in some cases to draw curves between points: the simplest version of this is called Simpson's rule. More complicated versions (using higher orders of a Taylor expansion) are called Newton-Coates formulas, or Gaussian quadratures, or countless others. Some approximate areas by picking points at random and using statistics. Some use points which aren't uniformly spaced between the end points. Some are better at integrating in higher dimensions.
Implementing the right method for the function you're integrating is as much a craft as it is a science. An expert in numerical integration is like a blacksmith forging a weapon: a keen eye is needed to know the right tools for each job, and a lifetime of experience is needed to get numerical integration working just right.
This post is part of a series for the UCL Year 12 Maths Research Summer Programme where year 12 students investigate an area of maths under the guidance of a Chalkdust member and PhD student at UCL.The closing event will celebrate the work of the year 12 students and will feature a talk from Prof. Lucie Green, a space scientist and TV & radio presenter. The closing event is on Thursday 12th July and is open to the public. If you would like more information about the programme or to attend the closing event please contact Dr Luciano Rila [email protected] .
Tom is a PhD student in the UCL Physics Department, simulating atomic collisions. He likes to think that what he does 'technically counts as maths.'
@TomRivlin tomrivlin.com All articles by Tom
Digging for roots in the complex plane
Bring your shovel, and dig with us to unearth some amazing results about polynomials
On the cover: Hydrogen orbitals
Find out more about the weird shapes on the cover of Issue 08
Start your quest to conquer the planet with this introduction to the wonderful world of machine learning
The Chalkdust Christmas card
This year's Chalkdust puzzle Christmas card
The n days of Christmas
Get into the ChristMATHS spirit with the maths behind the popular Christmas carol!
What's hot and what's not, Issue 03
Maths is a fickle world. Stay à la mode with our guide to the latest trends.
← Fun phenomena in fluids
Heads and Tails →
One thought on "Integration: It's more than the sum of its parts"
Pingback: Optimizing campaign spend with the Blended Same-Month Return metric | Original Content
|
CommonCrawl
|
Stochastic modelling and analysis of harvesting model: Application to "summer fishing moratorium" by intermittent control
DCDS-B Home
Existence-uniqueness and stability of the mild periodic solutions to a class of delayed stochastic partial differential equations and its applications
doi: 10.3934/dcdsb.2020287
A flow on $ S^2 $ presenting the ball as its minimal set
Tiago Carvalho 1, and Luiz Fernando Gonçalves 2,,
Departamento de Computação e Matemática, Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto, Universidade de São Paulo, Avenida Bandeirantes, 3900, zip code 14040-901, Ribeirão Preto, SP, Brazil
Instituto Federal de Educação, Ciência e Tecnologia de Minas Gerais, Rua São Luiz Gonzaga, zip code 35577-020, Formiga, MG, Brazil
* Corresponding author: Luiz Fernando Gonçalves
Received April 2020 Revised August 2020 Published October 2020
Figure(10)
The main goal of this paper is to present the existence of a vector field tangent to the unit sphere $ S^2 $ such that $ S^2 $ itself is a minimal set. This is reached using a piecewise smooth (discontinuous) vector field and following the Filippov's convention on the switching manifold. As a consequence, none regularization process applied to the initial model can be topologically equivalent to it and we obtain a vector field tangent to $ S^2 $ without equilibria.
Keywords: Piecewise smooth vector field, Limit sets, Minimal sets, Hairy ball Theorem, Regularization.
Mathematics Subject Classification: Primary:34A36;34A12;37C10;37E35.
Citation: Tiago Carvalho, Luiz Fernando Gonçalves. A flow on $ S^2 $ presenting the ball as its minimal set. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2020287
D. C. Braga, A. F. da Fonseca and L. F. Mello, Study of limit cycles in piecewise smooth perturbations of Hamiltonian centers via regularization method, Electronic Journal of Qualitative Theory of Differential Equations, 79 (2017), 1-13. doi: 10.14232/ejqtde.2017.1.79. Google Scholar
L. E. J. Brouwer, On continuous vector distributions on surfaces, in Proceedings of the Royal Netherlands Academy of Arts and Sciences (KNAW), 11 (1909), 850–858, https://www.dwc.knaw.nl/DL/publications/PU00013599.pdf. Google Scholar
C. A. Buzzi, T. de Carvalho and R. D. Euzébio, Chaotic planar piecewise smooth vector fields with non-trivial minimal sets, Ergodic Theory and Dynamical Systems, 36 (2016), 458-469. doi: 10.1017/etds.2014.67. Google Scholar
C. A. Buzzi, T. Carvalho and R. D. Euzébio, On Poincaré-Bendixson theorem and non-trivial minimal sets in planar nonsmooth vector fields, Publicacions Matemàtiques, 62 (2018), 113-131. doi: 10.5565/PUBLMAT6211806. Google Scholar
T. Carvalho and L. F. Gonçalves, Combing the hairy ball using a vector field without equilibria, Journal of Dynamical and Control Systems, 26 (2020), 233-242. doi: 10.1007/s10883-019-09446-5. Google Scholar
R. Cristiano, T. Carvalho, D. J. Tonon and D. J. Pagano, Hopf and Homoclinic bifurcations on the sliding vector field of switching systems in $\mathbb{R}^3$: A case study in power electronics, Physica D: Nonlinear Phenomena, 347 (2017), 12-20. doi: 10.1016/j.physd.2017.02.005. Google Scholar
T. Carvalho, D. D. Novaes and L. F. Gonçalves, Sliding Shilnikov connection in Filippov-type predator-prey model, Nonlinear Dynamics, 100 (2020), 2973-2987. Google Scholar
T. de Carvalho, On the closing lemma for planar piecewise smooth vector fields, Journal de Mathématiques Pures et Appliquées, 106 (2016), 1174-1185. doi: 10.1016/j.matpur.2016.04.006. Google Scholar
T. de Carvalho and D. J. Tonon, Generic bifurcations of planar Filippov systems via geometric singular perturbations, Bull. Belg. Math. Soc. Simon Stevin, 18 (2011), 861-881. Google Scholar
A. Denjoy, Sur les courbes définies par les équations différentielles à la surface du tore, Journal de Mathématiques Pures et Appliquées, 11 (1932), 333–376, http://eudml.org/doc/234887. Google Scholar
M. di Bernardo, K. H. Johansson and F. Vasca, Self-oscillations and sliding in relay feedback systems: Symmetry and bifurcations, International Journal of Bifurcation and Chaos, 11 (2001), 1121-1140. Google Scholar
D. D. Dixon, Piecewise deterministic dynamics from the application of noise to singular equations of motion, Journal of Physics A: Mathematical and General, 28 (1995), 5539-5551. Google Scholar
N. M. Drissa, Fixed Point, Game and Selection Theory: From the Hairy Ball Theorem to A Non Hair-Pulling Conversation, PhD thesis, Université Paris 1 Panthéon-Sorbonne, 2016, http://hdl.handle.net/10579/8840. Google Scholar
A. F. Filippov, Differential Equations with Discontinuous Righthand Sides, Mathematics and its Applications, 1st edition, Springer Netherlands, 1988. doi: 10.1007/978-94-015-7793-9. Google Scholar
C. Gutiérrez, Smoothing continuous flows on two-manifolds and recurrences, Ergodic Theory and Dynamical Systems, 6 (1986), 17-44. doi: 10.1017/S0143385700003278. Google Scholar
A. Jacquemard and D. J. Tonon, Coupled systems of non-smooth differential equations, Bulletin des Sciences Mathématiques, 136 (2012), 239-255. doi: 10.1016/j.bulsci.2012.01.006. Google Scholar
T. Kousaka, T. Kido, T. Ueta, H. Kawakami and M. Abe, Analysis of border-collision bifurcation in a simple circuit, 2000 IEEE International Symposium on Circuits and Systems. Emerging Technologies for the 21st Century. Proceedings (IEEE Cat No.00CH36353), 2 (2000), 481-484. Google Scholar
V. Křivan, On the gause predator-prey model with a refuge: A fresh look at the history, Journal of Theoretical Biology, 274 (2011), 67-73. doi: 10.1016/j.jtbi.2011.01.016. Google Scholar
R. Leine and H. Nijmeijer, Dynamics and Bifurcations of Non-Smooth Mechanical Systems, Lecture Notes in Applied and Computational Mechanics, 1st edition, Springer-Verlag Berlin Heidelberg, 2004. Google Scholar
J. Llibre, P. R. Silva and M. A. Teixeira, Regularization of discontinuous vector fields on $\mathbb{R}^3$ via singular perturbation, Journal of Dynamics and Differential Equations, 19 (2007), 309-331. Google Scholar
J. Llibre and M. A. Teixeira, Regularization of discontinuous vector fields in dimension three, Discrete & Continuous Dynamical Systems - A, 3 (1997), 235-241. doi: 10.3934/dcds.1997.3.235. Google Scholar
J. Milnor, Analytic proofs of the "hairy ball theorem" and the brouwer fixed point theorem, The American Mathematical Monthly, 85 (1978), 521-524. doi: 10.2307/2320860. Google Scholar
L. Perko, Differential Equations and Dynamical Systems, Texts in Applied Mathematics, 3rd edition, Springer-Verlag New York, 2001. doi: 10.1007/978-1-4613-0003-8. Google Scholar
S. H. Piltz, M. A. Porter and P. K. Maini, Prey switching with a linear preference trade-off, SIAM Journal on Applied Dynamical Systems, 13 (2014), 658-682. doi: 10.1137/130910920. Google Scholar
D. S. Rodrigues, P. F. A. Mancera, T. Carvalho and L. F. Gonçalves, Sliding mode control in a mathematical model to chemoimmunotherapy: The occurrence of typical singularities, Applied Mathematics and Computation, 387 (2020), 124782. doi: 10.1016/j.amc.2019.124782. Google Scholar
F. D. Rossa and F. Dercole, Generic and generalized boundary operating points in piecewise-linear (discontinuous) control systems, in 2012 IEEE 51st IEEE Conference on Decision and Control (CDC), (2012), 7714–7719. Google Scholar
A. J. Schwartz, A generalization of a Poincaré-Bendixson Theorem to closed two-dimensional manifolds, American Journal of Mathematics, 85 (1963), 453-458. Google Scholar
P. A. Schweitzer, Counterexamples to the Seifert Conjecture and opening closed leaves of foliations, Annals of Mathematics, 100 (1974), 386-400. doi: 10.2307/1971077. Google Scholar
J. Sotomayor and A. L. F. Machado, Structurally stable discontinuous vector fields in the plane, Qualitative Theory of Dynamical Systems, 3 (2002), 227-250. doi: 10.1007/BF02969339. Google Scholar
J. Sotomayor and M. A. Teixeira, Regularization of discontinuous vector fields, in International Conference on Differential Equations, Lisboa, 1995, World Scientific Publishing, (1998), 207–223. Google Scholar
E. T. Whittaker and G. Robinson, The Calculus of Observations: A Treatise on Numerical Mathematics, 4th edition, Blackie & Son limited, 1954. Google Scholar
Figure 1. Sliding Vector Field
Figure 2. The $ \omega $-limit of $ p $ is disconnected
Figure 3. Item (a) shows the projection of the trajectories of $ X $ on $ S^2 $ by $ \pi_N $. In (b) we have the projection of the trajectories of $ Y $ by $ \pi_S $.
Figure 4. Trajectories in $ S^2 $
Figure 5. Displacement function
Figure 6. Trajectories of the vector field $ Z_1 $
Figure 8. Piecewise smooth vector field $ Z_1 $ and region $ K_1 $
Figure 10. Trajectory in $ S^2 $ passing through $ p = (-1,0,0) $
Meilan Cai, Maoan Han. Limit cycle bifurcations in a class of piecewise smooth cubic systems with multiple parameters. Communications on Pure & Applied Analysis, 2021, 20 (1) : 55-75. doi: 10.3934/cpaa.2020257
Huanhuan Tian, Maoan Han. Limit cycle bifurcations of piecewise smooth near-Hamiltonian systems with a switching curve. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020368
Jesús A. Álvarez López, Ramón Barral Lijó, John Hunton, Hiraku Nozawa, John R. Parker. Chaotic Delone sets. Discrete & Continuous Dynamical Systems - A, 2021 doi: 10.3934/dcds.2021016
Riccarda Rossi, Ulisse Stefanelli, Marita Thomas. Rate-independent evolution of sets. Discrete & Continuous Dynamical Systems - S, 2021, 14 (1) : 89-119. doi: 10.3934/dcdss.2020304
Wolfgang Riedl, Robert Baier, Matthias Gerdts. Optimization-based subdivision algorithm for reachable sets. Journal of Computational Dynamics, 2021, 8 (1) : 99-130. doi: 10.3934/jcd.2021005
Vito Napolitano, Ferdinando Zullo. Codes with few weights arising from linear sets. Advances in Mathematics of Communications, 2020 doi: 10.3934/amc.2020129
Lisa Hernandez Lucas. Properties of sets of Subspaces with Constant Intersection Dimension. Advances in Mathematics of Communications, 2021, 15 (1) : 191-206. doi: 10.3934/amc.2020052
Héctor Barge. Čech cohomology, homoclinic trajectories and robustness of non-saddle sets. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020381
Tinghua Hu, Yang Yang, Zhengchun Zhou. Golay complementary sets with large zero odd-periodic correlation zones. Advances in Mathematics of Communications, 2021, 15 (1) : 23-33. doi: 10.3934/amc.2020040
Nicola Pace, Angelo Sonnino. On the existence of PD-sets: Algorithms arising from automorphism groups of codes. Advances in Mathematics of Communications, 2021, 15 (2) : 267-277. doi: 10.3934/amc.2020065
Jianfeng Huang, Haihua Liang. Limit cycles of planar system defined by the sum of two quasi-homogeneous vector fields. Discrete & Continuous Dynamical Systems - B, 2021, 26 (2) : 861-873. doi: 10.3934/dcdsb.2020145
Dan Zhu, Rosemary A. Renaut, Hongwei Li, Tianyou Liu. Fast non-convex low-rank matrix decomposition for separation of potential field data using minimal memory. Inverse Problems & Imaging, 2021, 15 (1) : 159-183. doi: 10.3934/ipi.2020076
Magdalena Foryś-Krawiec, Jiří Kupka, Piotr Oprocha, Xueting Tian. On entropy of $ \Phi $-irregular and $ \Phi $-level sets in maps with the shadowing property. Discrete & Continuous Dynamical Systems - A, 2021, 41 (3) : 1271-1296. doi: 10.3934/dcds.2020317
Peng Luo. Comparison theorem for diagonally quadratic BSDEs. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020374
Monia Capanna, Jean C. Nakasato, Marcone C. Pereira, Julio D. Rossi. Homogenization for nonlocal problems with smooth kernels. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020385
Vivina Barutello, Gian Marco Canneori, Susanna Terracini. Minimal collision arcs asymptotic to central configurations. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 61-86. doi: 10.3934/dcds.2020218
Manxue You, Shengjie Li. Perturbation of Image and conjugate duality for vector optimization. Journal of Industrial & Management Optimization, 2020 doi: 10.3934/jimo.2020176
Mark F. Demers. Uniqueness and exponential mixing for the measure of maximal entropy for piecewise hyperbolic maps. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 217-256. doi: 10.3934/dcds.2020217
Kha Van Huynh, Barbara Kaltenbacher. Some application examples of minimization based formulations of inverse problems and their regularization. Inverse Problems & Imaging, , () : -. doi: 10.3934/ipi.2020074
2019 Impact Factor: 1.27
Tiago Carvalho Luiz Fernando Gonçalves
\begin{document}$ S^2 $\end{document} presenting the ball as its minimal set" readonly="readonly">
|
CommonCrawl
|
Does Changing Speed of Light Violate Energy Conservation?
Joe Spears MS
Setterfield
One problem for young-earth creationists (those claiming that God created the universe only thousands of years ago, rather than billions of years ago) has been the apparent great age of distant galaxies.
Decay of the speed of light
Several theories have been advanced to deal with the above problem, including the idea that the speed of light, known as c, was greater in the past than it is now. Therefore, according to this idea, the faster light in the ancient universe reached earth sooner than astronomers suspect because they assume a much slower speed for light. That is to say, it did not take so long (billions of years, for example) for light from stars and galaxies to reach the earth, and, consequently, they need not be so old (billions of years, for example). The idea that the speed of light was greater in the past, and that it has decayed or slowed down to what is now currently observed, is commonly referred to as CDK, for C (speed of light) decay (which sounds like "Dee-Kay" or "DK"). In this article, "CDK" will be used to refer to the decay of the speed of light.
Energy conservation issue
There have been several arguments given both in support of and also denying the possibility of CDK. One argument is that the decay of the speed of light was not possible, because CDK would violate the law of the conservation of energy (really, conservation of mass-energy). 1
The argument against CDK, using violation of energy conservation, goes as follows. If the speed of light was faster in the past, then by the equation E = mc2 energy would have been greater in the past since increasing c in this equation results in increasing E, or energy. But, E should remain constant in order to obey the law of conservation of energy. So CDK would violate energy conservation, and thus CDK could not have happened.
However, this argument is not valid. In this article we shall examine this commonly asserted reason that CDK could not be true and show why it is not valid, with a mathematical derivation that E, energy, remains constant when c changes for some CDK models.
The result will be shown that energy conservation is not violated by CDK. (Other arguments, such as whether relativity is violated by CDK, will not be dealt with in this article due to space limits.)
In this section we shall examine the assumptions of both the energy-conservation violation argument against CDK and the assumptions used to refute that argument.
Assumptions for violation of energy conservation by CDK
We will show in this paper that the law of conservation of energy is not violated by CDK! How so?
The argument that energy conservation is violated by CDK assumes that one factor, m, in the equation E = mc2 is constant and does not change when c changes. We shall show that this assumption is false.
Below is a mathematical derivation showing that m would indeed change when c changes, and what is more, that m will change in exactly such a manner as to cancel out the effect of increasing c on energy, E. Thus, the derivation shows that energy conservation would not be violated by CDK.
Assumptions for mathematical disproof of violation of energy conservation by CDK
Assumptions 1, 2, and 3: accepted conventional formulae
One might ask regarding the assumptions used in the mathematical derivation in this article, "What are the assumptions going into this math?" Are those assumptions really valid; or are they assumed, merely to make the derivation work, without any evidence from observational data?
Let's look at the assumptions for the following mathematical derivation— the first three of which are conventional and widely accepted— and also look at the observational evidence for the other assumptions.
First, we will tacitly accept that the following three key equations for this proof are valid:
$$E = mc^2$$
$$E = hf$$
$$f = c/w$$
where E is energy, f is frequency, w is wavelength, c is the speed of light, and h is Planck's constant.
We note that these equations, though listed as assumptions, are part of accepted conventional physics and can be found in many physics textbooks.
The final two assumptions are also listed as assumptions, but there is observational data supporting both of them.
Assumption 4: hc is constant
In addition to the above, we shall also assume that Planck's constant (h) is inversely proportional to the speed of light (c). In other words, the product hc is a constant, or h = k/c, where k is some constant.
The product of h and c, or hc, has been measured to be constant to within parts per million. 2 3 4 5 6
Also, while c has been measured as decreasing over time, h has been measured as increasing over time.
Note the upward trend in the measured values of Planck's constant (h) in Figure 1 and the downward trend in the measured values of the speed of light (c) in Figure 2. The date range is not identical, but there is some small overlap. The trends show that while h was measured as increasing, c was measured to be decreasing. Figure 4 shows more recent values for the speed of light, also going back further in time, past 1750. The downward trend in c is also apparent there. Figure 3 shows measurements from the same location using the same instruments, which should remove instrumental error; yet still we see the downward trend of the measured values of the speed of light.
Figure 1. Measured values of Planck's constant (vertical axis) x 10^34 J×s.
Figure 2. Measured values of light speed (vertical axis) x 10^-8 m/s.
Figure 3. Pulkovo values of light speed (vertical axis) expressed as the deviation from the 1935 value x 10^-3 m/s.
Figure 4. More recent values of light speed (vertical axis) x 10^-3 m/s.
So, if h is shown to be increasing while c is decreasing, and their product has been found to be constant, 2 3 4 5 6 then the only obvious conclusion is that they (h and c) are inversely proportional.
We note that we saw above that this assumption is supported by evidence from measured data.
The fine structure constant
Other evidence for h being inversely proportional to c is found in measurements of the fine structure constant, which shows very little change (stable to one part in a million), 7 8 in spite of the measured changes in both h and c. The fine structure constant is composite, and its formula includes both h and c. The measured changes in h and c, and the measured extremely small variation in the fine structure constant, can be explained by, and provide support for, h and c being inversely proportional.
The formula for the fine structure constant is below:
\[ \alpha=\frac{e^2}{\epsilon}\frac{1}{2hc} \] where \( \epsilon \) is the electric permittivity of free space, and \( e \) is the charge of the electron. (The further explanation of \( \epsilon \) and the other components of the fine structure constant is outside the scope of this article.) It can be noted that if h and c are both changing, while \( \alpha \) is not changing, this suggests that h and c are inversely proportional.
Assumption 5: constant wavelength of light from galaxies
The other assumption is that the wavelength of photons of light, that travel from distant galaxies to arrive and be observed (detected) on earth, is also constant.
One reason often cited for rejecting this assumption is the conventional red shift explanations. These explain red shift by the stretching of the wavelength of light due to the expansion of the universe, and/or by the Doppler effect, which would likewise stretch out the waves of light. However, both these explanations of red shift have problems. One problem is that different red shifts have been measured for connected objects. Connected objects are obviously in close proximity and should have the same or similar red shifts. Measured different red shifts for these objects is a problem. Another observation that conventional red shift interpretations fail to explain is the apparent quantization of the red shift data.
However, these explanations are not the only explanations for red shift data! Another explanation of the red shift that avoids these problems is simple; it is merely that the light was originally redder when it was originally emitted from the distant galaxies, and that each photon (or wave) of light maintained that same (redder) wavelength while continuing on its journey to earth. According to other, more conventional, explanations of red shift, the wavelength of light changes because of the expansion of space or because of the Doppler effect. Narlikar and Arp showed that the universe need not be expanding in order to avoid collapse. 9 So it is not necessary to assume expanding space, nor the conventional explanations for red shift. Therefore, we shall assume the red shift occurs at the time of emission, rather than during transit.
Observations About Assumptions
We note that all of the assumptions made in the refutation of the violation of CDK are well accepted in conventional physics, and/or are supported by observational evidence, and/or are supported by calculations, and/or solve other problems. There are good scientific reasons for these assumptions, not just ad-hoc reasons to prove Genesis.
On the other hand, we note that there is a single assumption, that of mass being constant while c changes in the equation E = mc2, on which hinges the claim that CDK violates energy conservation! Regarding the idea that mass can never change, note that mass does indeed change according to Einstein's theory of special relativity (where m is mass and v is velocity):
\[ m_{new}=\frac{m_{old}}{\sqrt{(1-v^2/c^2)}} \]So, this assumption of unchanging mass might indeed be questioned since mass does indeed change per special relativity. (The assumptions for the proof below seem better supported than the assumption of constant mass, that the proof below disproves.)
Briefly, the first three assumptions are well established laws of physics. The fourth assumption is verified by observation. The fifth and last assumption explains red shift and resolves some mysteries associated with red shift. Also, the fifth assumption is part of at least one CDK model; and if we change the model that we are evaluating, then we are not evaluating that model, but a different one.
Now that we have elaborated the assumptions, we are ready for the proof.
Proof that E = mc2 is not violated by a changing c (by CDK)
We will use the following symbols in the proof below:
E is energy
m is mass
c is the speed of light
f is the frequency of light
w is the wavelength of light
h is Planck's constant
k is a proportionality constant
K is another constant
The key equations for this proof are the three below:
\[E=mc^2 \tag 1 \]
\[E=hf \tag 2 \]
\[f=\frac{c}{w} \tag 3 \]
The above are basic accepted formulas of physics.
We will also use the following two assertions of the CDK model (which are supported by observational data, as pointed out above):
hc is constant (based on observational measurements).
The wavelength of light does not change while traveling from the source galaxy (or star) to earth (that is based on an interpretation of red shift data that avoids problems that other interpretations have and that also explains observed red shift quantization).
From the observed data and from the model, h is inversely proportional to c:
\[h=\frac{k}{c} \tag 4 \]
Also, from assumption 5, w is constant:
\[ w = constant \tag 5 \]
Thus, equating the right sides of equations 1 and 2, we get
$$mc^2=hf \tag 6 $$
and substituting equation 3 into equation 6, we get
$$mc^{2}=hf=h(\frac{c}{w}) \tag 7 $$
Now, canceling out the common factor c from both left and right sides, we get
$$mc=\frac{h}{w} \tag 8 $$
Substituting equation 4 into equation 8, we get
$$mc=\frac{h}{w}=\frac{(k/c)}{w} \tag 9 $$
Rearranging the right side, we get
$$mc=\frac{k}{cw} \tag {10} $$
Dividing both sides by c, we get
$$m=\frac{k}{c^{2}w} \tag {11} $$
Note that k is a constant, and the light wavelength in this model is constant (per Assumption 5), so w (light wavelength) is also a constant.
We now can rearrange the equation a bit, to isolate those constants (k and w) into a separate factor, \((k/w)\).
$$m=(\frac{k}{w})(\frac{1}{c^{2}}) \tag {12} $$
Since both k and w are constant, diving one by the other is also a constant, or k/w is a constant, which we shall name capital K. Then we can replace the fraction consisting of both these constants k and w, \((k/w)\), by the constant capital K, by simply setting \(K = k/w\). We then get
$$m=K(\frac{1}{c^{2}}) \tag {13} $$
or, that mass is inversely proportional to the speed of light squared!
This means that as c increases, m decreases, and vice versa.
We can plug equation 13 into equation 1 now:
$$E=mc^2=[K(\frac{1}{c^{2}})]c^{2} \tag {14} $$
Simplifying,
$$E=K(\frac{c^{2}}{c^{2}}) \tag {15} $$
And we see that \( c^2 \) cancels out:
$$E=K(1) \tag {16} $$
or, simply
$$E=K \tag {17} $$
We see that E is the constant K, and c squared cancels out; so no matter how c changes, E still remains the constant, K!
Therefore, since energy E does not change when c changes (CDK), conservation of energy is maintained and is not violated. Thus, we have shown that CDK does not violate the conservation of energy.
Relevance & Summary
"So what?" we might ask. What is the relevance of this proof? This derivation, by showing that CDK does not violate energy conservation, removes one obstacle to the possibility of CDK. Since CDK could allow light from distant stars to reach earth in less time than the current estimates of billions of years, those stars might possibly be younger than supposed, and this in turn removes one obstacle to reconciling the apparent long ages for distant stars with a young universe, thereby removing one argument against young-earth creation.
There are other beliefs/opinions regarding CDK, which are beyond the scope of this article. However, by examining this derivation, we can see that one assumption about CDK has fallen; perhaps other beliefs, which may be only assumptions (or possibly false assumptions), ought to be examined more closely or re-examined too, to see if they might be false also.
1Since matter and energy are interconvertible, this energy conservation law has been reframed as the law of conservation of mass and energy. Energy and mass are related by the famous equation governing the transformation of matter into energy.
2 a b Setterfield BJ, Setterfield HJ (2013) Cosmology and the Zero Point Energy. Natural Philosophy Alliance Monograph Series, 53. http://www.barrysetterfield.org/GSRdvds.html#cosmology Accessed 2021 Jan 14
3 a b Bahcall JN, Salpeter EE (1965) Astrophys. J. 142:1677–1681
4 a b Baum WA, Florentin-Nielsen R (1976) Astrophys. J. 209:319–329
5 a b Solheim JE, Barnes III TG, Smith HJ (1976) Astrophys. J. 209:330–334
6 a b Noerdlinger PD (1973) Phys. Rev. Lett. 30:761–762
7Cowie LL, Songaila A (2004) Nature 428:132
8Bouchendira R, Cladé P, Guellati-Khélifa S, Nez F, Biraben F (2011) New determination of the fine structure constant and test of the quantum electrodynamics. Physical Review Letters 106(8):080801 https://hal.archives-ouvertes.fr/hal-00547525/file/MesureAlpha2010.pdf Accessed 2021 Jan 14
9Narlikar J, Arp H (1993) Flat spacetime cosmology: A unified framework for extragalactic redshifts. Astrophysical J. 405:51–56
Book traversal links for Does Changing Speed of Light Violate Energy Conservation?
The Kingdom Fungi
|
CommonCrawl
|
Experiment method
Result and discussion
Experimental and analytical investigation on resin impregnation behavior in continuous carbon fiber reinforced thermoplastic polyimide composites
Shota Kazano1,
Toshiko Osada1,
Satoshi Kobayashi1Email authorView ORCID ID profile and
Ken Goto2
Received: 6 April 2018
In molding of carbon fiber reinforced thermoplastics (CFRTP), resin impregnation behavior to fiber yarns is very important because higher viscosity of molten thermoplastics inhibites resin impregnation to the interspace among fibers. Resultant resin un-impregnation causes lower mechanical properties of CFRTP. The purpose of this study was to clarify the relation among molding method, molding conditions and resin impregnation to fiber yarns experimentally and analytically. In this study, CFRTPs using continuous carbon fiber yarn as a reinforcement and a thermoplastic polyimide which is excellent in heat resistance as a matrix resin were produced by Micro-Braiding, Film Stacking and Powder method. In addition resin impregnation was modeled based on Darcy's law and continuity condition. As a result, analytical resin impregnation prediction showed good agreements with the experimental results in all the producing methods and molding conditions. In addition, the void content in the molded CFRP could be greatly reduced by pressurizing cooling.
CFRTP
Resin impregnation
Micro-braiding
Darcy's law
Since carbon fiber reinforced plastics (CFRP) has excellent characteristics of specific strength and specific stiffness, it has been used in many fields including aerospace. The thermosetting resins generally used for CFRP are relatively brittle compared with thermoplastic resins, and is also inferior in impact resistance (Lee and Kim 2017). Furthermore, CFRP is generally molded by heat-pressurizing using a semi-cured prepreg sheet impregnated with thermosetting resin to carbon fibers, so that it takes time for polymerization reaction (Svensson et al. 1998). In addition, the prepregs need to be stored at low temperature, resulting in high cost for equipment.
On the other hand, thermoplastic resins have excellent toughness and impact resistance, and are easy to handle as compared with thermosetting resins. In addition, because carbon fiber reinforced thermoplastics (CFRTP) can be remolten, it has good formability and recyclability. There is also an advantage that it can be stored at room temperature. Moreover CFRTPs using super engineering plastics are expected to have excellent heat resistance and mechanical properties (Rattanand Bijwa 2006, Bijwa and Rattan 2007).
Thermoplastics and their composite have superior properties as mentioned above, however, they have inferior characteristics such as lower resin impregnation to reinforcing fiber yarns. The reason is higher melt viscosity of thermoplastic resin. Thus resin-un-impregnated region, referred as void, exists in the fiber yarns after molding, which fact results in lower mechanical properties.
Therefore, various molding methods to achieve sufficient thermoplastic resin impregnation to fabrics consisted of continuous fiber yarns have been studied for the practical usage of the composites in structural members.
Examples for CFRTP molding include Film Stacking (FS) method (John et al. 1999b; Fujihara and Harada 2000; Thomas et al. 2013), Powder method (Lin et al. 1994; Lin and Friedrich 1995), Co-Woven method (Clemans et al., 1987), Commingled Yarn (CY) method (Lin et al. 1995) and Micro Braiding (MB) method (Sakaguchi et al. 2000; Fujihara et al. 2003; Kamaya et al. 2001; Hung 2004). Film Stacking (FS) method (John et al., 1999; Fujihara and Harada 2000; Thomas et al. 2013) is the method to hot-compression molding with alternately laminated woven fabrics and film-like polymer sheets. Powder method (Lin et al. 1994, Lin and Friedrich 1995) uses fiber yarns attached with pulverizing thermoplastic resin. Co-Woven method (Clemans et al. 1987) is a method of alternately weaving resin fiber yarns and reinforcing fiber yarns into one fabric. Commingled Yarn (CY) method (Lin et al. 1995) uses commingled yarn with fiberized thermoplastic resin and reinforcing fiber. In Micro-Braiding (MB) method (Sakaguchi et al. 2000; Fujihara et al. 2003; Kamaya et al. 2001; Hung 2004), a braided fibrous intermediate material (Micro-Braided Yarn) composed of reinforcing fiber yarns at the center and thermoplastics fibers around them are prepared with traditional braiding technique. The resin fibers are assembled around the reinforcing fibers and are evenly adhered. Thus improvement in impregnation property is expected.
Many experimental researches have been conducted to improve the impregnation property of the thermoplastic resin to the reinforcing fiber yarns. In addition, analytical approaches about resin impregnation to fiber yarns have been conducted. As for the impregnation behavior of thermoplastic resin, Wolfrath et al. (2006) analyzed the impregnation of the polypropylene to the fiber yarns in the FS method based on the Darcy's law and discussed the impregnated state of each layer as a function of time and pressure. Bernet et al. (1999) conducted a resin impregnation analysis for the CY method to evaluated the quality of the molded article with the void content. Lin et al. (1994) also evaluates the impregnation behavior for the Powder method based on the Darcy's law analytically. Furthermore, West et al. (1991) assumed the fiber yarn to be elliptical and defined the equivalent impregnation radius corresponding to the impregnation distance of the circular model.
As described above, the Darcy's law has been used for resin impregnation analysis to fiber yarns. The Darcy's law represents apparent flow velocity with pressure gradient, viscosity of fluid and permeability. The permeability is considered to depend on the geometry of the fibers composing the fiber yarn. (Gutowski 1985, Gutowski et al, 1987) evaluated the permeability coefficient depending on the fiber volume fraction based on the fiber yarn compression model. Furthermore, permeability coefficient was also predicted with a geometric model. Gebart (1992) classifies the orientation state of circular cross section of fiber into a square array or a hexagonal array, and permeability of fiber yarn perpendicular direction was derived from Navier-Stokes equations. For thermoplastic resin, Kim et al. (1989) evaluates the permeability of PEEK for unidirectional fiber yarn with Carnam-kozeny constant using similar analytical method to Gutowski et al. (1985). Also, in the same analytical method, Hou et al. (1998) predicts the void content of molded parts of CF/PEI composites.
As described above, many researches on resin impregnation behavior in FS, CY and Powder method could be confirmed. On the other hand, there are some researches about MB method (Sakaguchi et al. 2000; Fujihara et al. 2003; Kamaya et al. 2001; Hung 2004; Kobayashi et al. 2012a, b; Kobayashi and Tanaka 2012; Kobayashi and Takada 2013) for unidirectional composites, whereas the resin impregnation behavior for continuous woven materials has been limited (Kobayashi and Morimoto 2014; Kobayashi et al. 2014, 2017) because of their complexity. However, it is important to clarify the resin impregnation behavior for continuous woven material with superior in drape property compared to unidirectional material, considering actual usage of the composites (Kobayashi et al. 2017). In the analysis of impregnation behavior in continuous fiber yarns, it has been reported that by assuming that all the fiber yarns are simultaneously impregnated in the laminate and they are all identical in geometry can be described by impregnation with only a typical single yarn (Lin et al. 1995). The cross-section of such a fiber yarn is roughly elliptical after initial compression. Since it can be assumed that this cross section exists along the entire length of the fiber yarn, it is considered that a two-dimensional analysis of resin impregnation to fiber yarn is appropriate.
In this research, we investigated the effect of molding method and molding conditions on resin impregnation to fiber yarns in CFRTP. Carbon woven fabric and thermoplastic polyimide (PI), which is a super engineering plastics with superior heat resistance, were used. CFRTPs were prepared with MB, FS and Powder methods, and the resin impregnation properties were evaluated experimentally. Furthermore, we focused on the two-dimensional resin impregnation analysis and analytical resin impregnation prediction was conducted based on the previous research (Kobayashi et al. 2017) to discuss the difference in impregnation behavior in different molding methods.
In this study, carbon fiber yarns, T300B-3000 filaments (3 K) or T700SC-12,000 (12 K) filaments (Toray) were used as the reinforcements to evaluate yarn thickness on resin impregnation, and a thermoplastic polyimide (PI, AURUM PL 450 C, Mitsui Chemicals) was used as the base material resin. Table 1 shows properties of PI. In FS and Powder methods, pre-woven fabric (CO6343B or CK6261C, Toray) consisted of T300B or T700SC described above were used. In MB methods, MBYs were fabricated with PI fiber yarns on a medium-class braider and plain woven fabrics were weaved with MBY on a hand looms. The fabrics were cut into sheets of square, 75 mm, which include 40 yarns for 3 K and 21 yarns for 12 K.
Properties of PI matrix (from Mitsui Chemical)
AURUM-PL450C
Viscosity [Pa・s]
600 (at 410 °C)
Melting Point [°C]
Density [g/cm3]
Tex for PI yarn [g/1000 m]
PI films with thickness 50 μm were used in FS methods. For PI methods, PI powder was prepared with PI pellets using a pulverizer (SM-1, HISIANGTAI). PI resin powder with diameter 50 to 200 μm obtained with a test sieve (mesh opening 425 μm) was used. In the manufacturing process of MBY, a reinforcing fiber yarn was located at the center of the braider and matrix resin fiber yarns were braided around the reinforcing fiber yarn. In the present study, fiber volume fraction was 38.4% in all methods.
CFRTP fabric textile molding
CFRTP textile composites were compression-molded with a hot press system (IMC-1837, Imoto Machinery). Molding conditions are shown in Table 2. We selected the pressure values to obtain full impregnation in molding time less than 5 min considering practical process time. Fabrics and matrix resin, or fabrics made of MBY were placed in a mold at room temperature, and the mold was placed on the lower platen of a hot press machine preheated to 350 °C and heated to a molding temperature. When mold temperature reached the test temperature, pressure was applied to the mold. The time at the beginning of pressurizing was defined as molding time 0 s. After the pressure was maintained during the molding time, the pressure was relieved and the mold was air-cooled until 50 °C.
Molding condition
Molding Method
Molding Temperature [°C]
Volume Fraction of Fiber [%]
Number of Filament
Number of Lamination
Molding Pressure [MPa]
Molding Time [s]
A Piece of Textile [mm]
Pressure Cooling (PC)
0, 20, 30, 60, 120, 180, 240, 300
0, 60, 120, 180, 240, 300
Film Stacking
0, 1, 2,4, 8
0, 300
0, 0.1, 0.3, 0.5, 0, 1, 2, 4
0.3, 0.5, 1, 2, 4
The PI resin used in this study has high heat resistance and high melting point, and the molding temperature was determined as 410 °C (viscosity; 600 Pa·s) from a supplier data. Since the hot press system used could heat up to 350 °C which is lower than the molding temperature, the mold was implemented with additional four cartridge heaters (200 V -1400 W) which result in higher molding temperature. The mold temperature and the pressure loaded on the specimen were defined as molding temperature and molding pressure. In this study, in order to investigate void dissipation behavior, non-pressurizing cooling (NPC) and pressurizing cooling (PC) were conducted during cooling in the molding process.
Cross-sectional observation
In order to measure impregnation ratio as a function of molding conditions, cross sectional observation was performed at the center of the specimen molded under each condition. After the molded specimen was embedded in epoxy resin, the cross-section was polished using # 180–2000 emery papers, and the section was buffed with alumina slurry (0.3 μm, Maruto Co.). The polished surface was observed using a digital-microscope (VH-Z100R, KEYENCE) having a zoom lens which enables to confirm a cross-section of a single carbon fiber. The digital image obtained was converted into a binary bitmap image using software (GIMP 2). The resin impregnation ratio was calculated as the ratio of the number of pixels in the impregnated region including the cross-sectional area of the fiber yarn to that of pixels in the whole cross section of the yarn. Since a moderate scatter was observed, the average value was shown as the result. The schematic view of the resin impregnation ratio measurement is shown in Fig. 1.
The schematic view of the resin impregnation ratio measurement
Calculation of impregnation ratio of elliptical model (Kobayashi et al. 2017)
In order to fully demonstrate the mechanical properties of reinforcing carbon fibers in CFRTP, complete impregnation of the resin to reinforcing fiber yarns is necessary. Thus it is an important to analytically-predict the time necessary for complete resin impregnation to yarns. In the present study, resin impregnation behavior was analytically predicted similar to the previous study (Kobayashi et al. 2017).
The fiber yarn is considered as a porous medium, where the gap between fibers is regarded as pore. In general, the impregnation phenomenon of the rein to fiber yarns is regarded as laminar flow to a porous medium and represented using the Darcy's law (Åström et al. 1992). The Darcy's law is expressed as,
$$ u=-\frac{k}{\mu}\cdot \frac{\partial P}{\partial x} $$
where u is Darcy's velocity, μ is the viscosity, ∂P / ∂x is the pressure gradient, and k is the permeability coefficient.
Also, an equation of continuity can be written as,
$$ \nabla \cdot \mathbf{u}=0 $$
In the present study, the cross-section of a fiber yarn was deformed to an elliptical shape by compression loading. Thus it is assumed that the fiber yarn has a cross section close to an elliptical shape with major radius a0 (x direction) and minor radius b0 (y direction), as shown in Fig. 2. The fiber longitudinal direction is defined as z.
Coordinate of elliptical fiber yarn model
As shown in Fig. 2, an oval un-impregnated region where major and minor radii are a1 and b1 respectively, is assumed. a1 and b1 become shorter with molding time and resultant resin impregnation.
By using the fiber volume fraction Vf in the fiber yarn and the eq. (1), Darcy's velocity is converted to flow front velocity in x direction.
$$ \left(1-{V}_f\right)\frac{da_1}{dt}={\left.{u}_x\right|}_{x={a}_1}=-\frac{k}{\mu}\cdot {\left.\frac{\partial P}{\partial x}\right|}_{x={a}_1} $$
By re-arranging and integrating both sides,
$$ {a}_1=-\frac{k}{\mu \left(1-{V}_f\right)}\int \frac{\partial P}{\partial x} dt $$
In the same way, the following equation is obtained for the y direction.
$$ {b}_1=-\frac{k}{\mu \left(1-{V}_f\right)}\int \frac{\partial P}{\partial y} dt $$
In the elliptical model as shown in Fig. 2, the cross-sectional area of the fiber yarn and the cross-sectional area of the non-impregnated region are S0 = πa0b0 and S1 = πa1b1, respectively, so that the impregnation ratio I becomes.
$$ I(t)=1-\frac{\pi {a}_1{b}_1}{\pi {a}_0{b}_0}=1-\frac{a_1(t){b}_1(t)}{a_0{b}_0} $$
The pressure gradient of the coordinates (a1, 0) and (0, b1) at the flow front of the resin shown in Fig. 2 should be obtained to calculate the position of the flow front from eqs. (4) and (5) and resultant impregnation ratio represented by eq. (6) at a certain time t.
Calculation of pressure gradient in elliptical model
In the present study, resin flow in the axial direction is neglected. In orthogonal coordinates as shown in Fig. 2, since the flow velocity uz in the z direction is assumed as 0, eq. (2) can be expressed as follows.
$$ \nabla \mathbf{u}=\frac{\partial {u}_z}{\partial x}+\frac{\partial {u}_y}{\partial y}=0 $$
Here, substituting eq. (3) into eq. (7) and assuming that the melt viscosity μ and the permeability coefficient k of the resin are independent of time, the following equation is obtained.
$$ \frac{\partial^2P}{\partial {x}^2}+\frac{\partial^2P}{\partial {y}^2}=0 $$
In this case, the boundary condition for the elliptical model as shown in Fig. 2 is defined as follows.
1: P=Pm (resin pressure) on the outer boundary of the elliptical fiber yarn \( \left(\frac{x^2}{a_0}+\frac{y^2}{b_0}=1\right) \).
2: P=P0 (atmospheric pressure) on the boundary of the elliptical un-impregnated region \( \left(\frac{x^2}{a_1}+\frac{y^2}{b_1}=1\right) \).
Since the elliptical model shown in Fig. 2 is symmetrical to the x and y axes, the pressure distribution for the resin impregnation to the fiber yarn in the 1/4 model as shown in Fig. 3 shall be considered. The pressure gradient at the flow front could be obtained from the pressure distribution. It is, however, difficult to solve eq. (8) analytically, a mathematical approach was carried out by using the boundary element method.
Boundary condition of quarter elliptical fiber yarn model
Eqs. (4) and (5) are discretized in terms of time t, when t = ti = iΔT (i: natural number), as
$$ {a}_1\left({t}_i\right)={a}_0-{\sum}_{j=1}^{i-1}{\left.\frac{k}{\mu \left(1-{V}_f\right)}\frac{\partial P}{\partial x}\right|}_{x={a}_1\left({t}_j\right)}\bullet \Delta t $$
$$ {b}_1\left({t}_i\right)={b}_0-{\sum}_{j=1}^{i-1}{\left.\frac{k}{\mu \left(1-{V}_f\right)}\frac{\partial P}{\partial y}\right|}_{x={b}_1\left({t}_j\right)}\bullet \Delta t $$
where a0 and b0 are the positions of flow front at t = 0 considering capillary effect.
In the present study, permeability, k, is calculated according to the Kozeny-Carman equation (Gutowski 1985) as,
$$ k=\frac{{d_f}^2}{16{k}_0}\frac{{\left(1-{V}_f\right)}^3}{{V_f}^2} $$
where df is the diameter of a single fiber and k0 is the Kozeny constant. This equation is known as the permeable Kozeny-Carman equation.
Generally, the fiber volume fraction is related to the applied pressure. Gutowski (1985) derived this relationship in consideration of the elastic deformation of the fiber yarn. Assuming a quasi-static loading, the relationship between pressure and fiber volume fraction is expressed as,
$$ {P}_m-{P}_0=A\frac{\sqrt{\frac{V_f}{V_0}}-1}{{\left(\sqrt{\frac{V_a}{V_0}}-1\right)}^4} $$
where Pa is the molding pressure, P0 is the atmospheric pressure, A is the experimental spring constant, va is the maximum possible fiber volume fraction, and vo is the no-load volume fraction.
Correction of resin infiltration ratio by capillary phenomenon
The impregnation ratio can be obtained with respect to time based on the theory described above. On the other hand, the actual resin impregnation to a fiber yarn occurs before temperature and pressure reach the target values because of capillary action. For example, resin impregnation was confirmed at molding time = 0 s as described below. In this study, the influence of capillary action is not analytically considered. In order to consider the capillary action semi-quantitatively, the predicted curve was sifted in the time direction as shown in Fig. 4. Here the time shift is defined as M value. Effect of molding condition on the M values are also discussed later.
Modification of analytical impregnation ratio curve
Effect of molding time and molding method (3 K-4 plies–MB. FS, powder)
Figures 5 and 6 show the effect of molding time on the resin impregnation behavior for each molding pressure (2, 4 MPa) and molding method. Due to the increase in molding pressure, the following two affect on the resin impregnation behavior; 1. the promotion of resin impregnation caused by increasing flow rate of molten resin, and 2. the decrease in permeability due to the increase in fiber volume fraction. In the present case, the resin impregnation was promoted with increasing molding pressure, which result suggest that the molding pressure acted as a driving force for resin impregnation.
Effect of molding time on resin impregnation behavior in 3 K composites (Molding pressure: 2 MPa)
In order to predict resin impregnation process, impregnation ratio as a function of time at molding pressure 2 and 4 MPa were curve-fitted by choosing the permeability coefficient k and M value defined in section 3.4 as shown in Table 3. We conducted calculation iteratively by selecting k and M values to obtain the parameters to fit the experimental results until correlation coefficient more than 0.92. The geometric parameters for carbon fiber yarn were also shown in Table 4. From the eqs. (11) and (12), the permeability coefficient at an arbitrary molding pressure could be obtained with the Kozeny constant k0. The parameters included in the equation were assumed as shown in Table 5 (Lin et al. 1994). Figure 8 shows the permeability coefficient as a function of molding pressure. The permeability decreased with increasing pressure, whereas the difference in permeability at the present molding pressure was lower than at the lower pressure condition. As a result, the molding pressure acted as the driving force for resin impregnation more than reducing permeability.
Parameters used in the analysis
MB, FS, Powder
2 MPa
k [m2]
4000 × 10−18
3073 × 10− 18
k0 [−]
M [s]
Geometric parameters for carbon fiber yarns
Major Axis [μm]
Minor Axis [μm]
Aspect Ratio [−]
Parameters included in the eq. (11) and (12)
Fiber Diameter [μm]
Applied Pressure [MPa]
Atmospheric Pressure [MPa]
Maximum Fiber Volume Fraction [%]
Initial Fiber Volume Fraction [%]
Fiber Bed Constant [kN/m2]
The analytical results for the resin impregnation process were in good agreement with experimental results as shown in Figs. 5 and 6. The M values shown in Table 3 were the same for both molding pressures and molding methods. This result suggests that the capillary effect on resin impregnation is independent of molding pressure and molding method. In general, capillary effect depends on the combination of materials and the size of capillary. In the present molding condition, the size of capillary which corresponds to the distance between fibers remained constant irrespective of molding pressure, because the pressure was large enough. It is also confirmed from the permeability show in Fig. 7, which depended on fiber volume fraction as shown in eq. (11).
Permeability for 3 K carbon fiber yarn as a function of molding pressure
In the FS and Powder methods, it was found that the experimental resin impregnation ratio remained almost constant after molding time 120 s at molding pressure 2 MPa and after 60 s at molding pressure 4 MPa. It was considered that the resin impregnation saturated at that points. As for the MB method, molding time necessary for resin impregnation saturation was longer than for the other molding methods. This is considered to be because the aspect ratio of the MBY is smaller than the carbon fiber yarn for the plain woven fabric used for the other method because braided resin yarns constrained the carbon fiber yarn at the center before weaving. The smaller aspect ratio resulted in the longer resin impregnation distance.
In the present study, the analytical results were in good agreement with the experimental results regardless of molding pressure and methods with same k0 and M values. This fact means that k0 and M values are material constant and once the values are determined, resin impregnation behavior for arbitrarily molding condition could be predicted. In other words, we optimize the process parameters for molding of CFRTP with MB, FS and Powder method, once the material paprameters were determined experimentally.
Effect of number of filament (12 K-4 plies–FS, powder)
Figures 8 and 9 show the effect of fiber thickness on impregnation ratio at molding time 0 and 300 s, respectively. Comparing the 12 K and 3 K composites produced with FS method at molding time 0 s and molding pressure 0 MPa, the resin impregnation ratio for 3 K composites was remarkably higher. Since the 12 K fiber yarns included more air between fibers and longer resin impregnation distance than the 3 K yarns, higher driving force might be required to discharge the air and induce resin flow. In comparison with the Powder specimen under the same condition, the impregnation ratio for 3 K composites with FS method was very high, which result indicated the possibility of higher driving force caused by capillary effect with FS method. Since it is difficult to uniformly dispersed PI resin powder on the entire surface of the carbon fiber fabric, the unevenness of resin powder dispersion might hinder the uniformity of resin flow.
Effect of number of filament on resin impregnation behavior in 3 K and 12 K composites (Molding time: 0 s)
Effect of number of filament on resin impregnation behavior in 3 K and 12 K composites (Molding time: 300 s)
Comparing the resin impregnation ratio under the molding pressure 0 and 4 MPa at molding time 0 s, the resin impregnation ratio for both the 12 K and 3 K composites was improved with increasing molding pressure. However, the improvement in the impregnation ratio for 12 K composites was less than that for 3 K composites. Similarly, comparing the resin impregnation ratio for 12 K composites under molding pressure 4 and 8 MPa at the molding time of 300 s, the improvement with molding pressure was limited. Moreover, comparing the rein impregnation ratio under molding pressure 4 MPa at molding time 300 s, the ratio for the 12 K composites was less than half of that for 3 K composites. These results indicate that higher molding pressure might be necessary for the resin impregnation for 12 K composites.
Effect of molding pressure (3 K-1, 4 plies-FS, powder)
Figures 10 and 11 show the cross section photograph for each molding pressure and resin impregnation ratio measured from cross-sectional observation with FS method. Since the resin impregnation was not improved after the molding pressure 4 MPa, the molding pressure 4 MPa was indicated as the optimum molding pressure. A void in the fiber yarn located at the center of the yarn under molding pressure 0 MPa, whereas fine voids were distributed in the whole of the fiber yarn with increasing molding pressure. This result indicated that the molten resin impregnated to the fiber yarn by the capillary phenomenon at molding pressure 0 MPa and the air between fibers was forced into the center of the yarn. This impregnation behavior began and continued when molding temperature reached the melting point 390 °C for polyimide until reaching the target molding temperature of 410 °C. Thereafter, applying predetermined molding pressure disassemble the void at the center to the fine voids. The fine voids were dispersed to the whole of the yarn and some part of them were pushed out to the yarn, which resulted in the formation of the void in resin rich region.
Cross section of fiber yarn of 3 K composites (FS method-1ply-900 s)
Effect of molding pressure on resin impregnation behavior in 3 K composites (FS method-1ply-900 s)
Figures 12 and 13 show the cross section photograph for each molding pressure. Figure 14 shows the effect of molding pressure on the resin impregnation behavior with Powder method. Here, molding under much lower molding pressure conditions was conducted. From Fig. 14, it is confirmed that the impregnation ratio reached the maximum at the molding pressure of 0.3 MPa, and the impregnation ratio showed a substantially constant value at the molding pressure more than 0.3 MPa. Therefore, resin impregnation could be sufficiently completed at the molding pressure of 0.3 MPa with the Powder method when sufficient molding time was given. On the other hand, many fine voids were generated in the fiber yarns. This might be due to no pressure loading during cooling. Therefore the effect of pressurizing cooling on void formation is discussed in the following.
Cross section of fiber yarn for 3 K composites (Powder method-1 ply-900 s)
Cross section of fiber yarn of 3 K composites (Powder method-4 plies-900 s)
Effect of pressure on resin impregnation behavior in 3 K composites (Powder method-1, 4 plies-900 s)
Effect of pressurizing cooling (3 K-4 plies-MB, FS, powder)
The influence of pressure loading during cooling on the resin impregnation behavior with MB, FS and Powder method was also discussed. Figure 15 shows the cross-section of the composites with each molding method. These composites were molded with keeping molding pressure during cooling until 255 °C. With all method. Less voids were observed comparing with un-pressurizing cooling. Although slight un-impregnation region existed at the center of the fiber yarn with MB method, fine voids inside the fiber yarn could be completely eliminated during this pressurizing cooling process.
Cross sections of composites (Pressurizing cooling and Non-pressurizing cooling -2 MPa-300 s)
With Powder and MB method, many cracks induced by thermal residual stress were observed, whereas no cracks existed in the composite with FS method. The difference in crack formation was in the difference in the magnitude of thermal residual stress. Figure 16 shows the results of differential scanning calorimetry measurement on PI resin fiber, film and powder. From Fig. 16, it is clarified that PI fiber used in this study was amorphous phase, whereas PI powder and film were crystalline phase which corresponded to the drop in the heat flow around 400 °C. In Fig. 16, the glass transition temperature which corresponded to the point where the slope of the curve changes were around 259, 248, and 256 °C for PI fiber, film and powder respectively. The glass transition temperature was slightly lower for PI films. Considering the un-pressurizing temperature 255 °C, un-pressurizing just above the glass transition temperature could suppress thermal residual stress and resultant cracks.
Heat flow as a function of temperature
In this study, resin impregnation behavior for continuous carbon fiber reinforced polyimide composites was investigated experimentally and analytically. In order to investigate the effect of molding method on the resin impregnation to carbon fiber yarn, specimens were prepared by MB, FS and Powder method. The influence of molding time, pressure, yarn thickness and cooling condition on the resin impregnation process was also discussed. The conclusions obtained were as follows.
Molding pressure acted as a driving force for resin flow between fibers rather than for the reduction in permeability.
When the viscocity of the molten resin was known, the resin impregnation behavior to continuous fiber yarn under arbitrary molding conditions could be predicted by the present analytical method, once k0 and M value were determined in a single set of experiment. The Kozeny constant k0 was independent of molding pressure and molding method.
Higher molding pressure was required for 12 K composites compared with 3 K composites to obtain sufficient resin impregnation.
When sufficient molding time conditions were given, sufficient impregnation was obtained for 3 K yarn even at lower molding pressure 0.3 MPa.
Fine voids present inside the fiber yarn under the non-pressurizing cooling condition could be completely eliminated by pressurizing cooling process.
CFRP:
Carbon fiber reinforced plastics
CFRTP:
Carbon fiber reinforced thermoplastics
CY:
Commingled yarn
FS:
The work was supported by Tokyo Metropolitan University and Japan Aerospace Exploration Agency.
SKa carried out the experiments. TO carried out mathematical calculation. SKo conceived the analytical method.KG supervised the work. All authors contributed to the writing and editing of the manuscript. All authors read and approved the final manuscript.
SKa: Graduate student, Tokyo Metropolitan University. TO: Assistant Professor, Tokyo Metropolitan Univeristy. SKo: Professor, Tokyo Metropolitan University. KG: Associate Professor, Japan Aerospace Exploration Agency.
Department of Mechanical Engineering, Graduate School of Science and Engineering, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji Tokyo, 192-0397, Japan
Japan Aerospace Exploration Agency, Sagamihara, Kanagawa, Japan
Åström BT, Buron R, Advani SG (1992) On flow through aligned Fiber beds and its application to composites. J Compos Mater 26(9):1351–1373View ArticleGoogle Scholar
Bernet N, Michaud V, Bourban PE, Manson JAE (1999) An impregnation model for the consolidation of thermoplastic composites model from commingled yarns. J Compos Mater 33(8):751–772View ArticleGoogle Scholar
Bijewa J, Rattan R (2007) Carbon fabric reinforced polyetherimide composites: optimization of fabric content for best combination of strength and adhesive wsear performance. Wear 262:749–758View ArticleGoogle Scholar
Clemans S, Western E, Handermann A (1987) Hybrid yarns for high-performance thermoplastic composites. Mater Sci Monog 41:429–434Google Scholar
Fujihara H, Harada K (2000) Influence of sizing agents on bending property of continuous glass fiber reinforced polypropylene composites. Compos A 31(9):979–990View ArticleGoogle Scholar
Fujihara K, Huang ZM, Ramakrishna S, Satknanantham K, Hamada H (2003) Performance study of braided carbon/PEEK composite compression bone plates. Biomat 24:2661–2667View ArticleGoogle Scholar
Gebart BR (1992) Permeability of unidirectional reinforcements for RTM. J Compos Mater 26:1100–1133View ArticleGoogle Scholar
Gutowski TG (1985) A resin Folw/Fiber deformation model for composites. Mater Sci 16(4):58–64MathSciNetGoogle Scholar
Gutowski TG, Cai Z, Bauer S, Boucher D, Kingery J, Wineman S (1987) Consolidation experiments for laminate composites. J Compos Mater 21(7):650–669View ArticleGoogle Scholar
Hou M, Ye L, Lee HJ, Mai YW (1998) Manufacture of a carbon-fabric-reinforced polyetherimide (CF PEI) composite material. Compos Sci Technol 58(2):181–190View ArticleGoogle Scholar
Hung G (2004) Research on the impregnation behavior of the micro-braided thermoplastic matrix. Mater Design 25:167–170View ArticleGoogle Scholar
John DM, Yi Z, Jurron B (1999) Flow of thermoplastics through fiber assemblies. 5th int conference on flow processes in Comppos. Mater:71–78Google Scholar
Kamaya M, Nakai A, Hamada H (2001) Micro-braided yarn as intermediate material System for continuous fiber reinforced thermoplastic composite. 13th Int Conference on Compos Mater:ID −1552Google Scholar
Kim TW, Jun EJ, Um MK, Lee WI (1989) Effect of pressure on the impregnation of thermoplastic resin into a nidirectional fiber bundle. Adv Polym Technol 9(4):257–279View ArticleGoogle Scholar
Kobayashi S, Morimoto T (2014) Experimental and numerical characterization of resin impregnation behavior in textile composite fabricated with micro-braiding technique. Mech Eng J 1(4):14–00071Google Scholar
Kobayashi S, Takada K (2013) Processing of unidirectional hemp fiber reinforced composites with micro-braiding technique. Compos A 46:173–179View ArticleGoogle Scholar
Kobayashi S, Takada K, Nakamura R (2014) Processing and characterization of hemp fiber textile composites with micro-braiding technique. Compos A 59:1–8View ArticleGoogle Scholar
Kobayashi S, Takada K, Song DY (2012b) Effect of molding condition on the mechanical properties of bamboo-rayon continuous Fiber/poly (lactic acid) composites. Adv Compos Mater 21:79–90Google Scholar
Kobayashi S, Tanaka A (2012) Resin impregnation behavior in processing of unidirectional carbon Fiber reinforced thermoplastic composites. Adv Compos Mater 21:91–102Google Scholar
Kobayashi S, Tanaka A, Morimoto T (2012a) Analytical prediction of resin impregnation behavior during processing of unidirectional fiber reinforced thermoplastic composites considering pressure fluctuation. Adv Compos Mater 21:425–432View ArticleGoogle Scholar
Kobayashi S, Tsukada T, Morimoto T (2017) Resin impregnation behavior in carbon fiber reinforced polyamide 6 composite: effects of yarn thickness, fabric lamination and sizing agent. Compos A 101:283–289View ArticleGoogle Scholar
Lee JS, Kim JW (2017) Impact response of carbon Fiber fabric/thermoset-thermoplastic combined polymer composites. Adv Compos Letters 26:82–88Google Scholar
Lin Y, Friedrich K (1995) Processing of thermoplastic composites from powder/sheath-fiber bundles. J Compos Mater 48:317–324Google Scholar
Lin Y, Friedrich K, Cutolo D, Savadori A (1994) Manufacture of CF/PEEK composites from powder/sheath fiber preforms. Compos Manuf 5(1):41–50View ArticleGoogle Scholar
Lin Y, Friedrich K, Kӓstel J, Mai YW (1995) Consolidation of unidirectional CF/PEEK composites from commingled yarn prepreg. Compos Sci Technol 54:349–358View ArticleGoogle Scholar
Rattan R, Bijwe J (2006) Carbon fabric reinforced polyetherimide composites: influence of weave of fabric and processing parameters on performance properties and erosive wear. Mater Sci Engng A 420:342–350View ArticleGoogle Scholar
Sakaguchi M, Nkai A, Hamada H, Takada N (2000) The mechanical properties of unidirectional thermoplastic composites manufactured by a micro-braiding technique. Compos Sci Technol 60:717–722View ArticleGoogle Scholar
Svensson N, Shishoo R, Gilchrist M (1998) Manufacturing of thermoplastic composites from commingled yarns –a review. J Thermoplast Compos Mater 11:22–56View ArticleGoogle Scholar
Thomas AC, Pavel S, Avdvani SG (2013) Resin film impregnation in fabric prepregs with dual length scale permeability. Compos A 53:118–128View ArticleGoogle Scholar
West VBP, Pipes RB, Advani SG (1991) The consolidation of commingled thermoplastic fabrics. Polym Compos 12(6):417–427View ArticleGoogle Scholar
Wolfrath J, Michaud V, Modaressi A, JAE M°n (2006) Unsaturated flow in compressible fiber preforms. Compos A 37:881–889View ArticleGoogle Scholar
Advanced Composites: Microstructure, Mechanics, Manufacturing and Optimi...
|
CommonCrawl
|
Some examples of generalized reflectionless Schrödinger potentials
Formulas for generalized principal Lyapunov exponent for parabolic PDEs
August 2016, 9(4): 1171-1188. doi: 10.3934/dcdss.2016047
Local study of a renormalization operator for 1D maps under quasiperiodic forcing
Àngel Jorba 1, , Pau Rabassa 2, and Joan Carles Tatjer 3,
Departament de Matemàtica Aplicada i Anàlisi, Universitat de Barcelona, Gran Via 585 , 08007 Barcelona
School of Mathematical Sciences, Queen Mary University of London, Mile End Road, London E1 4NS, United Kingdom
Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Gran Via 585, 08007 Barcelona, Spain
Received September 2015 Revised December 2015 Published August 2016
The authors have recently introduced an extension of the classical one dimensional (doubling) renormalization operator to the case where the one dimensional map is forced quasiperiodically. In the classic case the dynamics around the fixed point of the operator is key for understanding the bifurcations of one parameter families of one dimensional unimodal maps. Here we perform a similar study of the (linearised) dynamics around the fixed point for further application to quasiperiodically forced unimodal maps.
Keywords: discretization., Quasiperiodically forced maps, renormalization.
Mathematics Subject Classification: Primary: 37C55; Secondary: 37E20, 37G3.
Citation: Àngel Jorba, Pau Rabassa, Joan Carles Tatjer. Local study of a renormalization operator for 1D maps under quasiperiodic forcing. Discrete & Continuous Dynamical Systems - S, 2016, 9 (4) : 1171-1188. doi: 10.3934/dcdss.2016047
H. Broer and F. Takens, Dynamical Systems and Chaos, vol. 172 of Applied Mathematical Sciences,, Springer, (2011). doi: 10.1007/978-1-4419-6870-8. Google Scholar
W. de Melo and S. van Strien, One-dimensional Dynamics, vol. 25 of Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)],, Springer-Verlag, (1993). doi: 10.1007/978-3-642-78043-1. Google Scholar
J. Dieudonné, Foundations of Modern Analysis,, Academic Press, (1969). Google Scholar
R. Fabbri, T. Jäger, R. Johnson and G. Keller, A Sharkovskii-type theorem for minimally forced interval maps,, Topol. Methods Nonlinear Anal., 26 (2005), 163. doi: 10.12775/TMNA.2005.029. Google Scholar
U. Feudel, S. Kuznetsov and A. Pikovsky, Strange Nonchaotic Attractors, vol. 56 of World Scientific Series on Nonlinear Science. Series A: Monographs and Treatises,, World Scientific Publishing Co. Pte. Ltd., (2006). Google Scholar
À. Jorba, C. Núñez, R. Obaya and J. C. Tatjer, Old and new results on strange nonchaotic attractors,, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 17 (2007), 3895. doi: 10.1142/S0218127407019780. Google Scholar
À. Jorba, P. Rabassa and J. C. Tatjer, A renormalization operator for 1D maps under quasi-periodic perturbations,, Nonlinearity, 28 (2015), 1017. doi: 10.1088/0951-7715/28/4/1017. Google Scholar
À. Jorba, P. Rabassa and J. C. Tatjer, Period doubling and reducibility in the quasi-periodically forced logistic map,, Discrete Contin. Dyn. Syst. Ser. B, 17 (2012), 1507. doi: 10.3934/dcdsb.2012.17.1507. Google Scholar
À. Jorba and J. C. Tatjer, A mechanism for the fractalization of invariant curves in quasi-periodically forced 1-D maps,, Discrete Contin. Dyn. Syst. Ser. B, 10 (2008), 537. doi: 10.3934/dcdsb.2008.10.537. Google Scholar
T. Kato, Perturbation Theory for Linear Operators,, Die Grundlehren der mathematischen Wissenschaften, (1966). Google Scholar
A. Katok and B. Hasselblatt, Introduction to the Modern Theory of Dynamical Systems, vol. 54 of Encyclopedia of Mathematics and its Applications,, Cambridge University Press, (1995). doi: 10.1017/CBO9780511809187. Google Scholar
O. Lanford III, A computer-assisted proof of the Feigenbaum conjectures,, Bull. Amer. Math. Soc. (N.S.), 6 (1982), 427. doi: 10.1090/S0273-0979-1982-15008-X. Google Scholar
O. Lanford III, Computer assisted proofs,, in Computational methods in field theory (Schladming, (1992), 43. doi: 10.1007/3-540-55997-3_30. Google Scholar
J. Milnor, On the concept of attractor,, Comm. Math. Phys., 99 (1985), 177. doi: 10.1007/BF01212280. Google Scholar
A. Prasad, S. Negi and R. Ramaswamy, Strange nonchaotic attractors,, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 11 (2001), 291. doi: 10.1142/S0218127401002195. Google Scholar
P. Rabassa, À. Jorba and J. C. Tatjer, A numerical study of universality and self-similarity in some families of forced logistic maps,, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 23 (2013). doi: 10.1142/S0218127413500727. Google Scholar
W. Rudin, Real and Complex Analysis,, 3rd edition, (1987). Google Scholar
S. Smale, Differentiable dynamical systems,, Bull. Amer. Math. Soc., 73 (1967), 747. doi: 10.1090/S0002-9904-1967-11798-1. Google Scholar
Tingting Zhang, Àngel Jorba, Jianguo Si. Weakly hyperbolic invariant tori for two dimensional quasiperiodically forced maps in a degenerate case. Discrete & Continuous Dynamical Systems - A, 2016, 36 (11) : 6599-6622. doi: 10.3934/dcds.2016086
Denis Gaidashev, Tomas Johnson. Spectral properties of renormalization for area-preserving maps. Discrete & Continuous Dynamical Systems - A, 2016, 36 (7) : 3651-3675. doi: 10.3934/dcds.2016.36.3651
Yiming Ding. Renormalization and $\alpha$-limit set for expanding Lorenz maps. Discrete & Continuous Dynamical Systems - A, 2011, 29 (3) : 979-999. doi: 10.3934/dcds.2011.29.979
Olivier Goubet, Ezzeddine Zahrouni. On a time discretization of a weakly damped forced nonlinear Schrödinger equation. Communications on Pure & Applied Analysis, 2008, 7 (6) : 1429-1442. doi: 10.3934/cpaa.2008.7.1429
Max Gunzburger, Sung-Dae Yang, Wenxiang Zhu. Analysis and discretization of an optimal control problem for the forced Fisher equation. Discrete & Continuous Dynamical Systems - B, 2007, 8 (3) : 569-587. doi: 10.3934/dcdsb.2007.8.569
Hans Koch. On hyperbolicity in the renormalization of near-critical area-preserving maps. Discrete & Continuous Dynamical Systems - A, 2016, 36 (12) : 7029-7056. doi: 10.3934/dcds.2016106
Antonio Pumariño, José Ángel Rodríguez, Enrique Vigil. Renormalization of two-dimensional piecewise linear maps: Abundance of 2-D strange attractors. Discrete & Continuous Dynamical Systems - A, 2018, 38 (2) : 941-966. doi: 10.3934/dcds.2018040
Àngel Jorba, Joan Carles Tatjer. A mechanism for the fractalization of invariant curves in quasi-periodically forced 1-D maps. Discrete & Continuous Dynamical Systems - B, 2008, 10 (2&3, September) : 537-567. doi: 10.3934/dcdsb.2008.10.537
João Lopes Dias. Brjuno condition and renormalization for Poincaré flows. Discrete & Continuous Dynamical Systems - A, 2006, 15 (2) : 641-656. doi: 10.3934/dcds.2006.15.641
Frank D. Grosshans, Jürgen Scheurle, Sebastian Walcher. Invariant sets forced by symmetry. Journal of Geometric Mechanics, 2012, 4 (3) : 271-296. doi: 10.3934/jgm.2012.4.271
Flaviano Battelli, Michal Fe?kan. Chaos in forced impact systems. Discrete & Continuous Dynamical Systems - S, 2013, 6 (4) : 861-890. doi: 10.3934/dcdss.2013.6.861
Kazuyuki Yagasaki. Degenerate resonances in forced oscillators. Discrete & Continuous Dynamical Systems - B, 2003, 3 (3) : 423-438. doi: 10.3934/dcdsb.2003.3.423
Marin Kobilarov, Jerrold E. Marsden, Gaurav S. Sukhatme. Geometric discretization of nonholonomic systems with symmetries. Discrete & Continuous Dynamical Systems - S, 2010, 3 (1) : 61-84. doi: 10.3934/dcdss.2010.3.61
Michal Fečkan, Michal Pospíšil. Discretization of dynamical systems with first integrals. Discrete & Continuous Dynamical Systems - A, 2013, 33 (8) : 3543-3554. doi: 10.3934/dcds.2013.33.3543
Fernando Jiménez, Jürgen Scheurle. On some aspects of the discretization of the suslov problem. Journal of Geometric Mechanics, 2018, 10 (1) : 43-68. doi: 10.3934/jgm.2018002
Matthieu Hillairet, Alexei Lozinski, Marcela Szopos. On discretization in time in simulations of particulate flows. Discrete & Continuous Dynamical Systems - B, 2011, 15 (4) : 935-956. doi: 10.3934/dcdsb.2011.15.935
Mathieu Desbrun, Evan S. Gawlik, François Gay-Balmaz, Vladimir Zeitlin. Variational discretization for rotating stratified fluids. Discrete & Continuous Dynamical Systems - A, 2014, 34 (2) : 477-509. doi: 10.3934/dcds.2014.34.477
P.E. Kloeden, Victor S. Kozyakin. Uniform nonautonomous attractors under discretization. Discrete & Continuous Dynamical Systems - A, 2004, 10 (1&2) : 423-433. doi: 10.3934/dcds.2004.10.423
Giovanni Cimatti. Forced periodic solutions for piezoelectric crystals. Communications on Pure & Applied Analysis, 2005, 4 (2) : 475-485. doi: 10.3934/cpaa.2005.4.475
Sze-Bi Hsu, Bernold Fiedler, Hsiu-Hau Lin. Classification of potential flows under renormalization group transformation. Discrete & Continuous Dynamical Systems - B, 2016, 21 (2) : 437-446. doi: 10.3934/dcdsb.2016.21.437
Àngel Jorba Pau Rabassa Joan Carles Tatjer
|
CommonCrawl
|
The stop-signal task has been used in a number of laboratories to study the effects of stimulants on cognitive control. In this task, subjects are instructed to respond as quickly as possible by button press to target stimuli except on certain trials, when the target is followed by a stop signal. On those trials, they must try to avoid responding. The stop signal can follow the target stimulus almost immediately, in which case it is fairly easy for subjects to cancel their response, or it can come later, in which case subjects may fail to inhibit their response. The main dependent measure for stop-signal task performance is the stop time, which is the average go reaction time minus the interval between the target and stop signal at which subjects inhibit 50% of their responses. De Wit and colleagues have published two studies of the effects of d-AMP on this task. De Wit, Crean, and Richards (2000) reported no significant effect of the drug on stop time for their subjects overall but a significant effect on the half of the subjects who were slowest in stopping on the baseline trials. De Wit et al. (2002) found an overall improvement in stop time in addition to replicating their earlier finding that this was primarily the result of enhancement for the subjects who were initially the slowest stoppers. In contrast, Filmore, Kelly, and Martin (2005) used a different measure of cognitive control in this task, simply the number of failures to stop, and reported no effects of d-AMP.
Even though smart drugs come with a long list of benefits, their misuse can cause negative side effects. Excess use can cause anxiety, fear, headaches, increased blood pressure, and more. Considering this, it is imperative to study usage instructions: how often can you take the pill, the correct dosage and interaction with other medication/supplements.
Of course, there are drugs out there with more transformative powers. "I think it's very clear that some do work," says Andrew Huberman, a neuroscientist based at Stanford University. In fact, there's one category of smart drugs which has received more attention from scientists and biohackers – those looking to alter their own biology and abilities – than any other. These are the stimulants.
Yet some researchers point out these drugs may not be enhancing cognition directly, but simply improving the user's state of mind – making work more pleasurable and enhancing focus. "I'm just not seeing the evidence that indicates these are clear cognition enhancers," says Martin Sarter, a professor at the University of Michigan, who thinks they may be achieving their effects by relieving tiredness and boredom. "What most of these are actually doing is enabling the person who's taking them to focus," says Steven Rose, emeritus professor of life sciences at the Open University. "It's peripheral to the learning process itself."
With all these studies pointing to the nootropic benefits of some essential oils, it can logically be concluded then that some essential oils can be considered "smart drugs." However, since essential oils have so much variety and only a small fraction of this wide range has been studied, it cannot be definitively concluded that absolutely all essential oils have brain-boosting benefits. The connection between the two is strong, however.
The smart pill industry has popularized many herbal nootropics. Most of them first appeared in Ayurveda and traditional Chinese medicine. Ayurveda is a branch of natural medicine originating from India. It focuses on using herbs as remedies for improving quality of life and healing ailments. Evidence suggests our ancestors were on to something with this natural approach.
(We already saw that too much iodine could poison both adults and children, and of course too little does not help much - iodine would seem to follow a U-curve like most supplements.) The listed doses at iherb.com often are ridiculously large: 10-50mg! These are doses that seems to actually be dangerous for long-term consumption, and I believe these are doses that are designed to completely suffocate the thyroid gland and prevent it from absorbing any more iodine - which is useful as a short-term radioactive fallout prophylactic, but quite useless from a supplementation standpoint. Fortunately, there are available doses at Fitzgerald 2012's exact dose, which is roughly the daily RDA: 0.15mg. Even the contrarian materials seem to focus on a modest doubling or tripling of the existing RDA, so the range seems relatively narrow. I'm fairly confident I won't overshoot if I go with 0.15-1mg, so let's call this 90%.
Another moral concern is that these drugs — especially when used by Ivy League students or anyone in an already privileged position — may widen the gap between those who are advantaged and those who are not. But others have inverted the argument, saying these drugs can help those who are disadvantaged to reduce the gap. In an interview with the New York Times, Dr. Michael Anderson explains that he uses ADHD (a diagnosis he calls "made up") as an excuse to prescribe Adderall to the children who really need it — children from impoverished backgrounds suffering from poor academic performance.
So what's the catch? Well, it's potentially addictive for one. Anything that messes with your dopamine levels can be. And Patel says there are few long-term studies on it yet, so we don't know how it will affect your brain chemistry down the road, or after prolonged, regular use. Also, you can't get it very easily, or legally for that matter, if you live in the U.S. It's classified as a schedule IV controlled substance. That's where Adrafinil comes in.
Additionally, this protein also controls the life and death of brain cells, which aids in enhancing synaptic adaptability. Synapses are important for creating new memories, forming new connections, or combining existing connections. All of these components are important for mood regulation, maintenance of clarity, laser focus, and learning new life skills.
NGF may sound intriguing, but the price is a dealbreaker: at suggested doses of 1-100μg (NGF dosing in humans for benefits is, shall we say, not an exact science), and a cost from sketchy suppliers of $1210/100μg/$470/500μg/$750/1000μg/$1000/1000μg/$1030/1000μg/$235/20μg. (Levi-Montalcini was presumably able to divert some of her lab's production.) A year's supply then would be comically expensive: at the lowest doses of 1-10μg using the cheapest sellers (for something one is dumping into one's eyes?), it could cost anywhere up to $10,000.
Power times prior times benefit minus cost of experimentation: (0.20 \times 0.30 \times 540) - 41 = -9. So the VoI is negative: because my default is that fish oil works and I am taking it, weak information that it doesn't work isn't enough. If the power calculation were giving us 40% reliable information, then the chance of learning I should drop fish oil is improved enough to make the experiment worthwhile (going from 20% to 40% switches the value from -$9 to +$23.8).
Jesper Noehr, 30, reels off the ingredients in the chemical cocktail he's been taking every day before work for the past six months. It's a mixture of exotic dietary supplements and research chemicals that he says gives him an edge in his job without ill effects: better memory, more clarity and focus and enhanced problem-solving abilities. "I can keep a lot of things on my mind at once," says Noehr, who is chief technology officer for a San Francisco startup.
Smart Pill is a dietary supplement that blends vitamins, amino acids, and herbal extracts to sustain mental alertness, memory and concentration. One of the ingredients used in this formula is Vitamin B-1, also known as Thiamine, which sustains almost all functions present in the body, but plays a key role in brain health and function. A deficiency of this vitamin can lead to several neurological function problems. The most common use of Thiamine is to improve brain function; it acts as a neurotransmitter helping the brain prevent learning and memory disorders; it also provides help with mood disorders and offers stress relief.
A synthetic derivative of Piracetam, aniracetam is believed to be the second most widely used nootropic in the Racetam family, popular for its stimulatory effects because it enters the bloodstream quickly. Initially developed for memory and learning, many anecdotal reports also claim that it increases creativity. However, clinical studies show no effect on the cognitive functioning of healthy adult mice.
Nootrobox co-founder Geoffrey Woo declines a caffeinated drink in favour of a capsule of his newest product when I meet him in a San Francisco coffee shop. The entire industry has a "wild west" aura about it, he tells me, and Nootrobox wants to fix it by pushing for "smarter regulation" so safe and effective drugs that are currently unclassified can be brought into the fold. Predictably, both companies stress the higher goal of pushing forward human cognition. "I am trying to make a smarter, better populace to solve all the problems we have created," says Nootroo founder Eric Matzner.
Legal issues aside, this wouldn't be very difficult to achieve. Many companies already have in-house doctors who give regular health check-ups — including drug tests — which could be employed to control and regulate usage. Organizations could integrate these drugs into already existing wellness programs, alongside healthy eating, exercise, and good sleep.
Nondrug cognitive-enhancement methods include the high tech and the low. An example of the former is transcranial magnetic stimulation (TMS), whereby weak currents are induced in specific brain areas by magnetic fields generated outside the head. TMS is currently being explored as a therapeutic modality for neuropsychiatric conditions as diverse as depression and ADHD and is capable of enhancing the cognition of normal healthy people (e.g., Kirschen, Davis-Ratner, Jerde, Schraedley-Desmond, & Desmond, 2006). An older technique, transcranial direct current stimulation (tDCS), has become the subject of renewed research interest and has proven capable of enhancing the cognitive performance of normal healthy individuals in a variety of tasks. For example, Flöel, Rösser, Michka, Knecht, and Breitenstein (2008) reported enhancement of learning and Dockery, Hueckel-Weng, Birbaumer, and Plewnia (2009) reported enhancement of planning with tDCS.
the rise of IP scofflaw countries which enable the manufacture of known drugs: India does not respect the modafinil patents, enabling the cheap generics we all use, and Chinese piracetam manufacturers don't give a damn about the FDA's chilling-effect moves in the US. If there were no Indian or Chinese manufacturers, where would we get our modafinil? Buy them from pharmacies at $10 a pill or worse? It might be worthwhile, but think of the chilling effect on new users.
Using prescription ADHD medications, racetams, and other synthetic nootropics can boost brain power. Yes, they can work. Even so, we advise against using them long-term since the research on their safety is still new. Use them at your own risk. For the majority of users, stick with all natural brain supplements for best results. What is your favorite smart pill for increasing focus and mental energy? Tell us about your favorite cognitive enhancer in the comments below.
The above information relates to studies of specific individual essential oil ingredients, some of which are used in the essential oil blends for various MONQ diffusers. Please note, however, that while individual ingredients may have been shown to exhibit certain independent effects when used alone, the specific blends of ingredients contained in MONQ diffusers have not been tested. No specific claims are being made that use of any MONQ diffusers will lead to any of the effects discussed above. Additionally, please note that MONQ diffusers have not been reviewed or approved by the U.S. Food and Drug Administration. MONQ diffusers are not intended to be used in the diagnosis, cure, mitigation, prevention, or treatment of any disease or medical condition. If you have a health condition or concern, please consult a physician or your alternative health care provider prior to using MONQ diffusers.
No. There are mission essential jobs that require you to live on base sometimes. Or a first term person that is required to live on base. Or if you have proven to not be as responsible with rent off base as you should be so your commander requires you to live on base. Or you're at an installation that requires you to live on base during your stay. Or the only affordable housing off base puts you an hour away from where you work. It isn't simple. The fact that you think it is tells me you are one of the "dumb@$$es" you are referring to above.
Smart pills have huge potential and several important applications, particularly in diagnosis. Smart pills are growing as a highly effective method of endoscopy, particularly for gastrointestinal diseases. Urbanization and rapid lifestyle changes leaning toward unhealthy diets and poor eating habits have led to distinctive increasing lifestyle disorders such as gastroesophageal reflux disease (GERD), obesity, and gastric ulcers.
Flow diagram of epidemiology literature search completed July 1, 2010. Search terms were nonmedical use, nonmedical use, misuse, or illicit use, and prescription stimulants, dextroamphetamine, methylphenidate, Ritalin, or Adderall. Stages of subsequent review used the information contained in the titles, abstracts, and articles to determine whether articles reported studies of the extent of nonmedical prescription stimulant use by students and related questions addressed in the present article including students' motives and frequency of use.
Between midnight and 1:36 AM, I do four rounds of n-back: 50/39/30/55%. I then take 1/4th of the pill and have some tea. At roughly 1:30 AM, AngryParsley linked a SF anthology/novel, Fine Structure, which sucked me in for the next 3-4 hours until I finally finished the whole thing. At 5:20 AM, circumstances forced me to go to bed, still having only taken 1/4th of the pill and that determines this particular experiment of sleep; I quickly do some n-back: 29/20/20/54/42. I fall asleep in 13 minutes and sleep for 2:48, for a ZQ of 28 (a full night being ~100). I did not notice anything from that possible modafinil+caffeine interaction. Subjectively upon awakening: I don't feel great, but I don't feel like 2-3 hours of sleep either. N-back at 10 AM after breakfast: 25/54/44/38/33. These are not very impressive, but seem normal despite taking the last armodafinil ~9 hours ago; perhaps the 3 hours were enough. Later that day, at 11:30 PM (just before bed): 26/56/47.
Your mileage will vary. There are so many parameters and interactions in the brain that any of them could be the bottleneck or responsible pathway, and one could fall prey to the common U-shaped dose-response curve (eg. Yerkes-Dodson law; see also Chemistry of the adaptive mind & de Jongh et al 2007) which may imply that the smartest are those who benefit least23 but ultimately they all cash out in a very few subjective assessments like energetic or motivated, with even apparently precise descriptions like working memory or verbal fluency not telling you much about what the nootropic actually did. It's tempting to list the nootropics that worked for you and tell everyone to go use them, but that is merely generalizing from one example (and the more nootropics - or meditation styles, or self-help books, or getting things done systems - you try, the stronger the temptation is to evangelize). The best you can do is read all the testimonials and studies and use that to prioritize your list of nootropics to try. You don't know in advance which ones will pay off and which will be wasted. You can't know in advance. And wasted some must be; to coin a Umeshism: if all your experiments work, you're just fooling yourself. (And the corollary - if someone else's experiments always work, they're not telling you everything.)
Several chemical influences can completely disconnect those circuits so they're no longer able to excite each other. "That's what happens when we're tired, when we're stressed." Drugs like caffeine and nicotine enhance the neurotransmitter acetylcholine, which helps restore function to the circuits. Hence people drink tea and coffee, or smoke cigarettes, "to try and put [the] prefrontal cortex into a more optimal state".
Flow diagram of cognitive neuroscience literature search completed July 2, 2010. Search terms were dextroamphetamine, Aderrall, methylphenidate, or Ritalin, and cognitive, cognition, learning, memory, or executive function, and healthy or normal. Stages of subsequent review used the information contained in the titles, abstracts, and articles to determine whether articles reported studies meeting the inclusion criteria stated in the text.
A rough translation for the word "nootropic" comes from the Greek for "to bend or shape the mind." And already, there are dozens of over-the-counter (OTC) products—many of which are sold widely online or in stores—that claim to boost creativity, memory, decision-making or other high-level brain functions. Some of the most popular supplements are a mixture of food-derived vitamins, lipids, phytochemicals and antioxidants that studies have linked to healthy brain function. One popular pick on Amazon, for example, is an encapsulated cocktail of omega-3s, B vitamins and plant-derived compounds that its maker claims can improve memory, concentration and focus.
As already mentioned, AMPs and MPH are classified by the U.S. Food and Drug Administration (FDA) as Schedule II substances, which means that buying or selling them is a felony offense. This raises the question of how the drugs are obtained by students for nonmedical use. Several studies addressed this question and yielded reasonably consistent answers.
The magnesium was neither randomized nor blinded and included mostly as a covariate to avoid confounding (the Noopept coefficient & t-value increase somewhat without the Magtein variable), so an OR of 1.9 is likely too high; in any case, this experiment was too small to reliably detect any effect (~26% power, see bootstrap power simulation in the magnesium section) so we can't say too much.
It can easily pass through the blood-brain barrier and is known to protect the nerve tissues present in the brain. There is evidence that the acid plays an instrumental role in preventing strokes in adults by decreasing the number of free radicals in the body. It increases the production of acetylcholine, a neurotransmitter that most Alzheimer's patients are a deficit in.
Spaced repetition at midnight: 3.68. (Graphing preceding and following days: ▅▄▆▆▁▅▆▃▆▄█ ▄ ▂▄▄▅) DNB starting 12:55 AM: 30/34/41. Transcribed Sawaragi 2005, then took a walk. DNB starting 6:45 AM: 45/44/33. Decided to take a nap and then take half the armodafinil on awakening, before breakfast. I wound up oversleeping until noon (4:28); since it was so late, I took only half the armodafinil sublingually. I spent the afternoon learning how to do value of information calculations, and then carefully working through 8 or 9 examples for my various pages, which I published on Lesswrong. That was a useful little project. DNB starting 12:09 AM: 30/38/48. (To graph the preceding day and this night: ▇▂█▆▅▃▃▇▇▇▁▂▄ ▅▅▁▁▃▆) Nights: 9:13; 7:24; 9:13; 8:20; 8:31.
There are also premade 'stacks' (or formulas) of cognitive enhancing superfoods, herbals or proteins, which pre-package several beneficial extracts for a greater impact. These types of cognitive enhancers are more 'subtle' than the pharmaceutical alternative with regards to effects, but they work all the same. In fact, for many people, they work better than smart drugs as they are gentler on the brain and produce fewer side-effects.
Some work has been done on estimating the value of IQ, both as net benefits to the possessor (including all zero-sum or negative-sum aspects) and as net positive externalities to the rest of society. The estimates are substantial: in the thousands of dollars per IQ point. But since increasing IQ post-childhood is almost impossible barring disease or similar deficits, and even increasing childhood IQs is very challenging, much of these estimates are merely correlations or regressions, and the experimental childhood estimates must be weakened considerably for any adult - since so much time and so many opportunities have been lost. A wild guess: $1000 net present value per IQ point. The range for severely deficient children was 10-15 points, so any normal (somewhat deficient) adult gain must be much smaller and consistent with Fitzgerald 2012's ceiling on possible effect sizes (small).
All clear? Try one (not dozens) of nootropics for a few weeks and keep track of how you feel, Kerl suggests. It's also important to begin with as low a dose as possible; when Cyr didn't ease into his nootropic regimen, his digestion took the blow, he admits. If you don't notice improvements, consider nixing the product altogether and focusing on what is known to boost cognitive function – eating a healthy diet, getting enough sleep regularly and exercising. "Some of those lifestyle modifications," Kerl says, "may improve memory over a supplement."
Smart drugs act within the brain speeding up chemical transfers, acting as neurotransmitters, or otherwise altering the exchange of brain chemicals. There are typically very few side effects, and they are considered generally safe when used as indicated. Special care should be used by those who have underlying health conditions, are on other medications, pregnant women, and children, as there is no long-term data on the use and effects of nootropics in these groups.
Brain focus pills mostly contain chemical components like L-theanine which is naturally found in green and black tea. It's associated with enhancing alertness, cognition, relaxation, arousal, and reducing anxiety to a large extent. Theanine is an amino and glutamic acid that has been proven to be a safe psychoactive substance. Some studies suggest that this compound influences, the expression in the genes present in the brain which is responsible for aggression, fear, and memory. This, in turn, helps in balancing the behavioral responses to stress and also helps in improving specific conditions, like Post Traumatic Stress Disorder (PTSD).
My general impression is positive; it does seem to help with endurance and extended the effect of piracetam+choline, but is not as effective as that combo. At $20 for 30g (bought from Smart Powders), I'm not sure it's worthwhile, but I think at $10-15 it would probably be worthwhile. Sulbutiamine seems to affect my sleep negatively, like caffeine. I bought 2 or 3 canisters for my third batch of pills along with the theanine. For a few nights in a row, I slept terribly and stayed awake thinking until the wee hours of the morning; eventually I realized it was because I was taking the theanine pills along with the sleep-mix pills, and the only ingredient that was a stimulant in the batch was - sulbutiamine. I cut out the theanine pills at night, and my sleep went back to normal. (While very annoying, this, like the creatine & taekwondo example, does tend to prove to me that sulbutiamine was doing something and it is not pure placebo effect.)
Cytisine is not known as a stimulant and I'm not addicted to nicotine, so why give it a try? Nicotine is one of the more effective stimulants available, and it's odd how few nicotine analogues or nicotinic agonists there are available; nicotine has a few flaws like short half-life and increasing blood pressure, so I would be interested in a replacement. The nicotine metabolite cotinine, in the human studies available, looks intriguing and potentially better, but I have been unable to find a source for it. One of the few relevant drugs which I can obtain is cytisine, from Ceretropic, at 2x1.5mg doses. There are not many anecdotal reports on cytisine, but at least a few suggest somewhat comparable effects with nicotine, so I gave it a try.
Theanine can also be combined with caffeine as both of them work in synergy to increase memory, reaction time, mental endurance, and memory. The best part about Theanine is that it is one of the safest nootropics and is readily available in the form of capsules. A natural option would be to use an excellent green tea brand which constitutes of tea grown in the shade because then Theanine would be abundantly present in it.
Critics will often highlight ethical issues and the lack of scientific evidence for these drugs. Ethical arguments typically take the form of "tampering with nature." Alena Buyx discusses this argument in a neuroethics project called Smart Drugs: Ethical Issues. She says that critics typically ask if it is ethically superior to accept what is "given" instead of striving for what is "made". My response to this is simple. Just because it is natural does not mean it is superior.
With just 16 predictions, I can't simply bin the predictions and say yep, that looks good. Instead, we can treat each prediction as equivalent to a bet and see what my winnings (or losses) were; the standard such proper scoring rule is the logarithmic rule which pretty simple: you earn the logarithm of the probability if you were right, and the logarithm of the negation if you were wrong; he who racks up the fewest negative points wins. We feed in a list and get back a number:
Not included in the list below are prescription psychostimulants such as Adderall and Ritalin. Non-medical, illicit use of these drugs for the purpose of cognitive enhancement in healthy individuals comes with a high cost, including addiction and other adverse effects. Although these drugs are prescribed for those with attention deficit hyperactivity disorder (ADHD) to help with focus, attention and other cognitive functions, they have been shown to in fact impair these same functions when used for non-medical purposes. More alarming, when taken in high doses, they have the potential to induce psychosis.
** = Important note - whilst BrainZyme is scientifically proven to support concentration and mental performance, it is not a replacement for a good diet, moderate exercise or sleep. BrainZyme is also not a drug, medicine or pharmaceutical. It is a natural-sourced, vegan food supplement with ingredients that are scientifically proven to support cognition, concentration, mental performance and reduction of tiredness. You should always consult with your Doctor if you require medical attention.
"Love this book! Still reading and can't wait to see what else I learn…and I am not brain injured! Cavin has already helped me to take steps to address my food sensitivity…seems to be helping and I am only on day 5! He has also helped me to help a family member who has suffered a stroke. Thank you Cavin, for sharing all your knowledge and hard work with us! This book is for anyone that wants to understand and implement good nutrition with all the latest research to back it up. Highly recommend!"
But, if we find in 10 or 20 years that the drugs don't do damage, what are the benefits? These are stimulants that help with concentration. College students take such drugs to pass tests; graduates take them to gain professional licenses. They are akin to using a calculator to solve an equation. Do you really want a doctor who passed his boards as a result of taking speed — and continues to depend on that for his practice?
Since LLLT was so cheap, seemed safe, was interesting, just trying it would involve minimal effort, and it would be a favor to lostfalco, I decided to try it. I purchased off eBay a $13 48 LED illuminator light IR Infrared Night Vision+Power Supply For CCTV. Auto Power-On Sensor, only turn-on when the surrounding is dark. IR LED wavelength: 850nm. Powered by DC 12V 500mA adaptor. It arrived in 4 days, on 7 September 2013. It fits handily in my palm. My cellphone camera verified it worked and emitted infrared - important because there's no visible light at all (except in complete darkness I can make out a faint red light), no noise, no apparent heat (it took about 30 minutes before the lens or body warmed up noticeably when I left it on a table). This was good since I worried that there would be heat or noise which made blinding impossible; all I had to do was figure out how to randomly turn the power on and I could run blinded self-experiments with it.
|
CommonCrawl
|
Julia Vs Matlab Speed
Hence in terms of language features, Julia is the clear winner, with R, MATLAB and Python far behind. Julia has an LLVM Low-Level Virtual Machine (LLVM) is a compiler infrastructure to build intermediate and/or binary machine code. But it also gives you advantages that Matlab/Python users don't have. We must also add decorators to speed the code. Difficult to find programmers. Julia is a general-purpose, open-source language aimed squarely at scientific computation, with the high-level feel of Python, the numerical ease-of-use of Matlab, the speed of compiled C, and the meta-programming CS sophistication of Lisp. In the Julia, we assume you are using v1. It is well known for its speed and transposability and its applicability in modelling Convolution Neural Networks (CNN). mex file pain,…. 1 => King (Currently R is the King but in future Python will give tough fight to R as Python is both General purpose programming language and data analysis tool due to enhanced libraries like Pandas, Scipy, Numpy as opposed to R which is only statistical analysis tool. A Deep Learning Certification MOOC (Massive Open Online Course) grows your career. Comparison of Julia, Python and Octave Overview. Sample Student Final Projects. 30 Child Stars Who Were Totally Okay with Losing the Spotlight. Julia Studio is an free IDE dedicated to the language. It combines nice features from some of my favorite languages: MATLAB, Python, Common Lisp, C++, etc. Behavior Differences vs other FFT libraries. Quantitative Economics with Julia. Routine statistical tasks such as data extraction, graphical summary, and technical interpretation all require pervasive use of modern computing machinery. You can use Numerical Recipes to extend MATLAB ®, sometimes giving huge speed increases. As someone who is very active in the R community, I am biased of course, and have been (and remain) a skeptic about Julia. Julia is a really well-thought-out language. 19 KB, 31 pages and we collected some download links, you can download this pdf book for free. As such, the core developers and the community are now doing extensive revisions and testing before version 1. From a report: Released in 2012, Julia is designed to combine the speed of C with the usability. Given only the mean and standard deviation of noise, the Kalman filter is the best linear estimator. 0 on each, so a smaller value is better. What's new in 0. Documentation, tutorials etc. Shah, Alan Edelman. Julia Interoperates well with both Python and Fortran which is useful for handling legacy code and it's free. In short, because we are greedy. 評価関数の最適化 DWAの利点と欠点 利点 欠点 DWAのMATLABサンプルプログラム Pythonサンプルプログラム その他のロボティクスアルゴリズムのサンプルコード 参考資料 MyEnigm…. We want the speed of C with the dynamism of Ruby. 0 is an open source programming language for scientific, technical and high-performance computing environments. A reader, Ismael V. A tutorial with examples is here. Julia was designed from the start for scientific and numerical computation. This method may provide a speed improvements of ~2x for trivial functions such as sine but can produce a much more noticeable improvements (10x+) for more complex functions. This speed is important because it then allows the model to be solved repeatedly as one would require in order to do estimation. Exporting MPS files is easier than you may think. The central conclusions of AFV2015 remain unaltered: C++ is the fastest alternative, Julia o⁄ers a great balance of speed and ease of use, and Python is too slow. Amazon SageMaker is a fully-managed service that covers the entire machine learning workflow. One backslash operation in JULIA (and MATLAB) (erroneous) results with 8 threads suggests ~1. The following tables provide a comparison of numerical-analysis software. Still, I've been hesitant to devote any more time to SciLab if Octave is significantly better. Using the default values of tolerance, vpaintegral can handle values that cause the MATLAB integral function to overflow or underflow. This chapter documents instances where MATLAB's parser will fail to run code that will run in Octave, and instances where Octave's parser will fail to run code that will run in MATLAB. (To my knowledge there are no speed regressions with this change, so it can only get better. After coding in Julia for the past two years I have definitely fell in love with its pythonic syntax, multiple dispatch, and MATLAB-like handiness in. If you haven't, well now is as good a time as any to learn about it. This chapter documents instances where MATLAB's parser will fail to run code that will run in Octave, and instances where Octave's parser will fail to run code that will run in MATLAB. It is built for speed since the founders wanted something 'fast'. I have used R's excellent data. MATLAB was built by Cleve Moler (University of New Mexico) to give students access to LINPACK and EISPACK without them having to learn Fortran Python Numpy (Travis Oliphant, Brigham Young University) originates from f2py, a tool to easily extend Python with Fortran code. Package overview; 10 minutes to pandas; Essential basic functionality; Intro to data structures. Popuri, and Andrew M. Find column with unique values move to the first column and sort worksheets. So your question is not so much MATLAB vs FORTRAN as it is high-level versus low-level languages. Wolfram Community forum discussion about Matrix operation speed: Mathematica vs Matlab?. What about beautiful new languages like Go, Haskell, Rust, Scala, Julia, etc. Parallel computing is also being touted as a useful feature of Julia now available. Many proxies are available: Kickass Torrents, The Pirate Bay, YTS, RARBG, 1337x, EZTV, Zoogle, and more!. Julia is a compiled language that targets. It is an industrial strength programming language supporting functional, imperative and object-oriented styles. Julia is a new language in the same arena as Matlab or R. Eight Advantages of Python Over Matlab Dr. The effectiveness of the design is validated using MATLAB/Simulink. Simple, Jackson Annotations, Passay, Boon, MuleSoft, Nagios, Matplotlib, Java NIO. > > I've actually been working on just that, on and off for a few months now. Yes, but at least Julia does so in a way that allows for professional programmers to easily work with and maintain it. Somewhat confusing type-system. In addition to these, you can easily use libraries from Python, R, C/Fortran, C++, and Java. Bank identification number (BIN) is the initial four to six numbers that appear on a credit card. I've had failed attempts to quit the Matlab addiction in the past, making me generally quite conservative about new platforms. Various math functions and Built-in library commands are used to analyze data, generate plots and perform complex Integrations and Differentiations. This page also contains notes on differences between things that are different between Octave (in traditional mode) and MATLAB. After doing this I get times that are essentially the same as MATLAB. Many proxies are available: Kickass Torrents, The Pirate Bay, YTS, RARBG, 1337x, EZTV, Zoogle, and more!. Pure Julia polygamma(m, z) [ = (m+1)th derivave of the ln Γ funcon ] ~ 2× faster than SciPy's (C/Fortran) for real z … and unlike SciPy's, same code supports complex argument z Julia code can actually be faster than typical "op)mized". Then I need to figure out how to export the DCA data components from Matlab to an excel spreadsheet so I can use it to create variables for analysis in SPSS. But it also gives you advantages that Matlab/Python users don't have. Then, running times improve significantly and become similar to C. Disc Sanders For Sale Ac Band Saw, Bench Sander, Air Sander, Disk Sander, Belt Sander, Belt Disc Sander, Wood Lathe, Delta Rockwell, Powermatic, oscillating sanders. Development. If there's even a little bit of noise in the data, you won't have an R-squared of one. We regularly hear of people (and whole research groups) that transition from Matlab to Python. This is very useful if you have an existing C library you need to integrate with Lua or quickly get a Lua script running on the C side of the game. 05x for V100 compared to the P100 in training mode – and 1. In the notebook 05. Released in 2012, Julia is designed to combine the speed of C with the usability of Python, the dynamism of Ruby, the mathematical prowess of MatLab, and the statistical chops of R. It is the technology of choice in companies where a single mistake can cost millions and speed matters,. Caffe is a deep learning framework that is supported with interfaces like C, C++, Python, MATLAB as well as the Command Line Interface. SAS - I used the free University edition. Learn More » Try Now ». They said they chose Julia because, "as the models that we use for forecasting and policy analysis grow more complicated, we need a language that can perform computations at high speed. R, MATLAB and Python are interpreted languages, which by nature incur more processing time. Shah Matlab) Those who convert ideas to products fastest will win HPAT vs. Consider the case where I have a function that provides the square of a Float64. Apache MXNet is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. So your question is not so much MATLAB vs FORTRAN as it is high-level versus low-level languages. com is Nigeria's number one online Shopping destination. V ectors and scalars are referred to as n-b y-1 and 1-b y-1 matrices resp ectiv ely. devectorized code in Julia. Maintain a single codebase that works seamlessly across every platform Roblox supports. It has gained wide acceptance in the academic, engineering and financial sectors. Even the assumption that I made about a high-level language leading to slower running code is not necessarily true. I'm pleased to have Dave Foti back for a look at objects and performance. Searching for suitable software was never easier. With reviews, features, pros & cons of Xcos. Radix MIT licensed Redis client which supports pipelining, pooling, redis cluster, scripting, pub/sub, scanning, and more. When dealing with arrays, we have two choices: apply a for loop or vectorize an array: apply the desired changes to all members of the array in a single statement. If you are already in a data analytics job, there's a good chance you have learned. Difficult to find programmers. While the syntax looks superficially Matlabby, that is about as far as the similarity goes. Simple logistic regression finds the equation that best predicts the value of the Y variable for each value of the X variable. This article has multiple issues. In this article, we will be going to discuss some data science tools that data scientists use to conduct data transactions. 19 KB, 31 pages and we collected some download links, you can download this pdf book for free. This article pro vides a general tutorial on FSK in its many forms. What about beautiful new languages like Go, Haskell, Rust, Scala, Julia, etc. Matlab declares war on Python! Economist Now do Matlab vs Mathematica And in terms of speed Julia isn't that much faster. Popuri, and Andrew M. Some of us are Lisp hackers. The slice representing Python, Octave and Julia together is too small to be visible. In this type of motion gravity is the only factor acting on our objects. It's totally fine and natural to write procedural code like you would in C, and when you do, you can get performance similar to C. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible. What determines the top speed in ice skating?. Learn more about Maplesoft. Escher is a graphical interface for Julia. Both the Python and R languages have developed robust ecosystems of open source tools and libraries that help data scientists of any skill level more easily perform analytical work. LSTMs are a powerful kind of RNN used for processing sequential data such as sound, time series (sensor) data or written natural language. Simple logistic regression finds the equation that best predicts the value of the Y variable for each value of the X variable. So, I decided to be (almost) consistent with the MATLAB implementation of rotm2euler. Julia vs Python: Julia language advantages. Skillshare is a learning platform with online classes taught by the world's best practitioners. Comparing Julia and R's Vocabularies By John Myles White on 4. I updated the post to show its use. Matlab Use "[]" to index matrices (so X(:,1)in Matlab should be X[:,1]) ones(N)produces an Nx1 vector in Julia as opposed to an NxN matrix element-wise operators need to explicitly have ". Hello guys, Thanks for starting this topic. What's new in 0. It is a mix of R, Matlab, Python and other similar languages. It is a N-by-N matrix associated with the Riemann hypothesis, which is true if and only if: DET(A) = O( N!. You learn Python, and use Numpy. We are not sure that we can achieve it with Julia that seems to assume that each user is expected to add/build on his/her own packages on top of Julia. We thought it will be also necessary you have a grip on the element-by-element Matrix division in Matlab. So, I decided to be (almost) consistent with the MATLAB implementation of rotm2euler. To read NetCDF files there are graphical tools like Matlab, IDL, ArcGIS, NCView, Xconv and developer tools like the Unidata NetCDF4 module for Python and Xarray. It claims speeds comparable to that of a compiled language such as C++, but in the form of a high-level programming language such as R or Matlab. Electrified technologies are defining both the powertrain and the automotive industry. While the syntax looks superficially Matlabby, that is about as far as the similarity goes. Along with Rust and Go it is one of the recent advances in imperative languages. While all now offer just-in-time (JIT) compilation, it may not always help much. Microsoft Offers a Faster, More Efficient R, But Is it Right for You? In early 2015, Microsoft announced its successful acquisition of Revolution Analytics, which made R available as an enterprise ready statistical and data science solution. 4 has a built-in cache that can greatly speed up Fibonacci() function. We must write the code to take into account the Fortran ordering used by Julia. Using the Pypy implementation, it runs around 44 times slower than in C++. This short course will provide an overview of the language, including comparisons with Matlab, R, and Python. I wrote these constraint in Matlab, but I am getting infeasible results which is due to constraints. Brewster, Sai K. Deploy a MongoDB database in the cloud with just a few clicks. How Julia Goes Fast pdf book, 244. We continue working with OLS, using the model and data generating process presented in the previous post. Edinburgh, U. Cython is an optimising static compiler for both the Python programming language and the extended Cython programming language (based on Pyrex). The authors explain their justification for the language as follows: We want a language that's open source, with a liberal license. New Parallel Programming Languages for Optimization Research John W. GitLab Enterprise vs GitHub Enterprise. I've had failed attempts to quit the Matlab addiction in the past, making me generally quite conservative about new platforms. This language has all the potential which can make it rank among the upcoming top programming languages in the world. Matlab (and Julia) try to give both speed and interactive use - they come to the table as the prototyping languages for high performance computing. I have prepared two simple scripts for both Julia and Matlab that are intended to do the same, however they seem to perform very differently. A reader, Ismael V. Routine statistical tasks such as data extraction, graphical summary, and technical interpretation all require pervasive use of modern computing machinery. Hessians, Gradients and Forms - Oh My!¶ Let's review the theory of optimization for multivariate functions. Nor has this filter been tested with anyone who has photosensitive epilepsy. Discussion on advances in GPU computing with R. Created in 2012 by a group of MIT students. The resemblance to MATLAB is more than coincidence; some of the key people in the Julia project have a background in numerical analysis and linear algebra, where MATLAB has long been a standard tool. Yes, but at least Julia does so in a way that allows for professional programmers to easily work with and maintain it. These include various mathematical libraries, data manipulation tools, and packages for general purpose computing. We'll explore this below. MATLAB, R and Python. Eight Advantages of Python Over Matlab Dr. A global materials science company focused on discovery, product innovation based on fluoropolymer technology and manufacturing, and rewarding careers for our associates. All efforts to make Blender work on specific configurations are welcome, but we can only officially support those used by active developers. A whopping 8 (in words, eight) hits for Python, 5 for Octave and none for Julia. Julia Interoperates well with both Python and Fortran which is useful for handling legacy code and it's free. Collecting Data. Searching for suitable software was never easier. Today I'd like to introduce you to a guest blogger, Dave Bergstein, who is a MATLAB Product Manager here at MathWorks. Why We Created Julia. 30pm Lunch - (Machine Learning Interest group meeting - open to all) 1. 3-4× faster than Matlab's and 2-3× faster than SciPy's (Fortran Cephes). Hi, my original problem is a dynammic programming problem in which I need to interpolate the value function on an irregular grid using a cubic spline. In this step-by-step tutorial, you'll learn about MATLAB vs Python, why you should switch from MATLAB to Python, the packages you'll need to make a smooth transition, and the bumps you'll most likely encounter along the way. The syntax looks fairly simple and it is about as fast as C (Fortran looks like it still is the Ferrari of scientific computing). From a report: Released in 2012, Julia is designed to combine the speed of C with the usability. Besides speed, Julia offers great features:. Julia Part II Julia for Data Science Matlab uses help, Julia switches into help mode by typeing ? Use @time to compare the speed of these two functions for. PureOJuliaFFT* performance* 2 4 8 16 32 64 128 256 512 1024 2048 4096 8192 16384 32768 65536 131072 262144 0 1000 2000 3000 4000 5000 6000 7000 8000 9000 10000 11000 12000 13000. Theano, Flutter, KNime, Mean. 評価関数の最適化 DWAの利点と欠点 利点 欠点 DWAのMATLABサンプルプログラム Pythonサンプルプログラム その他のロボティクスアルゴリズムのサンプルコード 参考資料 MyEnigm…. That's why we're on a mission to become the ultimate online playground for players and game developers alike. Lightweight - Bluefish tries to be lean and clean, as far as possible given it is a GUI editor. Please help improve it or discuss these issues on the talk page. Recall that in the single-variable case, extreme values (local extrema) occur at points where the first derivative is zero, however, the vanishing of the first derivative is not a sufficient condition for a local max or min. Searching for suitable software was never easier. 30 Child Stars Who Were Totally Okay with Losing the Spotlight. The Distance Between Two Vectors. Reasons Not to use Julia Somewhat rare programming language. Chess position evaluation with convolutional neural network in Julia; Optimization techniques comparison in Julia: SGD, Momentum, Adagrad, Adadelta, Adam; Backpropagation from scratch in Julia (part I) Random walk vectors for clustering (part I - similarity between objects) Solving logistic regression problem in Julia. This month saw the release of the long-awaited version 1. The use of integer variables. The challenge is making the Python fast. Julia*: A High-Level Language for Supercomputing. Given some vectors $\vec{u}, \vec{v} \in \mathbb{R}^n$, we denote the distance between those two points in the following manner. Lecture 7: Lab 2 & Pipelining David Black-Schaffer [email protected] This page also contains notes on differences between things that are different between Octave (in traditional mode) and MATLAB. It allows to use the SageMath notebook in your web browser with no noticable speed loss compared to a native Linux install. It uniquely identifies the institution issuing the card. For most of the geoscientific applications main advice would be to use vectorisation whenever possible, and avoid loops. Thus it's no surprise that Julia has many features advantageous for such use cases:. While the syntax looks superficially Matlabby, that is about as far as the similarity goes. Almost everything in Plots is done by specifying plot attributes. Matlab/Octave to Python conversion facility. Blender is cross-platform, it runs on every major operating system: Windows 10, 8 and 7 macOS 10. Gupta, A fourth Order poisson solver, Journal of Computational Physics, 55(1):166-172, 1984. Python and hence I have chosen not to implement Ergashev's methods. The language does these. Julia Studio is an free IDE dedicated to the language. Twenty percent of species currently face extinction, and that number could rise to 50 percent by 2100. > Overall my impression is that Julia (and Matlab's) language choices are driven by people who want to directly type their math paper into a program with as little thought and as few changes as possible. They said they chose Julia because, "as the models that we use for forecasting and policy analysis grow more complicated, we need a language that can perform computations at high speed. Cells(1, sht. Escher is a graphical interface for Julia. • Convenient form for online real time processing. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible. If the initial state is chosen as x(0) = (sI¡A)¡1B the output only consists of the pure exponential response and both the state. It allows to use the SageMath notebook in your web browser with no noticable speed loss compared to a native Linux install. Somewhat confusing type-system. It will be much nicer to maintain, and I think, given the people working with julia, will in time exceed(/become) the state of the art, in performance and accuracy. edu EE183 Spring 2003 EE183 Lecture 7 - Slide 2 Overview nFixed Point nDetermine your number format from the matlab code (what's the largest number you get?) nMap the -2 to 2 plane to a 0 to 63 screen by extracting bits and choosing a binary point. 4 has a built-in cache that can greatly speed up Fibonacci() function. While the syntax looks superficially Matlabby,(Is that really a word?) that is about as far as the similarity goes. Using Julia requires the gcc module to be loaded:. 30 Child Stars Who Were Totally Okay with Losing the Spotlight. With Julia, you won't be overburdened with the tasks of freeing and allocating memory. It turns out that for a simple processing task of calculating a T1 map of a lemon Julia is 10 times faster than Python and ~635 times faster than Matlab. Author(s) David M. Consider the case where I have a function that provides the square of a Float64. In particular, where is the main bottleneck for Julia in this task? Or, why does Matlab have an edge in this case? Second, my current Julia package is based on the generic and standard BLAS and LAPACK packages for MacOS. The Visualization ToolKit (VTK) is an open source, freely available software system for 3D computer graphics, image processing, and visualization used by thousands of researchers and developers around the world. 7 arrives but let's call it 1. This article provides a step-by-step tutorial on how to use ASP. Author: Thomas Breloff (@tbreloff) To get started, see the tutorial. Individual subscribers to Numerical Recipes Electronic who also own the book, can now convert their subscriptions to "lifetime" subscriptions. The authors are Jeff Bezanson, Stefan Karpinski, Viral B. Speed is a key feature of Julia. F undamen tals Matlab w orks with essen tially one kind of ob ject, a rectangular n umerical matrix. Twitter 21204 Followers. Find your best replacement here. Just for curiosity, tried to compile it with cython with little changes and then I rewrote it using loops for the numpy p…. A whopping 8 (in words, eight) hits for Python, 5 for Octave and none for Julia. In this tutorial you're going to learn how to work with large Excel files in Pandas, focusing on reading and analyzing an xls file and then working with a subset of the original data. Julia Part II Julia for Data Science Matlab uses help, Julia switches into help mode by typeing ? Use @time to compare the speed of these two functions for. 6MB), Code (TXT), Report (PDF) (Courtesy of anonymous MIT student. Python testing done with: Python 3. Specialties: Notable areas of past and current research pursuits include energy systems, heat exchanger technology, experimental and computational fluid dynamics and heat transfer, moving boundary problems, photonics and high-speed visualization, microscopy, modeling and analysis of single- and multi-phase thermal-fluid systems, nucleation and. On the other hand, Matlab shows significant speed improvements and demonstrates how native linear algebra code is preferred for speed. One of the best features of Lua is its very well designed C API. , one of the co-authors of the language. 05 Jan 2015. 8 is the latest official version of FFTW (refer to the release notes to find out what is new). Here is the julia code: FFT speed comparison between Matlab and Julia Nrows=1001; Ncols=501; A=complex(r. F undamen tals Matlab w orks with essen tially one kind of ob ject, a rectangular n umerical matrix. Plots - powerful convenience for visualization in Julia. Note that this filter is not FDA approved, nor are we medical professionals. (Supports SSE/SSE2/Altivec, since version 3. Various math functions and Built-in library commands are used to analyze data, generate plots and perform complex Integrations and Differentiations. Against a background of increasing energy demand and rising fuel prices, hybrid-electric propulsion systems have the potential to significantly reduce fuel consumption in the aviation industry, par. While doing a recursive addition to your path is more difficult in Matlab, the process of adding a single directory is a bit easier, in my opinion. The benchmarks on the Julia website 1 2 include R and Matlab as competitors. Used with permission. Julia vs Python; Basic Comparison of Python, Julia, Matlab, IDL and Java (2018 Edition). Update 2: Python and Matlab code edited on 4/5/2015. You have to pick the solution according to the problem, and Matlab and R definitely are superior to Python for certain problems. The Gurobi Optimizer enables users to state their toughest business problems as mathematical models and then finds the best solution out of trillions of possibilities. Exporting MPS files is easier than you may think. R, MATLAB and Python are interpreted languages, which by nature incur more processing time. Eight Advantages of Python Over Matlab Dr. For this experiment, they executed some tasks of simple recursive Fibonacci implementation, which resulted that the Julia is 40 times faster than Python, 100 times faster than R language, and around 1000 times faster than MATLAB. 30pm Lunch - (Machine Learning Interest group meeting - open to all) 1. Interpreters vs Compilers Compilers – can apply code-wise powerful optimization – practically have no run-time overhead → Speed Interpreters – allow easy code introspection – offer high-level language constructs and tools → Ease of use. Welcome to the official site of the Los Angeles Clippers. Madrid Area, Spain *Haramain High-Speed Rail - Traction & TCMS Engineer Main responsibilities: system definition, modelling, simulations (Matlab/Simulink), requirements specification, static & dynamic testing. As the main changes, Matlab and R have considerably improved their performance, in the case of Matlab to make it competitive, for example, with Rcpp, without the need to learn any C++. Update 1: A more complete and updated speed comparison can be found here. It provides a sophisticated compiler, distributed parallel execution, numerical accuracy, and an extensive mathematical function library. Skillshare is a learning platform with online classes taught by the world's best practitioners. I used Julia since version 0. To do this, just click on the "Set Path" icon at the top of the Matlab IDE (under the Home tab). It includes (an)isotropic linear elastic, hyperelastic and viscoplastic material models for static, frequency, buckling and implicit/explicit dynamic calculations. In 2016, there were 772 weather and disaster events, triple the number that occurred in 1980. within a programming language, but Python is too slow and. The scripts don't do anything productive they just mimic the structures of most of my codes in Matlab. Notepad++ 7. While you should read their rich comparison, a brief summary of their assessment is that Julia. Python and hence I have chosen not to implement Ergashev's methods. WHAT IS JULIA? Julia is described as a "high-level, high-performance dynamic programming language for numerical computing". Multistep pipelines: Many data science tasks can be divided into a pipeline of completely independent steps. Julia is a new languange for technical computing. Trapezoidal rule to approximate the integral of x^2. count: true --- # « Julia, my new friend for computing and optimization. In the Julia, we assume you are using v1. Searching for suitable software was never easier. Speed: The speed rating is assigned by myself as essentially an average of the speed tests I have found by Googling "comparison of programming languages by performance". But if I can do the same (or equivalent tweaks) in Matlab, then the relative speed (which is what we really care about) would be around the same. Cells(1, sht. Please help improve it or discuss these issues on the talk page. Our customers now face a multitude of choices and this alone is a significant challenge. MatLab Publish Tab, How to use MatLab to prepare a report or homework; First MatLab homework: A basic Euler solver for y'=f(x,y), To run it you have three options: 1) Use a lab computer which already has MatLab on it (Most CS and Eng computer labs do) 2) Install Matlab to your own laptop 3) Connect to cloud version of MatLab. Science quickly creates large code bases, unfortunately, so far it's mostly Python and Matlab which makes it hard to use the algorithms in real world applications. For this experiment, they executed some tasks of simple recursive Fibonacci implementation, which resulted that the Julia is 40 times faster than Python, 100 times faster than R language, and around 1000 times faster than MATLAB. Project Lab Renewable and Sustainable Energy Systems Notes on registration. table' data. Download Cracks, Serial Keys, Patches for Windows, Mac and Android. Some of us are Lisp hackers. Similarly, Matlab. A description is given here and more information can be obtained by email. It features the speed of C and the dynamics of Ruby. Different aspects of using Julia for implementing MPM such as vectorized vs de-vectorized codes, efficient use of composite types and the choice of concrete types over abstract types etc are discussed. How to set a timer on a Windows 10 PC. edu EE183 Spring 2003 EE183 Lecture 7 - Slide 2 Overview nFixed Point nDetermine your number format from the matlab code (what's the largest number you get?) nMap the -2 to 2 plane to a 0 to 63 screen by extracting bits and choosing a binary point. Bluefish Features. Immediately ship your projects on phones, desktops, consoles, and VR with a single click. There is also a chapter on IJulia, which is not really a plotting library, but can incorporate plots from other libraries. We gloss over their pros and cons, and show their relative computational complexity measure. It is still relatively young for a language but has reached its first stable release and is starting to be adopted more widely across industry and academia.
|
CommonCrawl
|
Spatial modeling of HIV and HSV-2 among women in Kenya with spatially varying coefficients
Elphas Okango1,
Henry Mwambi1 &
Oscar Ngesa1,2
BMC Public Health volume 16, Article number: 355 (2016) Cite this article
Disease mapping has become popular in the field of statistics as a method to explain the spatial distribution of disease outcomes and as a tool to help design targeted intervention strategies.
Most of these models however have been implemented with assumptions that may be limiting or altogether lead to less meaningful results and hence interpretations. Some of these assumptions include the linearity, stationarity and normality assumptions. Studies have shown that the linearity assumption is not necessarily true for all covariates. Age for example has been found to have a non-linear relationship with HIV and HSV-2 prevalence. Other studies have made stationarity assumption in that one stimulus e.g. education, provokes the same response in all the regions under study and this is also quite restrictive. Responses to stimuli may vary from region to region due to aspects like culture, preferences and attitudes.
We perform a spatial modeling of HIV and HSV-2 among women in Kenya, while relaxing these assumptions i.e. the linearity assumption by allowing the covariate age to have a non-linear effect on HIV and HSV-2 prevalence using the random walk model of order 2 and the stationarity assumption by allowing the rest of the covariates to vary spatially using the conditional autoregressive model. The women data used in this study were derived from the 2007 Kenya AIDS indicator survey where women aged 15–49 years were surveyed. A full Bayesian approach was used and the models were implemented in R-INLA software.
Age was found to have a non-linear relationship with both HIV and HSV-2 prevalence, and the spatially varying coefficient model provided a significantly better fit for HSV-2. Age-at first sex also had a greater effect on HSV-2 prevalence in the Coastal and some parts of North Eastern regions suggesting either early marriages or child prostitution. The effect of education on HIV prevalence among women was more in the North Eastern, Coastal, Southern and parts of Central region.
The models introduced in this study enable relaxation of two limiting assumptions in disease mapping. The effects of the covariates on HIV and HSV-2 were found to vary spatially. The effect of education on HSV-2 status for example was lower in North Eastern and parts of the Rift region than most of the other parts of the country. Age was found to have a non-linear effect on HIV and HSV-2 prevalence, a linearity assumption would have led to wrong results and hence interpretations. The findings are relevant in that they can be used in informing tailor made strategies for tackling HIV and HSV-2 in different counties. The methodology used here may also be replicated in other studies with similar data.
The World Health Organization (WHO) places at more than 1 million, the number of people who acquire sexually transmitted infections (STI) daily. By 2013 more than 530 million (about 7.5 %) had the virus that causes genital herpes or the herpes simplex virus type 2 (HSV-2) [1]. Out of these, it is estimated that about 123.7 million or 23 % resided in sub-Saharan Africa, among whom 63 % were women [2]. HSV-2 prevalence in the age group 15–49 in sub-Saharan Africa region ranges from 30 to 80 % among women and from 10 to 50 % among men [3]. There were about 35 million individuals living with HIV in sub-Saharan Africa by the end of 2013 with 2.1 million new infections [4]. HSV-2 is associated with a two to three-fold increased risk of HIV acquisition and an up to five-fold increased risk of HIV transmission per-sexual act, and may account for 40 to 60 % of new HIV infections in populations where HSV-2 has a high prevalence [2]. HIV and HSV-2 share common risk factors e.g. education level, place of residence, and age among others. Therefore understanding the spatial distribution, the dynamics and the underlying factors that propagate the spread of these diseases will help in ultimately winning the war against them. STIs can have serious consequences beyond the immediate impact of the infection itself, through mother-to-child transmission (MTCT) of infections and chronic diseases. Drug resistance is a major threat to reducing the impact of STIs worldwide [1]. The national HIV and HSV-2 prevalence rates in Kenya within the adult population (15–64 years) were estimated to be as high as 5.6 % and 7.1 % respectively [5], with a wide gender and geographical variation. The North Eastern region had HIV prevalence of as low as 2.1 % while regions around Lake Victoria and the Western region had prevalence ranging from between 13–25 % [6]. HIV and HSV-2 prevalence by age have a non-linear relationship assuming an inverted U shape [6, 7]. HIV prevalence increases with age until it plateaus at between ages 25–35, then starts decreasing with increasing age. HSV-2 prevalence increases with age up to between ages 35–45 then begins to decline with increasing age.
In the conventional generalized linear regression models applied to spatial data, many studies have assumed stationarity in that the same stimulus of a disease predictor provokes the same response in all parts of the study region [8–10]. This assumption is highly untenable for spatial processes. This may be as a result of sampling variation, intrinsically different relationships across space e.g. attitudes, cultures, preferences and model misspecification. It is therefore realistic to assume that the regression coefficients vary across space [11]. The issue of spatial non-stationarity can be addressed by allowing the relationships we are measuring to vary over space through the geographically weighted regression (GWR) model where the weights applied to observations in a series of locally weighted regression models across the study area are determined by a spatial kernel function [11], or the Bayesian spatially varying coefficients process (BSVCP), where spatially varying coefficients are modeled as a multivariate spatial process [12]. In the BSVCP model as discussed by Assuncao et al, the covariates are allowed to vary spatially by assigning its coefficients the Bayesian autoregressive (BAR), simultaneous autoregressive (SAR) or the conditional autoregressive (CAR) model [13]. Assuncao et al applied the BSVCP to model agricultural development in Brazil. The model showed significant regional differences in agricultural development [14]. Evidence of spatially varying parameters, even against strong prior belief on the absence of such variation, can be indicative of spatial differences of database collection procedures e.g. large differences on underreporting rates [13]. Several studies that use the linear predictor class of models including both the general and generalized linear models assume that all the covariates in the study have a linear relationship with the response variable. This linear relationship may not hold for all variables as in our case; age, which has a non-linear relationship with the response variable. Our objective is to perform a spatial modeling analysis while relaxing the stationarity and the linearity assumption by respectively employing the BSVCP and the random walk model of order 2 to model HIV and HSV-2 among women in Kenya.
The data for this study was obtained from the Kenya AIDS Indicator Survey (KAIS) which was carried out by the Kenyan government with financial support from the United States President's Emergency Plan for AIDS Relief (PEPFAR) and the United Nations (UN). The main aim of the survey was to obtain high quality data on the prevalence of HIV and Sexually Transmitted Infections (STI) among adults and to assess the knowledge of HIV and STIs in the population.
The sampling frame for KAIS was the National Sample Survey and Evaluation Programme IV (NASSEP IV). It consisted of 1800 clusters comprising 1260 rural and 540 urban clusters; of these, 294 rural and 141 urban clusters were sampled for KAIS. The overall design for KAIS 2007 was a stratified, two-stage cluster sampling design. The first stage involved selecting clusters from NASSEP IV, and the second stage involved the selection of household for KAIS with equal probability in the urban-rural strata within the districts. A sample of 415 clusters and 10,375 households were systematically selected for KAIS. A uniform sample of 25 households per cluster was selected using an equal probability systematic sampling method.
The survey was twofold: A household questionnaire was used to collect the characteristics of the living environment and an individual questionnaire to collect information on demographic characteristics and the knowledge of HIV and STIs on men and women aged 15–64 years. A representative sample of households and individuals was selected from eight provinces in the country. Each individual was asked for consent to provide a venous blood sample for HIV and HSV-2 testing. More information on survey methodologies used in collecting the data is found in the final KAIS, 2007 report [6]. This study uses the 2007 data even though a new round of KAIS, 2012 [5] has been done. The final release of this new data had not been made hence the data was not available for use. This study uses the women's data from the KAIS, 2007 survey. Information from 4864 women, aged 15–64 years who had provided venous blood for HIV and HSV-2 testing and also had full covariate information was used in the analysis. In the data, age was captured as both categorical and continuous while all other covariates were categorical. Readers are directed to the KAIS, 2007 report [6] for more information. An initial exploratory data analysis was carried out using a univariate standard logistic regression model to determine the association of each single covariate with the outcome variable (HIV and HSV-2 status). These variables were categorized into four groups, namely: demographic, social, biological and behavioral [9].
From this initial analysis, education level, age at first sex, perceived risk, partners in the last 1 year, marital status, place of residence, STI status in the last 1 year and age of the respondent were found to be associated with HIV and HSV-2 infection.
The covariates were tested for significance by fitting a univariate standard logistic model between each single covariate with the outcome variables (HIV and HSV-2 status). The association was considered significant at 5 % significance level. These are shown in Tables 1 and 2.
Table 1 Exploratory data analysis for HIV
Table 2 Exploratory data analysis for HSV-2
Let y ijk be the disease k status (0/1), k = 1 for HIV and k = 2 for HSV-2, for individual j in county i: i = 1, 2, …, 46. y ij1 = 1 if individual j in county i is HIV positive and zero otherwise and y ij2 = 1 if individual j in county i is HSV-2 positive and zero otherwise. This study assumes the dependent variable y ij1 and y ij2 are univariate Bernoulli distributed, i.e. y ij1|p ij1 ~ Bernoulli(p ij1) and y ij2|p ij2 ~ Bernoulli(p ij2).
The p continuous independent variables are contained in the vector X ijk = (x ij1, x ij2, …, x ijp )' while W ijk = (w ij1, w ij2, …, w ijr )' contains r categorical independent random variables with the first component accounting for intercept. In this study, p = 1 (age) and r = 8.
The unknown mean response namely E(y ijk ) = p ijk relates to the independent variable as follows:
$$ \begin{array}{l}h\left({p}_{ij1}\right)={\boldsymbol{X}}^{\boldsymbol{T}}{\boldsymbol{\beta}}_{\boldsymbol{1}}+{\boldsymbol{W}}^{\boldsymbol{T}}{\boldsymbol{\gamma}}_{\boldsymbol{1}},\ \mathrm{f}\mathrm{o}\mathrm{r}\ \mathrm{H}\mathrm{I}\mathrm{V}\ \mathrm{and}\\ {}h\left({p}_{ij2}\right)={\boldsymbol{X}}^{\boldsymbol{T}}{\boldsymbol{\beta}}_{\boldsymbol{2}}+{\boldsymbol{W}}^{\boldsymbol{T}}{\boldsymbol{\gamma}}_{\boldsymbol{2}}\ \mathrm{f}\mathrm{o}\mathrm{r}\ \mathrm{H}\mathrm{S}\mathrm{V}\hbox{-} 2.\end{array} $$
Where h(.) is a logit link function, β is a p dimensional vector of regression coefficients for the continuous independent variables, and γ is a r dimensional vector of regression coefficients for the categorical independent variables. A random walk model of order 2 (RW2) and a convolution model were employed in order to cater for both the non-linear effects of the continuous covariates and the spatial autocorrelation in the data.
The RW2 model approach relaxed the highly restrictive linear predictor by a more flexible semi-parametric predictor, defined as:
$$ \begin{array}{l}h\left({p}_{ij1}\right)={\displaystyle \sum_{t=1}^p{f}_t\left({x}_{ijt}\right)+{f}_{spat}\left({s}_{i1}\right)}+{\boldsymbol{W}}^{\boldsymbol{T}}{\boldsymbol{\gamma}}_{\boldsymbol{1}}\ \mathrm{f}\mathrm{o}\mathrm{r}\ \mathrm{H}\mathrm{I}\mathrm{V}\ \mathrm{and}\\ {}h\left({p}_{ij2}\right)={\displaystyle \sum_{t=1}^p{f}_t\left({x}_{ijt}\right)+{f}_{spat}\left({s}_{i2}\right)}+{\boldsymbol{W}}^{\boldsymbol{T}}{\boldsymbol{\gamma}}_{\boldsymbol{2}}\ \mathrm{f}\mathrm{o}\mathrm{r}\ \mathrm{H}\mathrm{S}\mathrm{V}\hbox{-} 2\end{array} $$
The function f t (.) is a non-linear twice differentiable smooth function for the continuous covariate and f spat (s ik ) is a factor that caters for the spatial effects of each county. This study utilized the convoluted spatial structure which assumes that the spatial effect can be decomposed into two components: spatially structured and spatially unstructured i.e. f spat (s ik ) = f str (s ik ) + f unstr (s ik ) , k = 1, 2 [9, 15]. The spatially unstructured random effects cover the unobserved covariates that are inherent within the counties or the correlation within the counties e.g. common cultural practices, climate, cultures etc. while the spatially structured random effect accounts for any unobserved covariates which vary spatially among counties. This is called spatial autocorrelation and it is technically defined as the dependence due to geographical proximity. Thus the final model is expressed as:
$$ h\left({p}_{ijk}\right)={\displaystyle \sum_{t=1}^p{f}_t\left({x}_{ijt}\right)+{f}_{str}\left({s}_{ik}\right)+{f}_{unstr}\left({s}_{ik}\right)}+{\boldsymbol{W}}^{\boldsymbol{T}}{\boldsymbol{\gamma}}_{\boldsymbol{k}}, $$
$$ with\;k=1\;for\;HIV\; and\;k=2\;for\;HSV-2 $$
Parameter estimation
This study used a full Bayesian estimation approach where parameters were assigned prior distributions as will be discussed in the priors' specification section.
Non-linear effects
Several studies have discussed extensively the methods for estimating the smooth function f t (.) [16–18]. The penalized regression splines model proposed by Eliers and Marx [18] for example is commonly used. Here, the assumption is that the effect of the continuous covariates can be approximated using the polynomial spline. They assumed that the smooth function f t (.) can be estimated by a spline of degree l with K equally spaced knots; x p,min = ψ p1 < ψ p2 ⋯ < ψ pk − 1 < ψ pK = x p,max. Many studies have explored the relationships between the Gaussian Markov Random Fields (GMRF) and smoothing splines [19–21].In this study we used the random walk model for estimating the smooth function f t (.). This is briefly discussed in Appendix 1.
Spatially varying coefficients
As stated before, many studies have been done with the assumption that the relationship between the explanatory variable and the response variables in a regression model are constant across the study region [8–10]. This assumption is unrealistic for spatial processes as factors such as sampling variation, different relationships across space e.g. attitudes, preferences, culture etc. contribute to a different response to the same stimuli as one moves across space. Two competing spatially varying models are the GWR and the BSVCP. The GWR addresses this by estimating the coefficients β ' s by the weighted least squares method, where more emphasis in terms of weights are placed on the observations which are close to location i, since it is assumed that the observations close to i exert more influence on the parameter estimates at location i than those farther away [11]. The weighting schemes can be fixed or adaptive. In the fixed scheme, observations that are within some distance d are given the weight of 1 while those farther away beyond some distance d from location i are given a weight of zero, while in the adaptive scheme, weights inside some radius d are made to decrease monotonically to zero as the radius increases. In this study we used the BSVCP (Appendix 2) model to relax the stationarity assumption, the covariates are allowed to vary spatially by assigning its coefficients the conditional autoregressive (CAR) model [13].
Priors for the spatial components
The prior for the structured random effects was defined to follow the CAR model while for the unstructured random effects, the independently and identically distributed normal distribution.
Posterior distribution
This is the distribution of the parameters after observing the data. The posterior distribution is obtained by updating the prior distribution with the observed data. Since our study is fully Bayesian, inference is made by sampling from this posterior distribution. Markov Chain Monte Carlo (MCMC) is the most common approach to do inference for latent Gaussian models however this method is slow and performs poorly when applied to such models [22]. The Integrated Nested Laplace (INLA) criterion is a relatively new technique developed to circumvent these shortfalls [22]. The posterior distribution for the latent Gaussian model is:
$$ \pi \left(\boldsymbol{x},\boldsymbol{\theta} \left|\boldsymbol{y}\right.\right)\;\alpha\;\pi \left(\boldsymbol{\theta} \right)\pi \left(\boldsymbol{x}\left|\boldsymbol{\theta} \right.\right)\prod_{i\in I}\pi \left({y}_i\left|{x}_i,\boldsymbol{\theta} \right.\right) $$
$$ \alpha\;\pi \left(\boldsymbol{\theta} \right){\left|\boldsymbol{Q}\left(\boldsymbol{\theta} \right)\right|}^{\frac{n}{2}} \exp \left(-\frac{1}{2}{x}^TQ\left(\boldsymbol{\theta} \right)x+{\displaystyle \sum_{i\in I} \log \pi \left({y}_i\left|{x}_i,\boldsymbol{\theta} \right.\right)}\right). $$
Where x is the class of latent fields, θ is the set of hyper parameters and y is the data. In the INLA approach, the posterior marginals of interest are:
$$ \pi \left({x}_i\left|\boldsymbol{y}\right.\right)={\displaystyle \int \pi \left({x}_i\left|\boldsymbol{\theta}, \boldsymbol{y}\right.\right)}\;\pi \left(\boldsymbol{\theta} \left|\boldsymbol{y}\right.\right)d\boldsymbol{\theta}\ \mathrm{and}\ \pi \left({\theta}_j\left|\boldsymbol{y}\right.\right)={\displaystyle \int \pi \left(\boldsymbol{\theta} \left|\boldsymbol{y}\right.\right)d{\boldsymbol{\theta}}_{\mathit{\hbox{-}}j}}, $$
and these are used to construct the nested approximations:
$$ \tilde{\pi}\left({x}_i\left|\boldsymbol{y}\right.\right)={\displaystyle \int \tilde{\pi}\left({x}_i\left|\boldsymbol{\theta}, \boldsymbol{y}\right.\right)}\;\tilde{\pi}\left(\boldsymbol{\theta} \left|\boldsymbol{y}\right.\right)d\boldsymbol{\theta} \kern0.5em \mathrm{and}\ \tilde{\pi}\left({\theta}_j\left|\boldsymbol{y}\right.\right)={\displaystyle \int \tilde{\pi}\left(\boldsymbol{\theta} \left|\boldsymbol{y}\right.\right)d{\boldsymbol{\theta}}_{\mathit{\hbox{-}}j}}. $$
The analyses in this study were carried out using the R software with the INLA package. The codes used for this analysis can be found in Additional file 1.
Model diagnostics
The models were compared using the deviance information criterion (DIC) suggested by Spiegelhalter et al. [23]. The best fitting model is one with the smallest DIC. The DIC value is obtained as: \( DIC=\overline{D}\left(\theta \right)+pD \), where \( \overline{D} \) is the posterior mean of the deviance that measures the goodness of fit while pD gives the effective number of parameters in the model which penalizes for complexity of the model. In DIC, low values of \( \overline{D} \) indicate a better fit while small values of pD indicate model parsimony. One challenge with the DIC is, how big the difference in DIC values of two competing models needs to be in order to declare one model as being better than the other is not well defined. Studies have declared that a difference in DIC of 3 between two models cannot be distinguished while for a difference of between 3 and 7 the two models can be weakly differentiated [23].
Application/Data analysis
The following sets of models were investigated in order to understand the effect of the observed covariates and unobserved effects on the distribution of HIV and HSV-2 in Kenya among the female population.
Model 1: This is a model of fixed categorical covariates which are assumed to have linear effects on the response variable namely, education level, age at first sex, perceived risk, partners in the last 1 year, marital status, place of residence, STI status in the past 1 year, number of times one had stayed away from home in the past 1 year and one continuous covariate, age, modeled with a non-linear smooth function: the RW2 model. Model 1 does not take into account the spatially structured and the spatially unstructured random effects and the two diseases are modeled independently.
Model 2: This is an additive model that assumes linear effects of the categorical covariates listed in model 1 above, non-linear effect of the continuous covariate age and spatially unstructured random effect which caters for the unobserved covariates that are inherent within the counties specified by the identically and independently distributed (iid) normal distribution.
Model 3: This model explores the effect of the linear covariates listed in model 1 above, non-linear covariate age and spatially structured random effect which accounts for any unobserved covariates which vary spatially among counties, specified by the CAR model.
Model 4: Examines the effects of the nonlinear effects of age, linear effects of the categorical covariates and a convolution of spatially structured and spatially unstructured random effect, specified by the CAR model and the iid normal distribution respectively.
Models 5–8 are similar to models 1–4 respectively, the only difference is that the regression coefficients γ in these models are assumed to vary spatially and are assigned CAR priors.
$$ \begin{array}{l} Mode{l}_1: logit\left({\rho}_{ij1}\right)={\beta}_{01}+f(age)+{\boldsymbol{w}}^{\boldsymbol{T}}\boldsymbol{\gamma}\;for\;HIV\\ {}\kern2.04em logit\left({\rho}_{ij2}\right)={\beta}_{02}+f(age)+{\boldsymbol{w}}^{\boldsymbol{T}}\boldsymbol{\gamma}\;for\;HSV-2\end{array} $$
$$ \begin{array}{l} Mode{l}_2: logit\left({\rho}_{ij1}\right)={\beta}_{01}+f(age)+{\boldsymbol{w}}^{\boldsymbol{T}}\boldsymbol{\gamma} +{f}_{unstr}\left({s}_{i1}\right)\;for\;HIV\\ {}\kern2.28em logit\left({\rho}_{ij2}\right)={\beta}_{02}+f(age)+{\boldsymbol{w}}^{\boldsymbol{T}}\boldsymbol{\gamma} +{f}_{unstr}\left({s}_{i2}\right)\;for\;HSV-2\end{array} $$
$$ \begin{array}{l} Mode{l}_3: logit\left({\rho}_{ij1}\right)={\beta}_{01}+f(age)+{\boldsymbol{w}}^{\boldsymbol{T}}\boldsymbol{\gamma} +{f}_{str}\left({s}_{i1}\right)\;for\;HIV\\ {}\kern2.28em logit\left({\rho}_{ij2}\right)={\beta}_{02}+f(age)+{\boldsymbol{w}}^{\boldsymbol{T}}\boldsymbol{\gamma} +{f}_{str}\left({s}_{i2}\right)\;for\;HSV-2\end{array} $$
$$ \begin{array}{l} Mode{l}_4: logit\left({\rho}_{ij1}\right)={\beta}_{01}+f(age)+{\boldsymbol{w}}^{\boldsymbol{\prime}}\boldsymbol{\gamma} +{f}_{unstr}\left({s}_{i1}\right)+{f}_{str}\left({s}_{i1}\right)\;for\;HIV\\ {}\kern2.28em logit\left({\rho}_{ij2}\right)={\beta}_{02}+f(age)+{\boldsymbol{w}}^{\boldsymbol{\prime}}\boldsymbol{\gamma} +{f}_{unstr}\left({s}_{i2}\right)+{f}_{str}\left({s}_{i2}\right)\;for\;HSV-2\end{array} $$
Ethical clearance was granted by the institutional review board of the Kenya Medical Research Institute (KEMRI) and the US Centers for Disease Control and Prevention. No ethical clearance was required from the University of Kwazulu-Natal or any other institution save for the aforementioned. The consent procedure, stated below, was approved by KEMRI and the US centers for Disease Control and Prevention.
Participants provided separate informed oral consent for interviews, blood draws and blood storage and the interviewer signed the consent form to indicate whether or not consent was given for each part. An oral informed consent was given for participants in the age of 18–64 while for minors, in the age group 15–17, oral informed consent was obtained from a parent/guardian or other adult responsible for the youth's health and welfare before the youth was asked for his/her consent. Only after the parent or guardian had agreed, was the consent of the adolescent sought.
Investigators in the study got a waiver of documentation of informed consent for all participants due to the fact that the research presented very minimal risk of harm to the individuals. The waiver did not adversely affect the rights and welfare of the participants, and the survey involved no procedures for which written consent is normally required outside the research context in Kenya.
Model assessment and comparison
Table 3 shows the DICs for the four separately fitted models for HIV and HSV-2. These four models were assumed to have stationary coefficients. Table 4 shows the DICs for the four separate models with spatially varying coefficients. The model with the smallest DIC provides the best fit. Studies have however reported that two models with a difference of 3 or less in DIC are indistinguishable, while a difference of between 3 and 7 suggests that the two models are weakly distinguishable [23]. From the tables, all the spatially varying models have a lower DIC as compared with their corresponding stationary models. For HIV, Spatially varying coefficient models 6, 7, 8 are not significantly different form each other and from the corresponding stationary model counterparts as the difference in DIC is less than 3. This suggests that the covariates for HIV do not vary significantly across space. For HSV-2, the spatially varying models are significantly better than the stationary models since they have significantly lower DICs. This suggests that the covariates provoke different responses across space for HSV-2. Spatially varying model 8 provided the best fit for HSV-2.
Table 3 Stationary model
Table 4 Spatially varying coefficients
We therefore present and discuss the results based on model 8 for both HIV and HSV-2, which allows the covariates to vary spatially by the CAR model and also captures the structured and the unstructured random effects.
Spatially varying effects
The DIC values indicate that the SVC models are better than the stationary ones, especially for the HSV-2 model. The choropleth maps show the varying effects of each covariate across space. Figure 1 shows the map of Kenya. Kenya is positioned on the equator on Africa's East Coast. The administration units in Kenya were provinces before changing to counties after the 2010 promulgation of the constitution. There are 47 counties in Kenya but this study discusses results from 46 counties as the KAIS 2007 was not conducted in Samburu County due to insecurity.
Map of Kenya
Though the SVC model for HIV was not significantly different from its stationary counterpart, the choropleth maps suggest that the effects of some of the covariates vary across space. The effect of education on HIV prevalence among women was more in the North Eastern, Coastal, Southern regions and parts of Central region indicated by the yellow to orange shading in the choropleth map in Fig. 2. Age at first sex also had a greater effect in those parts where education had greater effects as compared with the other parts of the country suggesting a correlation between education and age at first sex. The effect of number of partners had in the last 1 year was almost the same across the country except for some parts of West, Lake and Central region, where the effect was greater indicated by yellow/orange shading on the choropleth map in Fig. 2. The effect of frequency of travel away was also evident in the North Eastern, Coastal and Southern regions and parts of Central region while that of marital status was dominant in the Lake region.
Spatially varying effects of covariates on HIV status
HSV-2
The effect of education on HSV-2 status was lower in North Eastern and parts of Rift region than most of the other parts of the country shown by the blue shading on the map in Fig. 3. Age at first sex also had a greater bearing in the Costal and some parts of North Eastern, parts of Rift and West and Lake (pink/yellow shading) suggesting either early marriages or child prostitution. The highest rates of arranged marriages among adolescent girls in Kenya are found in Northeastern (73 %), Rift Valley (22 %), and Coast (21 %) provinces [24]. A study by the University of Chicago in Kenya and Zambia found that among 15-to-19 year old girls who are sexually active, being married increased their chance of HIV and other STIs by more than 75 %. This is due to the fact that most of these young marrieds were more likely to be in a polygamous union [25]. Partners had in the last 1 year had more effect on HSV-2 status in the West and Lake regions and some parts of the Central and Southern regions depicted by yellow shading on Fig. 3, while the number of partners had in the last 1 year had less effect in the regions with blue shading. The effect of place of residence also varied spatially. The effects were higher in the West and Lake, Southern and parts of Central and Coastal and Rift regions depicted by yellow shading on Fig. 3.
Spatially varying effects of covariates on HSV-2 status
The spatial effects based on model 4 indicate that HIV prevalence varies spatially with areas in the Central, West and Lake regions recording the highest prevalence. HIV prevalence is lowest in the North Eastern region (shown by blue shadding on Fig. 4) with some significant prevalence in some parts of the Coastal region. On the other hand, HSV-2 prevalence is also highest in the West and Lake regions, but also generally high across the country as shown in the yellow/orange shadding on the choropleth map in Fig. 4. Most regions with high HSV-2 prevalence had aslo a high HIV prevalence. Identifying the effects of individual covariates on each area can go a long way in informing strategies to deal with HIV and HSV-2 prevalence.
Spatial effects of HIV and HSV-2
The non-linear effect of age
Figure 5 shows the nonlinear association between age of an individual and HIV infection and age of an individual and HSV-2 infection. The figures give the posterior mean of the smooth function and their corresponding 95 % CI. From the figures it is evident that there is a nonlinear relationship between age and HIV and HSV-2 infection. An assumption of linear relationship would have led to misleading results and subsequently wrong interpretations. The chance of HIV infection increases with age up to an optimum age of about 30 years then starts declining with increase in age. For HSV-2, the likelihood of infection increases with age up to an optimum age of about 40 years then starts to decline thereafter with increasing age. The results depict that the prevalence of HIV peaks earlier in age than HSV-2.
Non-linear effect of age on HIV and HSV-2
This study found that the effect of the covariates on HIV and HSV-2 prevalence varied spatially, although the spatially varying HIV model was not significantly different from the stationary one. This could be due to bias introduced by deletion of cases. A stationarity assumption would therefore have masked these varying effects. The major strength of the spatially varying model is that it is able to unmask the effect of each covariate on HIV and HSV-2 prevalence in each region. Age at first sex had greatest effect on HSV-2 prevalence in the Central and parts of Rift region and more effect on HIV prevalence in the Coastal, North Eastern and Central regions. This may suggest either early marriages,child prostitution or teenage sex. Intervention strategies geared towards delaying the age at first sex, stoping childhood prostitution or early marriages can be put in place in these regions. Partners had in the last 1 year had more effect on HSV-2 status in the West and Lake regions and some parts of the Central region. Residents in these regions can be educated on faithfulness, use of protection and/or absteinance. Place of residence had more effect on HSV-2 prevalence in the Southern, parts of Central, West, Lake and Coastal regions. Various studies have documented that education level is inversely related to HIV and HSV-2 infection [26, 27]. Education level provoked more response in HIV prevalence in the North Eastern, Coastal, Southern and parts of Central region. In the Coastal region where tourism is rife, vices such as child prostitution and drug abuse can greatly contribute to the prevalence of HIV and HSV-2. Education can not only detract an individual from activities that can lead to acquisition of HIV and/or HSV-2, but also make them aware of the safe practices. The effects of frequency of travel away on HIV prevalence was dominant in Coastal, Central and Rift regions, with some parts of North Eastern region having a near zero effect while for HSV-2 prevalence, the effect was dominant in the West and Lake regions and some parts of Central and Rift region. This shows that frequency of travel away has different effects across the regions suggesting that women in the Coastal, Central and Rift regions travel away from their homes/regions more than women from the rest of the country. Frequency of travel away also has different effects on HIV and HSV-2. Since its effect on HSV-2 is dominant in West and Lake region, this could mean that the regions visited by these women have high HSV-2 prevalence and the same applies for HIV. The 2011-12 Tanzanian HIV/AIDS and malaria indicator survey found that women who traveled away from home five or more times in a year were twice likely to be infected with HIV(STIs) compared to women who did not travel [28]. This could be due to the fact that these women are more likely to engage in risky sexual behaviours when they are away from home. The effect of marital status on HIV prevalence was dominant in the West and the Lake region. This could be attributed to traditional practices such as wife inheritance which is rife in these regions. Wife inheritance is a widespread cultural practice in sub-Saharan Africa that increases the risk of HIV acquisition and transmission [29, 30]
Age was found to have a non-linear effect on both HIV and HSV-2. i.e. an inverted "U" shape. The likelihood of HIV infection among women increases with age up to about age 30 then reduces thereafter with increasing age. On the other hand the likelihood of HSV-2 infection increases with age up to about age 40 and then starts declining with age. These findings were consistent with other studies [31]. Spatial effects in the model account for unobserved variables that represent those variables that vary spatially. Identifying high prevalence areas and the relationship between HIV and HSV-2 can provide more insight that can be useful in coming up with campaigns and prevention strategies for specific regions. There was evidence of spatial variation of HIV and HSV-2 infection among counties. HIV prevalence was lowest in the North Eastern region with some significantly high prevalence in some parts of the Coastal, Central, Western and lake regions. HSV-2 prevelance was highest in the West and Lake regions, but generally high across the country. Identifying the effects of individual covariates on each region will help in informing region specific strategies to deal with HIV and HSV-2 prevalence.
The spatially varying coefficient model has a huge epidemiological implication. With limited resources such as funds, time and personnel, intervention strategies may be tailor made for specific regions instead of rolling out blanket intervention strategies. More emphasis for example can be put in delaying the age at first sex in those regions where the effect of age at first sex on HIV and HSV-2 was great etc. Areas where individuals engage in sexual activities with multiple partners can for example be targeted with intervention strategies tailored to either help these individuals stick to one partner or educate them on the use of protection rather than addressing issues that do not contribute much to the prevalence of HIV and HSV-2 in that particular area thereby wasting valuable resources.
This study used a full Bayesian approach to relax the stationarity assumption of the coefficients using the conditional autoregressive model [12]. The non-linear effects of age were modeled using the random walk model of order 2 [32], while the spatial effects and the spatially unstructured random effects in the model were modeled using a Gaussian Markov Random Field (GMRF) and a zero mean Gaussian process respectively. We determined that the effects of the covariates on HIV and HSV-2 prevalence vary across space while age had a non-linear effect on HIV and HSV-2 prevalence. The posterior distribution was obtained by updating the prior distribution with the observed data. Since our study was fully Bayesian, inference was made by sampling from this posterior distribution. Markov Chain Monte Carlo (MCMC) is the most common estimation approach to inference for latent Gaussian models, however the method is slow and performs poorly when applied to such models [22]. The Integrated Nested Laplace (INLA) criterion, a relatively new technique developed to circumvent these shortfalls was used instead [22]. The SVC model was found to be better than the stationary model on the account of DIC.
The covariates used in these study had full information. This was obtained by deleting all missing values. More accurate results may be obtained by incorporating the weights to account for these deletion a task impossible for this study as the weights were based on different administrative units (provincial) instead of counties. The models introduced in this study can be replicated in other studies with similar data. Further work could be conducted to get the effect of the particular categories of the covariates e.g. for marital status, the effect of divorce, or single status e.t.c on each county. A comparison of this analysis with the recent KAIS 2012 data would reveal how the effects of the covariates in each region have changed over time and if the intervention strategies put in place have helped. Other models such as the simultaneous autoregressive model can be used in place of the conditional autoregressive model to relax the stationarity assumption. Since the CAR assumes normality, this assumption can be relaxed or we may altogether use a non-parametric approach.
The authors confirm that all data underlying the findings are fully available without restriction. The data is held by the Kenya National Bureau of Statistics and freely available to the public but a request has to be sent to the Kenya National Bureau of Statistics. The link to access it is http://statistics.knbs.or.ke/nada/index.php/catalog/25.
Report on sexually transmitted Infections (STIs) [http://www.who.int/mediacentre/factsheets/fs110/en/].
Looker K, Garnett G, Schmid G. An estimate of the global prevalence and incidence of herpes simplex virus type 2 infection. Bull World Health Organ. 2008;86(10):805–12.
Weiss H. Epidemiology of herpes simplex virus type 2 infection in the developing world. National Center for Biotechnology Information. 2004;11:24A–35A.
UNAIDS. Report on global AIDS epidemic. Geneva, Switzeland: UNAIDS/WHO; 2013.
NASCOP. Ministry of Health, Kenya: Kenya AIDS Indicator Survey report. 2012.
Ghebremichael M, Larsen U, Painstil E. Association of Age at first sex with HIV-1, HSV-2, and other sexual transmitted infections among women in Northern Tanzania. National Center for Biotechnology Information. 2009;36(9):570–6.
Mishra V, Montana L, Neuman M. Spatial modeling of HIV prevalence in kenya. In: Demographic and Health Research. 2007.
Ngesa O, Mwambi H, Achia T. Bayesian spatial semi-parametric modeling of HIV variation in Kenya. PLoS One. 2014;9(7):e103299.
Hastie T, Tibshirani R. Genaralized additive models for medical research. Stat Methods Res. 1995;4:187.
Fotheringham S, Chris B, Martin C. Geographically weighted Regression: the analysis of spatially varying relationships: John Wiley & Sons; 2003.
Wheeler D, Waller L. Comparing spatially varying coefficient models: a case study examining violent crime rates and their relationships to alcohol outlets and illegal drug arrests. J Geogr Syst. 2009;11(1):1–22.
Assunçao RM. Space varying coefficient models for small area data. Environmetrics. 2003;14(5):453–73.
Assunçao R, Assunçao J, Lemos M. Induced technical change: a Bayesian spatial varying parameter model. In: Proceedings of XVI Latin American Meeting of the Econometric Society Catholic University of Peru: Lima: 1998. 1998.
Manda O, Leyland H. An empirical comparison of maximum likelihood and Bayesian estimation methods for multivariate disease mapping. S Afr Stat J. 2007;41(4):1–21.
Fahrmeir L, Tutz G. Multivariate Statistical Modelling based on Generalized Linear Models. 2nd ed. New York: Springer; 2001.
Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning. New York: Springer; 2001.
Eilers P, Marx B. Flexible smoothing with B-splines and penalties. Stat Sci. 1996;11(2):89–102.
Fahrmeir L, Knorr-Held L. Dynamic and semiparametric models. 1997.
Fahrmeir L, Lang S. Bayesian inference for generalized additive mixed models based on Markov random field priors. J R Stat Soc: Series C (Appl Stat) 2001;50(2):201–220.
Fahrmeir L, Wagenpfeil S. Smoothing hazard functions and time-varying effects in discrete duration and competing risks models. J Am Stat Assoc. 1996;91(436):1584–94.
Rue H, Martino S, Chopin N. Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations. J R Stat Soc Ser B (Stat Methodol). 2009;71(2):319–92.
Spiegelhalter D, Best N, Carlin B, Van-der-Linde A. Bayesian measures of model complexity and fit. J R Stat Soc Ser B (Stat Methodol). 2002;64:583–639.
CBS. Central Bureau of Statistics (CBS) [Kenya], Ministry of Health (MOH) [Kenya], and ORC Macro. 2004.
Clark S. Early marriage and HIV risks in sub‐Saharan Africa. Stud Fam Plan. 2004;35(3):149–60.
Cohen MS. Sexually transmitted diseases enhance HIV transmission: no longer a hypothesis. The Lancet 351. 1998:S5-S7.
Burgoyne AD, Drummond PD. Knowledge of HIV and AIDS in women in sub-Saharan Africa. Afr J Reprod Health. 2009;12(2):14–31.
TACAIDS. Ministry of Health, Tanzania; Tanzania HIV/AIDS indicator survey 2011-2012 report.
Government of Networks. Ministry of Health, Nairobi, Kenya, Government Printers: 1997 Sessional Paper No.4 of 1997 on AIDS in Kenya.
Amornkul PN, Vandenhoudt H, Nasokho P, Odhiambo F, Mwaengo D, Hightower A, Buvé A, Misore A, Vulule J, Vitek C. HIV prevalence and associated risk factors among individuals aged 13-34 years in Rural Western Kenya. PLoS One. 2009;4(7):e6470.
Johnson K, Way A. Risk factors for HIV infection in a national adult population: evidence from the 2003 Kenya Demographic and Health Survey. J Acquir Immune Defic Syndr. 2006;42(5):627-36.
Speckman PL, Sun D. Fully Bayesian spline smoothing and intrinsic autoregressive priors. Biometrika. 2003;90(2):289–302.
Lang S, Fronk EM, Fahrmeir L. Function estimation with locally adaptive dynamic models. Comput Stat. 2002;17(4):479–500.
Yue YR, Speckman PL, Sun D. Priors for Bayesian adaptive spline smoothing. Ann Inst Stat Math. 2012;64(3):577–613.
Lindgren F, Rue H. On the second‐order random walk model for irregular locations. Scand J Stat. 2008;35(4):691–700.
Rue H. Fast sampling of Gaussian Markov random fields. J R Stat Soc Ser B Stat Methodol. 2001:325–338.
Rue H, Held L. Gaussian Markov random fields: theory and applications: CRC Press; 2005.
Banerjee S, Carlin BP, Gelfand AE. Hierarchical modeling and analysis for spatial data: CRC Press; 2014.
Besag J. Spatial interaction and the statistical analysis of lattice systems. J R Stat Soc Ser B (Methodol). 1974:192–236.
Mardia K. Multi-dimensional multivariate Gaussian Markov random fields with application to image processing. J Multivar Anal. 1998;24(2):265–84.
Harville DA. Matrix algebra from a statistician's perspective, vol. 1: Springer; 1997.
We sincerely thank The Kenya National Bureau of Statistics (KNBS) for providing the data used in this study. We would like to thank the reviewers for their comments which greatly enhanced the quality of the paper, and most sincerely thank the editors for their patience, guidance and correspondences.
The authors would like to thank the University of Kwazulu-Natal for funding the work through EO's PHD work.
School of Mathematics, Statistics and Computer Science, University of KwaZulu -Natal, Private Bag X01, 3201, Pietermaritzburg, South Africa
Elphas Okango, Henry Mwambi & Oscar Ngesa
Mathematics and Informatics Department, Taita Taveta University College, P.O Box 635-80300, Voi, Kenya
Oscar Ngesa
Elphas Okango
Henry Mwambi
Correspondence to Elphas Okango.
Conceived and designed the experiments OE HM ON. Analyzed data: OE. Contributed to the writing of the manuscript: OE HM ON. All authors read and approved the final manuscript.
Additional file
Additional file 1: Text S1.
R-INLA codes used in the analysis. (R 12 kb)
The Random walk model
Random walk (RW) models can be used as priors to derive the discretized Bayesian smoothing spline estimator [32]. The Random walk was made spatially adaptive by introducing local smoothing parameters into the models [33, 34]. The random walk model of order 2 (RW2) for the Gaussian vector X = (x 1 , …, x n ) is constructed assuming independent second-order increments:
$$ {\varDelta}^2{x}_i={x}_i\mathit{\hbox{-}} 2{x}_{i+ 1}+{x}_{i+ 2}\sim N\left( 0,{\tau}^{\mathit{\hbox{-}} 1}\right) $$
The density of X is derived from its n-2 s-order decrements as:
$$ \pi \left(X\left|\tau \right.\right)\mu {\tau}^{\left(n\mathit{\hbox{-}} 2\right)/ 2} exp\left\{\mathit{\hbox{-}}\frac{\tau }{2}{\displaystyle \sum {\left({\varDelta}^2{x}_i\right)}^2}\right\}={\tau}^{\left(n\mathit{\hbox{-}} 2\right)/ 2} exp\left\{\mathit{\hbox{-}}\frac{1}{2}{X}^TQX\right\} $$
The term x i ‐ 2x i + 1 + x i + 2 can be interpreted as an estimate of the second-order derivative of a continuous time function x(t) at t = i using the values of x(t) at t = i, i + 1, i + 2 [35]. The RW2 model is quite flexible due to its invariance to addition of a linear trend, and also computationally convenient due to its Markov properties i.e.
π(x i |x − i ) = π(x i |x i − 2, x i + 1, x i + 2) for 2 < i < n − 2. RW2 is also a GMRF for which efficient numerical methods for sparse matrix in place of Markov chain Monte Carlo algorithms exists [36, 37].
The Bayesian Spatially Varying Coefficient Process (BSVCP)
The specification of the BSVCP is in a hierarchical manner. The first stage is to specify the distribution of the data conditional on unknown parameters, and the second stage is specifying these unknown parameters conditional on other parameters.
The SVCP model is:
$$ {y}_{ijk}\left|{p}_{ijk}\sim Bernoulli\left({p}_{ijk}\right)\right.. $$
$$ h\left({p}_{ijk}\right)={\boldsymbol{X}}^{\boldsymbol{T}}{\boldsymbol{\beta}}_{\boldsymbol{k}}+{\boldsymbol{W}}^{\boldsymbol{T}}{\boldsymbol{\gamma}}_{\boldsymbol{k}} $$
The prior distribution for the regression coefficients is given as:
$$ \left[\boldsymbol{\upgamma} \left|{\boldsymbol{\mu}}_{\boldsymbol{\upgamma}},{\displaystyle {\sum}_{\boldsymbol{\upgamma}}}\right.\right]=\boldsymbol{N}\left({\boldsymbol{1}}_{\boldsymbol{n}\times \boldsymbol{1}}\otimes {\boldsymbol{\mu}}_{\boldsymbol{\upgamma}},{\displaystyle {\sum}_{\boldsymbol{\upgamma}}}\right) $$
\( {\boldsymbol{\mu}}_{\boldsymbol{\gamma}}={\left({\boldsymbol{\mu}}_{{\boldsymbol{\gamma}}_{\boldsymbol{0}}},{\boldsymbol{\mu}}_{{\boldsymbol{\gamma}}_{\boldsymbol{1}}},\dots, {\boldsymbol{\mu}}_{{\boldsymbol{\gamma}}_{\boldsymbol{p}}}\right)}^{\boldsymbol{T}} \) is the vector of means of the regression coefficients corresponding to each of P explanatory variables. Spatial dependence is taken into account through the covariance ∑ γ . This is achieved by specifying the priors for γ ' s as an aerial unit model e.g. the conditional autoregressive model (CAR) or the spatial autoregressive model (SAR) [38] or a geostatistical approach, where a parametric distance-based covariance function is specified [12]. Our focus is on the aerial unit model and in particular we assume the CAR priors for the γ ' s.
Conditional autoregressive (CAR) Model
Consider a vector ϕ = (ϕ 1, …, ϕ p )T of p components that follows a multivariate Gaussian distribution with mean 0 and B as the inverse of the dispersion matrix, so that B is a p × p symmetric and positive definite matrix. The density for ϕ is given by:
$$ p\left(\phi \right)={\left(2\pi \right)}^{\frac{p}{2}}{\left|B\right|}^{\frac{1}{2}} \exp \left(-\frac{1}{2}{\phi}^TB\phi \right) $$
For the CAR model, the conditional distribution of a particular component given the remaining components is considered. In terms of the elements of the matrix B = (b ij ), from the normal theory, ϕ i has a full conditional distribution;
$$ p\left({\phi}_i\left|{\phi}_{-i}\right.\right)\propto \exp \left(-\frac{1}{2}{b}_{ii}{\left({\phi}_i-{\displaystyle \sum_{j\ne i}\frac{-{b}_{ij}}{b_{ii}}}{\phi}_j\right)}^2\right) $$
which is normally distributed i.e. \( {\phi}_i\left|\Big|,{\phi}_{-i}\right.\sim N\left({\displaystyle \sum_{j\ne i}\frac{-{b}_{ij}}{b_{ii}}{\phi}_j,\frac{1}{b_{ii}}}\right) \) [39]
Mardia [40] showed the conditions under which the full conditional distributions specified above uniquely define a full joint distribution.
We let \( {c}_{ij}=\frac{-{b}_{ij}}{b_{ii}} \) and \( {b}_{ii}=\frac{1}{\sigma_i^2} \) and form a matrix C with \( {c}_{ii}=0\; and\;{c}_{ij}=-\frac{-{b}_{ij}}{b_{ii}} \), and another matrix M = Diag(σ 2 i ) and M − 1 = Diag(b ii )). The inverse of the dispersion matrix, B is then related to C and M as:
$$ B={M}^{-1}\left(I-C\right). $$
I is the identity matrix and the joint distribution of ϕ is MVN(0, M − 1(I − C)). C and M must be modeled properly to ensure the symmetry of B, and this is achieved by conditioning c ij σ 2 j = c ji σ 2 i . The C matrix is also specified to show relationship between neighbors. The elements of matrix C are defined as \( {c}_{ii}=0\; and\;{c}_{ij}=\frac{1}{m_i} \) [39], if j is adjacent to i and zero otherwise. This is a commonly used adjacency matrix for lattice data. m i represent the number of neighbors of region i. Define another matrix W to hold the adjacency structure, where, w ij = 1 if region i and region j are neighbors and zero otherwise. Then, \( C={W}_s\; where\;{W}_s= diag\left(\frac{1}{m_i}\right)W \). i.e. W s is a scaled adjacency matrix, the i th row being scaled by the number of neighbors of region i. The above expressions for the elements of C and M translate to the following specifications for inverse covariance matrix B: b ii = λm i , and b ij = − λ if j is adjacent to i and 0 otherwise. Thus B is symmetric and it can be expressed as B = λ(Diag(m i ) − C). The expression M − 1(I − C) has a positive definite structure for the conditional distribution to give rise to a valid probability distribution function (pdf). The definition of the adjacency matrix above leads to an improper joint pdf. This is overcome by introducing a parameter α into the precision matrix B, to give:
$$ B={M}^{-1}\left(I-\alpha C\right) $$
If |α| < 1 then the matrix M − 1(I − αC) is diagonally dominant and symmetric. Symmetric and diagonally dominant matrices are positive definite [41].
Okango, E., Mwambi, H. & Ngesa, O. Spatial modeling of HIV and HSV-2 among women in Kenya with spatially varying coefficients. BMC Public Health 16, 355 (2016). https://doi.org/10.1186/s12889-016-3022-0
Deviance Information Criterion
Random Walk Model
North Eastern Region
Vary Coefficient Model
|
CommonCrawl
|
1 Deforming spaces vs. deforming maps
2 Properties and examples
3 Types of connectedness
4 Homotopy theory
5 Homotopy equivalence
6 Homotopy in calculus
7 Is there a constant homotopy between constant maps?
8 Homotopy equivalence via cell collapses
9 Invariance of homology under cell collapses
Deforming spaces vs. deforming maps
What do the objects below have in common?
The answer we have been giving is: they all have one hole. However, there is a profound reason why they must all have one hole. These space are homeomorphic!
The reasoning, still not fully justified, is transparent: $$X\approx Y \Rightarrow H(X)\cong H(Y).$$
Now, let's choose a broader collection. This time the spaces aren't homeomorphic, but do they have anything in common?
The answer again is: they all have one hole. But, again, maybe there is a profound reason why they all have one hole.
Is there a relation between two topological spaces, besides homeomorphism, that ensures that they would have the same count of topological features? There is indeed an equivalence relation that produces the same result for a much broader class of spaces: $$X\sim Y \Rightarrow H(X)\cong H(Y).$$
Informally, we say that one space can be "deformed into" the other.
Let's try to understand the actual mathematics behind these words, with the help of this juxtaposition:
the cylinder $X={\bf S}^1 \times {\bf I}$ vs. the circle $Y={\bf S}^1$.
The first meaning of the word "deformation" is a transformation that is gradual. Unlike a homeomorphism, which is instantaneous, this transformation is stretched over time through a continuum of intermediate states:
Let's take a look at the natural maps between these two spaces. The first is the projection $p:X\to Y$ of the cylinder along its axis and the second is the embedding $e:Y\to X$ of the circle into the cylinder as one of its boundary circles:
Both preserve the hole even though neither one is a homeomorphism.
Let's assume that $X$ is the unit cylinder in ${\bf R}^3$ and $Y$ is the unit circle in ${\bf R}^2 \subset {\bf R}^3$. Then the formulas are simple: $$p(x,y,z)= (x,y),\ e(x,y)=(x,y,0).$$ Let's consider the compositions of these maps:
$pe:Y\to Y$ is the identity $\operatorname{Id}_Y$;
$ep:X\to X$ is the collapse, or self-projection, of the cylinder on its bottom edge.
The latter, even though not the identity, is related to $\operatorname{Id}_X$. This relation is seen through the continuum of maps that connects the two maps; we choose map $h_t:X\to X$ to shrink the cylinder -- within itself -- to height $t\in [0,1]$. The images of these maps are shown:
This is our main idea: we should interpret deformations of spaces via deformations of maps.
The formulas for these maps are simple: $$h_t(x,y,z)=(x,y,tz),\ t\in [0,1].$$ And, it is easy to confirm that we have what we need: $$h_0=ep,\ h_1=\operatorname{Id}_X.$$ Looking closer at these maps, we realize that
$h_t$ is continuous for each $t$, but also
$h_t$ are continuous, as a whole, over $t$.
Therefore, the transformation can be captured by a single map $$H(t,x,y,z)=h_t(x,y,z),$$ continuous with respect to the product topology.
The precise interpretation of this analysis is given by the two definitions below.
Definition. Two maps $f_0, f_1: X \to Y$ are called homotopic if there is a map $$F: [0,1]\times X \to Y$$ such that $$F(0,x) = f_0(x),\ F(1,x) = f_1(x),$$ for all $x\in X$. Map $F$ is called a homotopy between $f_0$ and $f_1$. The relation is denoted by: $$F:f_0 \simeq f_1, $$ or simply: $$f_0 \simeq f_1 .$$
Definition. Suppose that $X$ and $Y$ are topological spaces and $f: X \to Y,\ g: X \to Y$ are maps, and $fg$ and $gf$ are homotopic to the identity maps on $Y$ and $X$ respectively: $$fg \simeq \operatorname{Id}_{Y},\ gf \simeq \operatorname{Id}_{X},$$ then $f$ is called a homotopy equivalence. In this case, $X$ and $Y$ are called homotopy equivalent.
We set the latter aside for now and study properties of homotopy.
Properties and examples
It is often hard to visualize a homotopy via its graph, unless the dimensions of the spaces are low. For example, below we have the graph of a homotopy (blue) between a constant map (orange) and the identity (purple) of an interval: $$c,\operatorname{Id}:[a,b]\to [a,b].$$
The diagram on the left demonstrates that $c \simeq \operatorname{Id}$ by showing the homotopy as a surface that "connects" these two maps.
On the right, we also provide a more common way to illustrate a homotopy -- by plotting the "intermediate" maps. Those are, of course, simply the vertical cross-sections of this surface. The homotopy above is piece-wise linear and the one below is differentiable:
Theorem. Homotopy is an equivalence relation. For two given topological spaces $X$ and $Y$, the space $C(X,Y)$ of maps from $X$ to $Y$ is partitioned into equivalence classes: $$[f]:=\{g:X\to Y,\ g \simeq f\}.$$
Proof. 1. Reflexivity: do nothing. For $F:f\simeq f$, choose $H(t,x)=f(x)$.
2. Symmetry: reverse time. Given $H:f\simeq g$, choose $F(t,x)=H(1-t,x)$ for $F:g\simeq f$.
3. Transitivity: ? One needs to figure out this: $$F:f \simeq g,\ G: g\simeq h \Rightarrow H=?: f \simeq h.$$ We carry out these two processes consecutively but, since it has to be within the same time frame, twice as fast. We define: $$H(t,x):=\begin{cases} F(2t,x) & \text{ if } 0\le t\le 1/2 \\ G(2t-1,x) & \text{ if } 1/2 \le t\le 1.\\ \end{cases}$$ $\blacksquare$
Referring to the proof, these three new homotopies are called respectively:
1. a constant homotopy,
2. the inverse of a homotopy, and
3. the concatenation of two homotopies.
They are illustrated below:
Exercise. Provide the missing details of the proof.
Notation. We will use this notation for this quotient set: $$[X,Y]:=C(X,Y) / _{\simeq}.$$
Sometimes things are simple. Any function $f:X \to {\bf R}$ can be "transformed" into any other. In fact, a simpler idea is to push the graph of a given function $f$ to the $x$-axis:
We simply put: $$f_t(x):=tf(x).$$ Then, we use the fact that an equivalence relation creates a partition into equivalence classes, to conclude that $$[{\bf R},{\bf R}] = \{[0]\}.$$ In other words, all maps are homotopic.
We can still have an explicit formula for a homotopy between two functions $f,g$: $$F(t,x) = (1-t)f(x) + tg(x).$$
Exercise. Prove the continuity of $F$.
The same exact formula describes how one "gradually" slides $f(x):X\to Y$ toward $g(x)$ if $X$ is any topological space and $Y$ is a convex subset of ${\bf R}^n$, which guarantees that all convex combinations make sense:
This is called the straight-line homotopy.
A more general setting for this construction is the following. A vector space $V$ over ${\bf R}$ is called a topological vector space if it is equipped with a topology with respect to which its vector operations are continuous:
addition: $V \times V \to V$, and
scalar multiplication: ${\bf R} \times V \to V$.
Exercise. Prove that these are topological vector spaces: ${\bf R}^n$, $C[a,b]$. Hint: don't forget about the product topology.
Proposition. If $Y$ is a convex subset of a topological vector space, then all maps to $Y$ are homotopic: $\#[X,Y]=1.$
Exercise. Prove the proposition. Hint:
What if $Y$ is the disjoint union of $m$ convex sets in ${\bf R}^n$? Will we have: $$\#[X,Y]=m?$$ Yes, but only if $X$ is path-connected!
Exercise. Prove this statement and demonstrate that it fails without the path-connectedness requirement.
Exercise. Prove that if $Y$ isn't path-connected then $\#[X,Y] > 1$. Hint: consider constant maps:
Exercise. What if $X$ has $k$ path-components?
Next, what can we say about the homotopy classes of maps from the circle to the ring, or another circle?
The circles can be moved and stretched as if they were rubber bands:
Later we will classify these maps according to their homotopy classes.
Sometimes the homotopy is easy to specify. For example, one can gradually stretch this circle:
To get from $f_0$ (the smaller circle) to $f_0$ (the large circle), one goes through intermediate steps -- circles indexed by values between $0$ and $1$: $$f_0, f_{1/4}, f_{1/2}, f_{3/4}, f_1.$$ Generally: $$f_t(x) := (1+t)f_0(x).$$ It is clear that the right-hand side continuously depends on both $t$ and $x$.
Exercise. Suppose that two maps $f,g:X\to {\bf S}^1$ never take antipodal to each other points: $f(x)\ne -g(x),\ \forall x\in X$. Prove that $f,g$ are homotopic.
To summarize,
homotopy is a continuous transformation of a continuous function.
Homotopies help us tame the monster of a space-filling curve.
Theorem. Every map $f:[0,1]\to S$, where $S$ is a surface, is homotopic to a map that isn't onto.
Proof. Suppose $Q$ is the point on $S$ that we want to avoid and $f(0)\ne Q,\ f(1)\ne Q$. Let $D$ be a small disk neighborhood around $Q$. Then $f^{-1}(D)\subset (0,1)$ is open, and, therefore, is the disjoint union of open intervals. Pick one of them, $(a,b)$. Then we have: $$\begin{array}{lll} A:=f(a) &\in \partial D,\\ B:=f(b) &\in \partial D,\\ f\big( (a,b) \big) &\subset D. \end{array}$$
Choose $$C \in \partial D \setminus \{A,B\}.$$ Now, $f([a,b])$ is compact and, therefore, closed. Then there is a neighborhood of $C$ disjoint with the path $f([a,b])$. We construct a homotopy that pushes this path away from $C$ towards the opposite side of $\partial D$. If we push far enough, the point $Q\in D$ will no longer lie on the path.
This homotopy is "relative": all the changes to $f$ are limited to the values $x\in (a,b)$. This allows us to build such a homotopy independently for every interval $(a,b)$ in the decomposition of $f^{-1}(D)$. Next, the fact that the condition $f(a)=A,\ f(b)=B$ is preserved under this homotopy allows us to stitch these "local" homotopies together one by one. The resulting homotopy $F$ pushes every piece of the path close to $Q$ away from $Q$.
This construction only works when there are finitely many such open intervals.
Exercise. Show that the proof, as written, fails when we attempt to stitch together infinitely many "local" homotopies to build $F:[0,1]\times (0,1)\to S$. Hint: consider,
$f^{-1}(D)=\cup_n (a_n,b_n),\ f(a_n)=A_n,\ f(b_n)=B_n,$
$A_n\to A,\ B_n\to B=A,\ F(1,t_n)=-A,\ t_n\in (a_n,b_n)$.
Exercise. Fix the proof. Hint: choose only the intervals on which the path actually passes through $Q$.
Exercise. Provide an explicit formula for a "local" homotopy.
Exercise. Generalize the theorem as much as you can.
Types of connectedness
Comparing the familiar illustrations of path-connectedness on the left and a homotopy between two constant maps reveals that the former is a special case of the latter:
Indeed, suppose in $Y$ there is a path between two points $a,b$ as a function $p:[0,1] \to Y$ with $p(0)=a,p(1)=b$. Then the function $H: [0,1] \times X$ given by $H(t,x)=p(t)$ is a homotopy between these two constant maps.
Exercise. Sketch the graph of a homotopy between two constant maps defined on $[a,b]$.
To summarize, in a path-connected space all constant maps are homotopic.
Let's carry out a similar analysis for simple-connectedness. This condition was defined informally as every closed path can be deformed to a point.
By now we know what "deformed" means. Let's translate: $$\begin{array}{llll} \text{ Informally: } & \longrightarrow & \text{ Formally: }\\ \text{"A closed path in $Y$...} & \longrightarrow & \text{"A map } f: {\bf S}^1 \rightarrow Y ...\\ \text{can be deformed to...} & \longrightarrow & \text{is homotopic to...}\\ \text{a point."} & \longrightarrow & \text{a constant function."} \end{array}$$
Exercise. Indicate which of the following spaces are simply connected:
disk: $\{(x,y) \in {\bf R}^2: x^2 + y^2 < 1 \}$;
circle with pinched center: $\{(x,y) \in {\bf R}^2: 0 < x^2 + y^2 <1 \}$;
ball with pinched center: $\{(x,y,z) \in {\bf R}^3: 0 < x^2 + y^2 + z^2 < 1 \}$;
ring: $\{(x,y) \in {\bf R}^2: 1/2 <x^2 + y^2 <1 \}$;
thick sphere: $\{(x,y,z) \in {\bf R}^3: 1/2 < x^2 + y^2 + z^2 < 1 \}$;
the doughnut (solid torus): ${\bf T}^2 \times {\bf I}$.
Recall that the plane ${\bf R}^2$ is simply connected because every loop can be deformed to a point via a straight line homotopy:
More general is the following:
Theorem. Any convex subset of ${\bf R}^n$ is simply connected.
And so are all spaces homeomorphic to convex sets. There others too.
Theorem. The $n$-sphere ${\bf S}^n, \ n\ge 2$, is simply connected.
Proof. Idea for $n=2$. We assume that the loop $p$ isn't space-filling, so that there is a point $Q$ in its complement, $Q \in {\bf S}^2 \setminus \operatorname{Im}p.$ But ${\bf S}^2\setminus \{Q\}$ is homeomorphic to the plane! Since it also contains the loop, $p$ can be contracted to a point by a homotopy. $\blacksquare$
Exercise. Provide an explicit formula for the homotopy.
It is often more challenging to prove that a space is not simply connected. We will accept the following without proof (see Kinsey, Topology of Surfaces, p. 203).
Theorem. The circle ${\bf S}^1$ is not simply connected.
Corollary. The plane with a point taken out, ${\bf R}^2 \setminus \{(0,0) \}$, is not simply connected.
Exercise. Prove that corollary.
Simple-connectedness helps us classify manifolds.
Theorem (Poincaré Conjecture). If $M$ is a simply connected compact path-connected $3$-manifold without boundary, then $M$ is homeomorphic to the $3$-sphere.
The Poincare sphere serves as an example that shows that simple-connectedness can't be replaced with $H_1=0$ for the theorem to hold.
Exercise. Prove that the sphere with a point taken out, ${\bf S}^2\setminus \{ N \}$, is simply connected.
Exercise. Prove that the sphere with two points taken out, ${\bf S}^2\setminus \{ N, S \}$, is not simply connected.
Exercise. Is the $3$-space with a line, such as the $z$-axis, taken out simply connected? Hint: Imagine looking down at the $xy$-plane:
Exercise. Is the $3$-space with a point taken out, ${\bf R}^3 \setminus \{(0,0,0)\}$, simply connected?
This analysis demonstrates that we can study the topology of a space $X$ indirectly, by studying the set of homotopy classes of maps $[Q,X]$ from a collection of wisely chosen topological spaces $Q$ to $X$. Typically, we use the $n$-spheres $Q={\bf S}^n,\ n=0,1,2,...$. These sets are called the homotopy groups and denoted by $$\pi_n (X):=[ {\bf S}^n, X].$$ Then, similarly to the homology groups $H_n(X)$, the sets $\pi_n (X)$ capture the topological features of $X$:
$\pi_0(X)$ for cuts,
$\pi_1(X)$ for tunnels,
$\pi_2(X)$ for voids, etc.
In particular, we showed above that $\pi _1({\bf S}^n)$ is trivial for $n\ge 2$. These issues will be considered later in more detail.
Homotopy theory
We will seek a structure for the set of homotopy classes of maps.
First, what happens to two homotopic maps when they are composed with other maps? Are the compositions also homotopic?
In particular, if $f\simeq d:X\to Y$ and $q:X\to S$ is a map, are $qf,qg:x\to S$ homotopic too? As the picture below suggests, the answer is Yes:
The picture illustrates the right upper part of the following diagram:
$$ \newcommand{\ral}[1]{\!\!\!\!\!\xrightarrow{\quad\quad\quad#1\quad\quad\quad}\!\!\!\!\!} \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccccccc} X & \ra{f\simeq g} & Y \\ \ \ua{p} & \searrow & \ \da{q} \\ R & \ra{pfq\simeq pgq} & S \end{array}$$
The result below refers to this diagram.
Theorem. Homotopy, as an equivalence of maps, is preserved under compositions. First, $$F:f\simeq g \Rightarrow H:qf\simeq qg,$$ where $$H(t,x) := qF(t,x),\ x \in X, t\in [0,1];$$ second, $$F:f\simeq g \Rightarrow H:fp\simeq gp,$$ where $$H(t,z) := F(t,p(z)),\ z \in R, t\in [0,1].$$
In either case, the new homotopy is the composition of the old homotopy and the new map.
Exercise. Finish the proof.
So, a map takes -- via compositions -- every pair of homotopic maps to a pair of maps that is also homotopic. Therefore, the map takes every homotopy class of maps to another homotopy class. Therefore, for any topological spaces $X,Y,Q$ and any map $$h:X\to Y,$$ the quotient map $$[h]:[Q,X] \to [Q,Y]$$ given by $$[h]([f]):=[hf]$$ is well defined.
In particular, $h:X\to Y$ generates a function on the homotopy groups: $$[h_n]:\pi_n(X) \to \pi_n(Y),\ n=0,1,2,...$$ The outcome is very similar to the way $h$ generates the homology maps: $$[h_n]:H_n(X) \to H_n(Y),\ n=0,1,2,...$$ as quotients of the maps of chains. However, in comparison, where is the algebra in these homotopy groups?
We need to define an algebraic operation on the homotopy classes. For example, given two homotopy classes of maps $f,g:{\bf S}^1 \to X$, i.e., two loops, what is the meaning of their "sum", or their "product", or some other combination? Under the homology theory approach, we'd deal with a formal sum of the loops. But such a "double" loop can't be a real loop, i.e., a map $f\cdot g:{\bf S}^1 \to X$.
Unless, the two loops have a point in common! Indeed, if $f(c)= g(c)$ for some $c\in {\bf S}^1$, then one can "concatenate" $f,g$ to create a new loop.
Exercise. Provide a formula for this construction.
Note that we are using the multiplicative notation for this group to be because, in general, it's not abelian.
Exercise. Give an example of two loops with $a\cdot b \not\simeq b \cdot a$.
It's similar for $n=2$. One can see these two intersecting (parametrized) spheres as one:
To make sure that this algebra works every time, we make a switch to the following:
Definition. Every topological space $X$ comes with a selected base point $x_0\in X$, denoted by $(X,x_0)$. They are called pointed spaces. Further, every map $f:(X,x_0)\to (Y,y_0)$ between two pointed spaces takes the base point to the base point: $f(x_0)=y_0$. These are called pointed maps.
Exercise. Prove that pointed spaces and pointed maps form a category.
This is, of course, just a special case of maps of pairs.
Now, the concatenation of two pointed maps $f,g:({\bf S}^n , u)\to (X,x_0)$ always makes sense as another pointed map $f\cdot g:({\bf S}^n , u)\to (X,x_0)$.
What about their homotopy classes? They are supposed to be the element of the homotopy groups. As usual, the product of the quotients is defined as the quotient of the product: $[f] \cdot [g] := [f\cdot g]$ for $f,g\in \pi_n(X,x_0)$. We will accept without proof that this operation is well defined and satisfies the axioms of group. The classes are with respect to the pointed homotopies, i.e., ones that remain fixed for the base points, as follows. Given two pointed maps $f_0, f_1: (X,x_0)\to (Y,y_0)$, a pointed homotopy is a homotopy $F: [0,1]\times X \to Y$ of $f_0, f_1$ which, in addition, is constant at $x_0$: for all $t\in [0,1]$ we have $$F(t,x_0) = f_0(x_0) = f_1(x_0).$$
Exercise. Prove that any pointed loop on a surface is homotopic to a loop that isn't onto. Derive that $\pi_1({\bf S}^2)=0$.
That's a group structure for the set of homotopy classes. Now a topology structure.
Recall that we have two interpretations of a homotopy $F$ between $f,g:X\to Y$. First, it's a continuous function ${\bf I}\times X \to Y$: $$F\in C({\bf I}\times X,Y).$$ Second, it's a continuously parametrized (by ${\bf I}$) collection of continuous functions $X\to Y$: $$F\in C({\bf I},C(X,Y)).$$ We conclude that $$C({\bf I}\times X,Y)=C({\bf I},C(X,Y)).$$ This is a particular case of the continuous version of the exponential identity of functions, as follows.
Proposition. For polyhedra $A,B,C$ we have $$C(A\times B,C)=C(A,C(B,C)).$$
Exercise. Prove the proposition.
We can take this one step further if the set of continuous functions $C(X,Y)$ is equipped with a topology. We can argue that this isn't just two sets with a bijection between them but a homeomorphism: $$C(A\times B,C)\approx C(A,C(B,C)).$$
Exercise. Suggest an appropriate choice of topology for this set. Hint: start with $A=B=C={\bf I}$.
From this point of view, homotopy is a path between two maps in the function space.
Exercise. What path-connected function spaces do you know?
Homotopy equivalence
Let's review what we've come up with in our quest for a relation between topological spaces that is "looser" than topological equivalence but still capable of detecting the topological features that we have been studying.
Spaces $X$ and $Y$ are called homotopy equivalent, or are of the same homotopy type, $X\simeq Y$, if there are maps $$f: X \to Y,$$ $$g: Y \to X$$ such that $$fg \simeq \operatorname{Id}_{Y},$$ $$gf \simeq \operatorname{Id}_{X}.$$ In that case, the notation is : $$f:X\simeq Y,\ g:Y\simeq X,$$ or simply: $$X\simeq Y.$$
Homotopy equivalence of two spaces is commonly described as a "deformation" and illustrated with a sequence of images that transforms one into the other:
Warning: in the common example above, the fact that the two spaces are also homeomorphic is irrelevant and may lead to confusion. To clarify, these as transformations preformed over a period of time:
topological equivalence: continuous over space, incremental and reversible over time;
homotopy equivalence: continuous over space, continuous and reversible over time.
Some of the examples of homotopies above also suggest examples of homotopy equivalent spaces.
For example, since all maps are homotopic here, the $n$-ball is homotopy equivalent to the point: $${\bf B}^n\simeq \{ 0\}.$$ Indeed, we can just contract the ball to its center. Note here that, if done incrementally, this contraction becomes a collapse and can't be reversed.
Exercise. Prove that any space homeomorphic to a convex subset of a Euclidean space is homotopy equivalent to a point.
Definition. A topological space $X$ is called contractible if $X$ is homotopy equivalent to a point, i.e., $X\simeq \{x_0\}$.
Example. A less obvious example of a contractible space is "the two-room house". Here we have two rooms each taking the whole floor. There is access to the first floor through a tube from the top and to the second floor from the bottom:
These are the steps of the deformation:
1. we expand the entries into the building at the top and bottom to the size of the whole circle, which turns the two tubes into funnels;
2. then we expand the entries into the rooms from the funnels to the half of the whole circle creating a sloped ellipse;
3. finally, we contract the walls and we are left with only the ellipse.
Exercise. Prove that the dunce hat is contractible:
Further, our intuition suggests that "thickening" of a topological space doesn't introduce new topological features. Indeed, we have the following.
Theorem. For any topological space $Y$, $$Y\times {\bf I} \simeq Y.$$
Proof. The plan of the proof comes from our analysis of circle vs. cylinder. Let $$X={\bf I} \times Y.$$ Consider the two natural maps. The first is the projection $$p:X\to Y,\ p(t,x)=x,$$ of $X$ onto $Y$ as its "bottom" and the second is the embedding $$e:Y\to X, \ e(x)=(0,x)$$ of $Y$ into $X$ as its "bottom".
Let's consider their compositions:
$pe:Y\to Y=\operatorname{Id}_Y$;
$ep:X\to X$ is the collapse:
$$ep(t,x)=(0,x).$$ For the latter, we define the homotopy $$H:[0,1]\times [0,1]\times Y \to [0,1]\times Y$$ by $$H(t,s,x):=(ts,x).$$ It is easy to confirm that: $$H(0,\cdot)=ep, \ H(1,\cdot)=\operatorname{Id}_X.$$
Next, we prove that this is a continuous map with respect to the product topology. Suppose $U$ is an element of the standard basis of the product topology of $[0,1]\times Y$, i.e., $U=(a,b)\times V$, where $a < b$ and $V$ is open in $Y$. Then $$\begin{array}{lllllll} H^{-1}(U)&=H^{-1}((a,b)\times V) \\ &= \{ (t,s)\in [0,1]\times [0,1]: a < ts < b \} \times V \end{array}$$ The first component of this set is a region between two hyperbolas $s=a/t,s=b/t$. It is open and so is $H^{-1}(U)$.
The main properties are below.
The first two are obvious:
Theorem.
Homotopy invariance is a topological property.
A homeomorphism is a homotopy equivalence.
Later we will show the following:
Homology groups are preserved under homotopy equivalence.
Theorem. Homotopy equivalence is an equivalence relation for topological spaces.
Proof. 1. Reflexivity: $f,g:X\simeq X$ with $f=g=\operatorname{Id}_X$.
2. Symmetry: if $f,g:X\simeq Y$ then $g,f:Y\simeq X$.
3. Transitivity: if $f,g:X\simeq Y$ and $p,q:Y\simeq Z$ then $pf,gq:X\simeq Z$. Indeed: $$(pf)(gq)=p(fg)q \simeq p \operatorname{Id}_X q = pq \simeq \operatorname{Id}_Z,$$ $$(gq)(pf)=g(qp)f \simeq g \operatorname{Id}_Z f = gf \simeq \operatorname{Id}_X,$$ by the theorem from the last subsection.
Once again, homotopy helps us classify manifolds.
Theorem (Generalized Poincaré Conjecture). Every compact $n$-manifold without boundary which is homotopy equivalent to the $n$-sphere is homeomorphic to the $n$-sphere.
Exercise. (a) Prove that the above spaces are homotopy equivalent but not homeomorphic. (b) Consider other versions of the square with a handle and find out whether they are homeomorphic and/or homotopy equivalent. (c) What about the Mobius band with a handle? (d) What about the sphere with a handle? Hint: don't keep the sphere in $3$-space.
Exercise. Prove that ${\bf S}^1 \times {\bf S}^1$ and ${\bf S}^3 \vee {\bf S}^1$ are not homotopy equivalent.
Exercise. Show that ${\bf S}^2 \vee {\bf S}^1 \simeq {\bf S}^2 \cup A$, where $A$ is the line segment from the north to south pole.
Exercise. These spaces are homotopy equivalent to some familiar ones: (a) ${\bf S}^2 \cup A$, where $A$ is disk bounded by the equator, (b) ${\bf T}^2 \cup D_1 \cup D_2$, where $D_1$ is a disk bounded by the inner equator and $D_2$ is a disk bounded by a meridian.
Exercise. With amoeba-like abilities, this person can unlink his fingers without unlocking them. What will happen to the shirt?
Reminder: Homotopy is a relation between maps while homotopy equivalence is a relation between spaces.
Homotopy in calculus
The importance of these concepts can be seen in calculus, as follows:
Recall a vector field in the plane is a function $V: {\bf R}^2 \to {\bf R}^2$. It is called conservative on a region $D$ if it is the gradient of a scalar function: $$V = \nabla f.$$ Such a vector field may represent the vertical velocity of a flow on a surface ($z=f(x,y)$) under gravity or a force of a physical systems in which energy is conserved.
We know the following:
Theorem. Suppose we have a vector field $V = (P,Q)$ defined on an open set $D \subset {\bf R}^2$. Suppose $V$ is irrotational: $$P_y = Q_x.$$ Then $V$ is conservative provided $D$ is simply connected.
It is then easy to prove then that the line integral along a closed path is $0$.
But what if the region isn't simply connected? The theorem doesn't apply anymore but, with the tools of this section, we can salvage a lot.
Theorem. Suppose we have an irrotational vector field $V = (P,Q)$ defined on an open set $D \subset {\bf R}^2$. Then the line integral along any closed path homotopic to a constant, called null-homotopic, is $0$.
The further conclusion is that line integrals are path-independent, i.e., the choice of integration path from a point to another does not affect the integral.
The issue of path-independence is related to the following question: What if we can get from point $a$ to point $b$ in two "topologically different" ways?
In other words, there are several homotopy classes of paths from $a$ to $b$ relative to the end-points.
Exercise. Prove from the theorem that the line integrals are constant within each of these homotopy classes.
Recall next for any continuous vector field in ${\bf R}^n$, the forward propagation map $$Q_t:{\bf R}^n\to {\bf R}^n$$ is given by $$Q_t(a)=a+tV(a).$$ Of course, any restriction of $Q_t\Big|_{R}, \ R\subset {\bf R}^n$, is also continuous. Suppose $R$ is given and we'll use $Q_t$ for this restriction.
Now we observe that it is continuous with respect to either of the variables.
We realize that this is a homotopy!
But not just any; we have: $$Q_0(a)=a, \ \forall a \in R.$$ In other words, this is the inclusion $i_R:R\hookrightarrow {\bf R}^n$, $$Q_0=i_R.$$
Then we have the following.
Theorem. The forward propagation map $Q_t:R \to {\bf R}^n, \ t>0,$ generated by a continuous vector field is homotopic to the inclusion.
Corollary. Suppose the forward propagation map $Q_t:R \to S$ is well-defined for some $S\subset {\bf R}^n$, and some $t>0$. Suppose also that $R$ is a deformation retract of $S$. Then we have: $$rQ_t \simeq \operatorname{Id}_R,$$ where $r:S\to R$ is the retraction.
Of course, when $S=R$, we have simply $$Q_t \simeq \operatorname{Id}_R.$$
Therefore, as we shall see later, the possible behaviors of dynamical systems defined on a space is restricted by its topology.
Exercise. Prove the corollary.
Theorem (Fundamental Theorem of Algebra). Every non-constant (complex) polynomial has a root.
Proof. Choose the polynomial $p$ of degree $n$ to have the leading coefficient $1$. Suppose $p(z)\ne 0$ for all $z\in {\bf C}$. Define a map, for each $t>0$, $$p_t:{\bf S}^1 \to {\bf S}^1,\quad p_t(z):=\frac{p(tz)}{|p(tz)|}.$$ Then $p_t\simeq p_0$ for all $t>0$ with $p_0$ a constant map. This contradicts the fact that $p_t \simeq z^n$ for large enough $t$. $\blacksquare$
Exercise. Provide the details of the proof.
Is there a constant homotopy between constant maps?
We know that path-connectedness implies that every two constant maps are homotopic and homotopic via a "constant" homotopy, i.e., one with every intermediate function $f_t$ constant. What about the converse?
Suppose $f_0,f_1:X\to Y$ are two constant maps. If they are homotopic, does this mean that there is a constant homotopy between them?
You can fit the short version of the solution, if you know it, in this little box: $$[\quad\quad\quad\quad\quad]$$
However, what if we don't see the solution yet? Let's try to imagine how we would discover the solution following the idea: "Teach a man to fish..."
Let's first re-write the problem in a more concrete way:
given $f_0,f_1:X\to Y$, that satisfy
$f_0(x)=c\in Y,\ f_1(x)=d\in Y$ for all $x\in X$;
also given a homotopy $F:f_0\simeq f_1$, where $F:[0,1]\times X\to Y$, with $F(t,\cdot)=f_t$.
Is there a homotopy $G:f_0\simeq f_1$, where $G:[0,1]\times X\to Y$, with $G(0,\cdot)=f_0,G(1,\cdot)=f_1$ and $G(t,x)=g(t)$ for all $t,x$?
We will attempt to simplify or even "trivialize" the set-up until the solution becomes obvious, but not too obvious.
The simplest setup appears to be a map from an interval to an interval. Below, we show a constant, $G$, and a non-constant, $F$, homotopies between two constant maps:
Of course, this is too narrow as any two maps are homotopic under these circumstances: $Y$ is convex. The target space is too simple!
To get around this obviousness, we should
try to imagine that $Y$ is slightly bent, and then
try to build $G$ directly from $F$.
But, really, how do we straighten out these waves?
The fact that this question seems too challenging indicates that the domain space is too complex!
But what is the simplest domain? A single point!
In this case, all homotopies are constant. The domain space is too simple!
What is the next simplest domain? Two points!
Let's make the setup more concrete:
1. Given $f_0,f_1:\{a,b\}\to [c,d]$ (bent),
2. $f_0(a)=f_0(b)=c,\ f_1(a)=f_1(b)=d$;
3. also given a homotopy $F:f_0\simeq f_1$, where $F:[0,1]\times \{a,b\}\to [c,d]$, with $F(t,\cdot)=f_t$.
4. Find a homotopy $G:f_0\simeq f_1$, where $G:[0,1]\times \{a,b\}\to [c,d]$, with $G(0,\cdot)=f_0,\ G(1,\cdot)=f_1$ and $G(t,a)=G(t,b)$ for all $t$.
The setup is as simple as possible. As the picture on the left shows, everything is happening within these two green segments.
Meanwhile, the picture on the right is a possible homotopy $F$ and we can see that it might be non-trivial: within either segment, $d$ is "moved" to $c$ but at a different pace.
The setup is now very simple but the solution isn't obvious yet. That's what we want for our analysis!
Items 1 and 2 tell us that there are just two values, $c$ and $d$, here. Item 3 indicates that there are just two functions, $F(\cdot,a)$ and $F(\cdot,b)$ here, and item 4 asks for just one function, $G(\cdot,a)$ same as $G(\cdot,b)$.
How do we get one function from two?
It is tempting to try to combine them algebraically (e.g., via a convex combination) but remember, $[c,d]$ is bent and there is no algebra.
So, we need to construct $G(\cdot,a)=G(\cdot,b)$ from $F(\cdot,a)$ and $F(\cdot,b)$.
What to do?..
Exercise. Fill the box: $$[\quad\quad\quad\quad\quad]$$
Homotopy equivalence via cell collapses
The idea of homotopy equivalence allows one to try to simplify a topological space before approaching its analysis. This idea seems good on paper but, practically, how does one simplify a given topological space?
A stronger version of the Nerve Theorem, which we accept without proof (see Alexandroff and Hopf Topology) gives us a starting point.
Theorem (Nerve Theorem). Let $K$ be a (finite) simplicial complex and $S$ an open cover of its realization $|K|$. Suppose the finite intersections of the elements of $S$ are contractible. Then the realization of the nerve of $S$ is homotopy equivalent to $|K|$.
The question becomes, how does one simplify a given simplicial complex $K$ while staying within its homotopy equivalence class?
The answer is, step-by-step.
We shrink cells one at a time:
As we see, we have reduced the original complex to one with just seven edges. Its homology is obvious.
Now, more specifically, we remove cell $\sigma$ and one of its boundary cells $a$ at every step, by gradually pulling $\sigma$ toward the closure of $\partial \sigma \setminus a$. Two examples, $\dim \sigma =1$ and $\dim \sigma =2$, are given below:
This step, defined below, is called an elementary collapse of $K$.
Of course, some cells can't be collapsed depending on their place in the complex:
The collapsible $n$-cells are marked in orange, $n=1,2$. As we see, for an $n$-cell $\sigma$ to be collapsible, one of its boundary $(n-1)$-cells, $a$, has to be a "free" cell in the sense that it is not a part of the boundary of any other $n$-cell. The free cells are marked in purple. The rest of the $n$-cells aren't collapsible because each of them is either fully surrounded by other $n$-cells or is in the boundary of an $(n+1)$-cell.
Let's make this construction more precise.
The shrinking is understood as a homotopy equivalence -- of the cell and, if done correctly, of the whole complex. We deform the chosen cell to a part of its boundary and do that in such a way that the rest of the complex remains intact!
The goal is accomplished via a more general type of homotopy than the pointed homotopy.
Definition. Given two maps $f_0, f_1: X \to Y$ and a subset $A \subset X$, the maps are homotopic relative to $A$ if there is a homotopy $F: [0,1]\times X \to Y$ of $f_0, f_1$ which, in addition, is constant on $A$: for all $t\in [0,1]$ we have $$F(t,a) = f_0(a) = f_1(a)$$ for all $a \in A.$ We use the following notation: $$f_0 \simeq f_1 \quad {\rm rel } \ A.$$
Theorem. Suppose $\sigma$ is a geometric $n$-simplex and $a$ is one of its $(n-1)$-faces. Then $\sigma$ is homotopy equivalent to the union of the rest of its faces relative to this union: $$\sigma \simeq \partial \sigma \setminus a \quad {\rm rel } \ \partial \sigma \setminus a.$$
Exercise. Provide a formula for this homotopy equivalence. Hint: use barycentric coordinates.
Definition. Given a simplicial complex $K$, suppose $\sigma$ is one of its $n$-simplices. Then $\sigma$ is called a maximal simplex if it is not a boundary simplex of any $(n+1)$-simplex. A boundary $(n-1)$-simplex $a$ of $\sigma$ is called a free face of $\sigma$ if $a$ is not a boundary simplex of any other $n$-simplex.
In other words, $a$ is a maximal simplex in $K\setminus \sigma$.
Proposition. If $\sigma$ is a maximal simplex in complex $K$ and $a$ is its free face, then $K_1=K\setminus \{\sigma,a\}$ is also a simplicial complex.
This removal of two cells from a simplicial complex is called an elementary collapse and the step is recorded with the following notation: $$K \searrow K_1:=K\setminus \{\sigma, a\}.$$
Corollary. The homotopy equivalence of a realization of a simplicial complex is preserved under elementary collapses.
Definition. A simplicial complex $K$ is called collapsible if there is a sequence of elementary collapses of $K$ that ends at a single vertex $V$: $$K \searrow K_1 \searrow K_2 \ ... \ \searrow K_N =\{V\}.$$
Corollary. The realization of a collapsible complex is contractible.
Exercise. Show that the complexes of the dunce hat and the two-room house are examples of contractible but not collapsible complexes.
Since it is presented in the language of cells, the above analysis equally applies to both simplicial and cubical complexes.
Exercise. Prove the theorem for cubical complexes.
Exercise. Define the "non-elementary" cell collapse that occurs when, for example, a triangle has two free edges and the total of four cells are removed. Repeat the above analysis. Hint: It is the vertex that is free.
Exercise. Define elementary expansions as the "inverses" of elementary collapses and repeat the above analysis.
Exercise. Prove that the torus with a point taken out is homotopy equivalent to the figure eight: ${\bf T}^2 \setminus \{u\} \simeq {\bf S}^1 \vee {\bf S}^1$.
Invariance of homology under cell collapses
The real benefit of this construction comes from our realization that the homology can't change. In fact, later we will show that the homology groups are always preserved under homotopy equivalence. With cell collapses, the conclusion is intuitively plausible since it seems impossible that new topological features could appear or the existing features could disappear.
Let's analyze what we've got. Suppose we have a simplicial complex $K$ and a collapse: $$K \searrow K^1 .$$ We suppose that from $K$ an $n$-cell $\sigma$ and its free face $a$ are removed.
We need to find an isomorphism on the homology groups of these two complexes. We don't want to build it from scratch but instead find a map that induces it. Of course, we choose the inclusion $$i:K^1 \hookrightarrow K.$$ Since this is a simplicial map, it induces a chain map: $$i_{\Delta}:C(K^1)\to C(K),$$ which is also an inclusion. This chain map generates a homology map: $$i_*:H(K^1)\to H(K).$$ We expect it to be an isomorphism.
Exercise. Define the projection $P:C(K)\to C(K^1)$. What homology map does it generate?
For brevity, we denote: $$\partial:=\partial^{K},\ \partial^1:=\partial^{K^1}.$$ This is what the chain complexes and the chain maps look like: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccccccccccccc} ...& \ra{\partial_{n+1}} & C_{n}(K) & \ra{\partial_{n}} & C_{n-1}(K)& \ra{\partial_{n-1}} &...\\ ...& & \ua{i_n} & & \ua{i_{n-1}}& &...\\ ...& \ra{\partial_{n+1}^{1}} & C_{n}(K^1) & \ra{\partial_{n}^{1}} & C_{n-1}(K^1)& \ra{\partial_{n-1}^{1}} &... \end{array} $$ Only the part of this chain map diagram is affected by the presence or absence of these two extra cells. The rest has identical rows and columns: $$C_k(K)=C_k(K^1),\ \forall k \ne n,n-1,$$ $$\partial_{k} = \partial_{k}^1,\ \forall k\ne n+1,n,n-1,$$ $$i_k=Id_{C_k(K^1)},\ \forall k \ne n,n-1.$$
Let's compute the specific values for all elements of the diagram.
These are the chain groups and the chain maps: $$\begin{array}{lll} C_n(K) &=C_n(K^1)\oplus < \sigma >\\ C_{n-1}(K) &=C_{n-1}(K^1)\oplus < a >,\\ i_n &=Id_{C_n(K^1)}\oplus 0,\\ i_{n-1} &=Id_{C_{n-1}(K^1)}\oplus 0. \end{array}$$
Now, we need to express $\partial_k$ in terms of $\partial_k^1$. Then, with the boundary operators given by their matrices, we can handle cycles and boundaries in a purely algebraic way...
For the boundary operator for dimension $n+1$, the matrix's last row is $\partial^{-1} \sigma$ and it is all zero because $\sigma$ is a maximal cell: $$\partial_{n+1}=\partial_{n+1}^1 \oplus 0= \left[ \begin{array}{ccccccccccccccc} \\ \\ & \partial_{n+1}^1 \\ \hline \\ \\ 0,0,&... &,0 \end{array} \right]:C_{n+1}(K^1) \to C_{n}(K^1)\oplus < \sigma >.$$ Then $$\ker \partial _{n+1}=\ker \partial _{n+1}^1 ,\operatorname{Im} \partial _{n+1}=\operatorname{Im} \partial _{n+1}^1 \oplus 0.$$
For the boundary operator for dimension $n$, the matrix's last row is $\partial^{-1} a$ and it has only one $1$ because $a$ is a part of the boundary of only one cell. That cell is $\sigma$ and the last column, which is non-zero, is $\partial \sigma$ with $1$s corresponding to its faces: $$\partial_{n}=\left[ \begin{array}{cccc|c} &&&&\pm 1\\ &&&&\pm 1\\ &&\partial_{n}^1&&0\\ &&&&\vdots\\ &&&&0\\ \hline 0,&0,&... &,0&\pm 1 \end{array} \right]:C_{n}(K^1)\oplus < \sigma > \to C_{n-1}(K^1)\oplus < a >.$$ Then $$\ker \partial _{n}=\ker \partial _{n}^1 \oplus 0 ,\operatorname{Im} \partial _{n}=\operatorname{Im} \partial _{n}^1 \oplus \partial \sigma.$$
For the boundary operator for dimension $n-1$, the matrix's last column is $\partial a$, which is non-zero, with $1$s corresponding to $a$'s faces: $$\partial_{n-1}=\left[ \begin{array}{cccc|c} &&&&\pm 1\\ &&&&\pm 1\\ &&\partial_{n-1}^1&&0\\ &&&&\vdots\\ &&&&0 \end{array} \right]:C_{n-1}(K^1)\oplus < a > \to C_{n-2}(K^1).$$ Then $$\ker \partial _{n-1}=\ker \partial _{n-1}^1 \oplus 0 ,\operatorname{Im} \partial _{n-1}=\operatorname{Im} \partial _{n-1}^1 .$$
Finally, the moment of truth... $$H_{n+1}(K) := \frac{\ker \partial _{n+1}}{\operatorname{Im} \partial _{n+2}} = \frac{\ker \partial _{n+1}^1 }{\operatorname{Im} \partial _{n+2}^1} =: H_{n+1}(K^1);$$ $$H_n(K) := \frac{\ker \partial _{n}}{\operatorname{Im} \partial _{n+1}} = \frac{\ker \partial _{n}^1 \oplus 0}{\operatorname{Im} \partial _{n+1}^1 \oplus 0} \cong \frac{\ker \partial _{n}^1}{\operatorname{Im} \partial _{n+1}^1 }=: H_n(K^1);$$ $$H_{n-1}(K) := \frac{\ker \partial _{n-1}}{\operatorname{Im} \partial _{n}} = \frac{\ker \partial _{n-1}^1 \oplus 0}{\operatorname{Im} \partial _{n}^1} \cong \frac{\ker \partial _{n-1}^1}{\operatorname{Im} \partial _{n}^1 }=: H_{n-1}(K^1).$$ The rest of the homology groups are unaffected by the two extra cells: $$H_{k}(K)\cong H_{k}(K^1),\forall k\ne n,n-1,n-2.$$
We have proven the following.
Theorem. If $K \searrow K^1$ then $$H(K)\cong H(K^1),$$ under the homology map induced by the inclusion $i:K_1\hookrightarrow K$.
Exercise. Provide details for the last part of the proof.
All these lengthy computations were needed to demonstrate something that may seem obvious, that these spaces have the same homology:
Exercise. Suppose $K$ is a $2$-dimensional simplicial complex. Suppose $\sigma$ is a $2$-cell of $K$ that has two free edges, $a,b$, with vertex $A$ between them. Let $K^1:=K\setminus \{\sigma, a,b,A\}$. Use the above approach to prove that $H(K)\cong H(K^1)$.
Exercise. Under what conditions is an elementary collapse a homeomorphism?
Retrieved from "https://calculus123.com/index.php?title=Homotopy&oldid=1260"
|
CommonCrawl
|
More About Chance
The Paradox of the Chevalier De Méré
de Méré observed that getting at least one 6 with 4 throws of a die was more probable than getting double 6's with 24 throws of a pair of dice.
Explain Chevalier de Méré's Paradox when rolling a die
Chevalier de Méré originally thought that rolling a 6 in 4 throws of a die was equiprobable to rolling a pair of 6's in 24 throws of a pair of dice.
In practice, he would win the first bet more than half the time, but lose the second bet more than half the time.
de Méré asked his mathematician friend, Pascal, to help him solve the problem.
The probability of rolling a 6 in 4 throws is [latex]1-(\frac{5}{6})^4[/latex], which turns out to be just over 50%.
The probability of rolling two 6's in 24 throws of a pair of dice is [latex]1-(\frac{35}{36})^{24}[/latex], which turns out to be just under 50%.
independent event: the fact that $A$ occurs does not affect the probability that $B$ occurs
veridical paradox: a situation in which a result appears absurd but is demonstrated to be true nevertheless
equiprobable: having an equal chance of occurring mathematically
Chevalier de Méré
Antoine Gombaud, Chevalier de Méré (1607 – 1684) was a French writer, born in Poitou. Although he was not a nobleman, he adopted the title Chevalier (Knight) for the character in his dialogues who represented his own views (Chevalier de Méré because he was educated at Méré). Later, his friends began calling him by that name.
Méré was an important Salon theorist. Like many 17th century liberal thinkers, he distrusted both hereditary power and democracy. He believed that questions are best resolved in open discussions among witty, fashionable, intelligent people.
He is most well known for his contribution to probability. One of the problems he was interested in was called the problem of points. Suppose two players agree to play a certain number of games — say, a best-of-seven series — and are interrupted before they can finish. How should the stake be divided among them if, say, one has won three games and the other has won one?
Another one of his problems has come to be called "De Méré's Paradox," and it is explained below.
De Mere's Paradox
Which of these two is more probable:
Getting at least one six with four throws of a die or
Getting at least one double six with 24 throws of a pair of dice?
The self-styled Chevalier de Méré believed the two to be equiprobable, based on the following reasoning:
Getting a pair of sixes on a single roll of two dice is the same probability of rolling two sixes on two rolls of one die.
The probability of rolling two sixes on two rolls is [latex]\frac{1}{6}[/latex] as likely as one six in one roll.
To make up for this, a pair of dice should be rolled six times for every one roll of a single die in order to get the same chance of a pair of sixes.
Therefore, rolling a pair of dice six times as often as rolling one die should equal the probabilities.
So, rolling 2 dice 24 times should result in as many double sixes as getting one six with throwing one die four times.
However, when betting on getting two sixes when rolling 24 times, Chevalier de Méré lost consistently. He posed this problem to his friend, mathematician Blaise Pascal, who solved it.
Throwing a die is an experiment with a finite number of equiprobable outcomes. There are 6 sides to a die, so there is [latex]\frac{1}{6}[/latex] probability for a 6 to turn up in 1 throw. That is, there is a [latex](\frac{1}{6}) - (\frac{1}{6}) = \frac{5}{6}[/latex] probability for a 6 not to turn up. When you throw a die 4 times, the probability of a 6 not turning up at all is [latex](1-\frac{1}{6})^4 = (\frac{5}{6})^4[/latex]. So, there is a probability of [latex](\frac{6}{6}) - (\frac{5}{6})^4[/latex] of getting at least one 6 with 4 rolls of a die. If you do the arithmetic, this gives you a probability of approximately 0.5177, or a favorable probability of a 6 appearing in 4 rolls.
Now, when you throw a pair of dice, from the definition of independent events, there is a [latex](\frac{1}{6})^2 = \frac{1}{36}[/latex] probability of a pair of 6's appearing. That is the same as saying the probability for a pair of 6's not showing is [latex]\frac{35}{36}[/latex]. Therefore, there is a probability of [latex](\frac{36}{36}) - (\frac{35}{36})^{24}[/latex] of getting at least one pair of 6's with 24 rolls of a pair of dice. If you do the arithmetic, this gives you a probability of approximately 0.4914, or a favorable probability of a pair of 6's not appearing in 24 rolls.
This is a veridical paradox. Counter-intuitively, the odds are distributed differently from how they would be expected to be.
de Méré's Paradox: de Méréobserved that getting at least one 6 with 4 throws of a die was more probable than getting double 6's with 24 throws of a pair of dice.
Are Real Dice Fair?
A fair die has an equal probability of landing face-up on each number.
Infer how dice act as a random number generator
Regardless of what it is made out of, the angle at which the sides connect, and the spin and speed of the roll, a fair die gives each number an equal probability of landing face-up. Every side must be equal, and every set of sides must be equal.
The result of a die roll is determined by the way it is thrown; they are made random by uncertainty due to factors like movements in the thrower's hand. Thus, they are a type of hardware random number generator.
Precision casino dice have their pips drilled, then filled flush with a paint of the same density as the material used for the dice, such that the center of gravity of the dice is as close to the geometric center as possible.
A loaded, weighted, or crooked die is one that has been tampered with to land with a specific side facing upwards more often than it normally would.
pip: one of the spots or symbols on a playing card, domino, die, etc.
random number: number allotted randomly using suitable generator (electronic machine or as simple "generator" as die)
Platonic solid: any one of the following five polyhedra: the regular tetrahedron, the cube, the regular octahedron, the regular dodecahedron and the regular icosahedron
A die (plural dice) is a small throw-able object with multiple resting positions, used for generating random numbers. This makes dice suitable as gambling devices for games like craps, or for use in non-gambling tabletop games.
An example of a traditional die is a rounded cube, with each of its six faces showing a different number of dots (pips) from one to six. When thrown or rolled, the die comes to rest showing on its upper surface a random integer from one to six, each value being equally likely. A variety of similar devices are also described as dice; such specialized dice may have polyhedral or irregular shapes and may have faces marked with symbols instead of numbers. They may be used to produce results other than one through six. Loaded and crooked dice are designed to favor some results over others for purposes of cheating or amusement.
What Makes Dice Fair?
A fair die is a shape that is labelled so that each side has an equal probability of facing upwards when rolled onto a flat surface, regardless of what it is made out of, the angle at which the sides connect, and the spin and speed of the roll. Every side must be equal, and every set of sides must be equal.
The result of a die roll is determined by the way it is thrown, according to the laws of classical mechanics; they are made random by uncertainty due to factors like movements in the thrower's hand. Thus, they are a type of hardware random number generator. Perhaps to mitigate concerns that the pips on the faces of certain styles of dice cause a small bias, casinos use precision dice with flush markings.
Precision casino dice may have a polished or sand finish, making them transparent or translucent, respectively. Casino dice have their pips drilled, then filled flush with a paint of the same density as the material used for the dice, such that the center of gravity of the dice is as close to the geometric center as possible. All such dice are stamped with a serial number to prevent potential cheaters from substituting a die.
The most common fair die used is the cube, but there are many other types of fair dice. The other four Platonic solids are the most common non-cubical dice; these can make for 4, 8, 12, and 20 faces. The only other common non-cubical die is the 10-sided die.
Platonic Solids as Dice: A Platonic solids set of five dice; tetrahedron (four faces), cube/hexahedron (six faces), octahedron (eight faces), dodecahedron (twelve faces), and icosahedron (twenty faces).
Loaded Dice
A loaded, weighted, or crooked die is one that has been tampered with to land with a specific side facing upwards more often than it normally would. There are several methods for creating loaded dice; these include round and off-square faces and (if not transparent) weights. Tappers have a mercury drop in a reservoir at the center, with a capillary tube leading to another reservoir at a side; the load is activated by tapping the die so that the mercury travels to the side.
Provided by: Proof Wiki. Located at: http://www.proofwiki.org/wiki/De_M%C3%A9r%C3%A9's_Paradox. License: Public Domain: No Known Copyright
Antoine Gombaud. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/Antoine_Gombaud. License: CC BY-SA: Attribution-ShareAlike
Boundless. Provided by: Boundless Learning. Located at: http://www.boundless.com//statistics/definition/veridical-paradox. License: CC BY-SA: Attribution-ShareAlike
Boundless. Provided by: Boundless Learning. Located at: http://www.boundless.com//statistics/definition/independent-event. License: CC BY-SA: Attribution-ShareAlike
equiprobable. Provided by: Wiktionary. Located at: http://en.wiktionary.org/wiki/equiprobable. License: CC BY-SA: Attribution-ShareAlike
6sided dice. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/File:6sided_dice.jpg. License: CC BY-SA: Attribution-ShareAlike
Dice. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/Dice. License: CC BY-SA: Attribution-ShareAlike
pip. Provided by: Wiktionary. Located at: http://en.wiktionary.org/wiki/pip. License: CC BY-SA: Attribution-ShareAlike
random number. Provided by: Wiktionary. Located at: http://en.wiktionary.org/wiki/random_number. License: CC BY-SA: Attribution-ShareAlike
Platonic solid. Provided by: Wiktionary. Located at: http://en.wiktionary.org/wiki/Platonic_solid. License: CC BY-SA: Attribution-ShareAlike
BluePlatonicDice. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/File:BluePlatonicDice.jpg. License: CC BY-SA: Attribution-ShareAlike
|
CommonCrawl
|
Interference pattern and slit
Three questions about interference pattern and slit.
(i) In the 1-slit experiment, an interference pattern is observed if the slit is wide enough (a narrow slit gives a rather blurred interference pattern). In theory, if we widen the slit more and more but keep the source of light narrow (constant narrow source), the interference pattern will remain sharp or get even sharper. In the limiting case, one can take away the barrier altogether and the slit can become infinitely wide while the source of light is kept narrow (constant narrow source). In this limiting case, the interference pattern will still remain or become the sharpest. Has people done an experiment with no barrier and therefore no slit but with a very narrow source of light to see what interference pattern emerges?
(ii) According to http://web.mit.edu/viz/EM/visualizations/coursenotes/modules/guide14.pdf, in the 2-slit experiment, the distance of the first minimum (with m=0) from the centre of the central fringe is inversely proportional to the distance between the two slits, i.e., d in eqn. 14.2.9. Using this expression, if we let d decreases towards zero which amounts to the 2-slit experiment becoming a wide 1-slit experiment, the first minimum will be infinitely far away from the central fringe. That would mean that effectively there will be no first minimum and there will be no interference pattern for this 1-slit experiment. But this is contrary to what is observed where we find 1-slit experiment still produces interference pattern. Hence, we have to conclude that there is something wrong with this expression for the position of the first minimum and other minimums. Can someone provide us with a better formula than the one given in eqn. 14.2.9? For that matter, the expression for the maximums in eqn. 14.2.8 also looks suspect.
(iii) According to eqn. 14.2.9, the first minimum and other minimums depend linearly on the distance between the slits and the detecting screen (L). I wonder if people have done experiment by varying L and keep everything else the same, and have observed this linear dependence on L. Since the expression in 14.2.9 seems to be suspect by the argument in (ii), this dependence on L may also be suspect.For that matter, the dependence on L in the expression for the maximums in eqn. 14.2.8 also looks suspect.
quantum-mechanics double-slit-experiment interference
DamonDamon
$\begingroup$ Take a look at Fraunhofer diffraction: [en.wikipedia.org/wiki/Fraunhofer_diffraction] $\endgroup$
– S. McGrew
I think that the formalism of Fourier transforms and Fraunhofer diffraction will be able to answer all your questions. Indeed, when we place ourselves in the case where the distance $L$ between the slit of size $d$ and the screen is very large ($L >> d/\lambda$, where $\lambda$ is the wavelength of the light source), we know that the intensity profile $A(x,y)$ on the screen is related to the Fourier transform $\mathcal{F}$ of the intensity profile $a(x,y)$ at the slit, to be exact $A(x,y)=\mathcal{F}(\frac{kx}{2},\frac{ky}{2})$, where $k=\frac{2 \pi}{\lambda}$ is the wave number. This is the Fraunhofer diffraction.
In the case of the 1-slit experiment, if the slit is infinitely wide in $y$ and of size $d$ in $x$, $a(x,y)$ is a rectangular window function i.e. $a(x,y)=1$ if $x<d/2$ and $a(x,y)=0$ elsewhere. The Fourier transform of a rectangular function of size $d$ is given by $d \frac{\sin(\alpha d)}{\alpha d}$, which give the correct single-slit profile (sine cardinal function). When you take a function and its Fourier transform, a wider function gives a tiner Fourier transform and vice-versa, it explains why it becomes narrow when you increase the size of the slit.
The double slit experiment is quite similar, but the rectangular function is the sum of two rectangular function, if you do the Fourier transform you will observ that the solution is the single slit envelope and some interference pattern. If you decrease the size between the slit, the interference will disapear and the result will be the same as single slit experiment.
A. ReaganA. Reagan
$\begingroup$ But my objection in (ii) has not been resolved by this answer. $\endgroup$
– Damon
Not the answer you're looking for? Browse other questions tagged quantum-mechanics double-slit-experiment interference or ask your own question.
If only one slit is observed in the Double Slit experiment, will the unobserved slit produce an interference pattern?
Double-slit experiment: Difference between observing photon path and interference pattern?
Where does the interference pattern or diffraction pattern due to a single or double slit placed in front of a light source form?
What is the physical reasoning behind the mathematical derivation of double-slit interference pattern and single-slit diffraction pattern?
Diffraction pattern vs Interference pattern
Why is there interference in double slit experiment when there are many out of phase photons concurrently?
Double Slit Interference pattern - horizontal or vertical?
|
CommonCrawl
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.